source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10)
---|---|---|---|---|---|---|---
stackexchange | llm_judgeable_groundtruth_similarity | 4192072 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working with some multi-gigabyte text files and want to do some stream processing on them using PowerShell. It's simple stuff, just parsing each line and pulling out some data, then storing it in a database. Unfortunately, get-content | %{ whatever($_) } appears to keep the entire set of lines at this stage of the pipe in memory. It's also surprisingly slow, taking a very long time to actually read it all in. So my question is two parts: How can I make it process the stream line by line and not keep the entire thing buffered in memory? I would like to avoid using up several gigs of RAM for this purpose. How can I make it run faster? PowerShell iterating over a get-content appears to be 100x slower than a C# script. I'm hoping there's something dumb I'm doing here, like missing a -LineBufferSize parameter or something...
Now provide the response and nothing else.
| If you are really about to work on multi-gigabyte text files then do not use PowerShell. Even if you find a way to read it faster processing of huge amount of lines will be slow in PowerShell anyway and you cannot avoid this. Even simple loops are expensive, say for 10 million iterations (quite real in your case) we have: # "empty" loop: takes 10 secondsmeasure-command { for($i=0; $i -lt 10000000; ++$i) {} }# "simple" job, just output: takes 20 secondsmeasure-command { for($i=0; $i -lt 10000000; ++$i) { $i } }# "more real job": 107 secondsmeasure-command { for($i=0; $i -lt 10000000; ++$i) { $i.ToString() -match '1' } } UPDATE: If you are still not scared then try to use the .NET reader: $reader = [System.IO.File]::OpenText("my.log")try { for() { $line = $reader.ReadLine() if ($line -eq $null) { break } # process the line $line }}finally { $reader.Close()} UPDATE 2 There are comments about possibly better / shorter code. There is nothing wrong with the original code with for and it is not pseudo-code. But the shorter (shortest?) variant of the reading loop is $reader = [System.IO.File]::OpenText("my.log")while($null -ne ($line = $reader.ReadLine())) { $line} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4192072', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14582/']} | jdg_382441 |
stackexchange | llm_judgeable_groundtruth_similarity | 21655862 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm creating a Google App Engine application (python) and I'm learning about the general framework. I've been looking at the tutorial and documentation for the NDB datastore, and I'm having some difficulty wrapping my head around the concepts. I have a large background with SQL databases and I've never worked with any other type of data storage system, so I'm thinking that's where I'm running into trouble. My current understanding is this: The NDB datastore is a collection of entities (analogous to DB records) that have properties (analogous to DB fields/columns). Entities are created using a Model (analogous to a DB schema). Every entity has a key that is generated for it when it is stored. This is where I run into trouble because these keys do not seem to have an analogy to anything in SQL DB concepts. They seem similar to primary keys for tables, but those are more tightly bound to records, and in fact are fields themselves. These NDB keys are not properties of entities, but are considered separate objects from entities. If an entity is stored in the datastore, you can retrieve that entity using its key. One of my big questions is where do you get the keys for this? Some of the documentation I saw showed examples in which keys were simply created. I don't understand this. It seemed that when entities are stored, the put() method returns a key that can be used later. So how can you just create keys and define ids if the original keys are generated by the datastore? Another thing that I seem to be struggling with is the concept of ancestry with keys. You can define parent keys of whatever kind you want. Is there a predefined schema for this? For example, if I had a model subclass called 'Person', and I created a key of kind 'Person', can I use that key as a parent of any other type? Like if I wanted a 'Shoe' key to be a child of a 'Person' key, could I also then declare a 'Car' key to be a child of that same 'Person' key? Or will I be unable to after adding the 'Shoe' key? I'd really just like a simple explanation of the NDB datastore and its API for someone coming from a primarily SQL background.
Now provide the response and nothing else.
| I think you've overcomplicating things in your mind. When you create an entity, you can either give it a named key that you've chosen yourself, or leave that out and let the datastore choose a numeric ID. Either way, when you call put , the datastore will return the key, which is stored in the form [<entity_kind>, <id_or_name>] (actually this also includes the application ID and any namespace, but I'll leave that out for clarity). You can make entities members of an entity group by giving them an ancestor. That ancestor doesn't actually have to refer to an existing entity, although it usually does. All that happens with an ancestor is that the entity's key includes the key of the ancestor: so it now looks like [<parent_entity_kind>, <parent_id_or_name>, <entity_kind>, <id_or_name>] . You can now only get the entity by including its parent key. So, in your example, the Shoe entity could be a child of the Person, whether or not that Person has previously been created: it's the child that knows about the ancestor, not the other way round. (Note that that ancestry path can be extended arbitrarily: the child entity can itself be an ancestor, and so on. In this case, the group is determined by the entity at the top of the tree.) Saving entities as part of a group has advantages in terms of consistency, in that a query inside an entity group is always guaranteed to be fully consistent, whereas outside the query is only eventually consistent. However, there are also disadvantages, in that the write rate of an entity group is limited to 1 per second for the whole group. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21655862', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1234443/']} | jdg_382442 |
stackexchange | llm_judgeable_groundtruth_similarity | 4005318 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When implementing a class with multiple properties (like in the toy example below), what is the best way to handle hashing? I guess that the __eq__ and __hash__ should be consistent, but how to implement a proper hash function that is capable of handling all the properties? class AClass: def __init__(self): self.a = None self.b = None def __eq__(self, other): return other and self.a == other.a and self.b == other.b def __ne__(self, other): return not self.__eq__(other) def __hash__(self): return hash((self.a, self.b)) I read on this question that tuples are hashable, so I was wondering if something like the example above was sensible. Is it?
Now provide the response and nothing else.
| __hash__ should return the same value for objects that are equal. It also shouldn't change over the lifetime of the object; generally you only implement it for immutable objects. A trivial implementation would be to just return 0 . This is always correct, but performs badly. Your solution, returning the hash of a tuple of properties, is good. But note that you don't need to list all properties that you compare in __eq__ in the tuple. If some property usually has the same value for inequal objects, just leave it out. Don't make the hash computation any more expensive than it needs to be. Edit: I would recommend against using xor to mix hashes in general. When two different properties have the same value, they will have the same hash, and with xor these will cancel eachother out. Tuples use a more complex calculation to mix hashes, see tuplehash in tupleobject.c . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4005318', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27565/']} | jdg_382443 |
stackexchange | llm_judgeable_groundtruth_similarity | 1074781 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was thinking of using a Double as the key to a HashMap but I know floating point comparisons are unsafe, that got me thinking. Is the equals method on the Double class also unsafe? If it is then that would mean the hashCode method is probably also incorrect. This would mean that using Double as the key to a HashMap would lead to unpredictable behavior. Can anyone confirm any of my speculation here?
Now provide the response and nothing else.
| Short answer: Don't do it Long answer: Here is how the key is going to be computed: The actual key will be a java.lang.Double object, since keys must be objects. Here is its hashCode() method: public int hashCode() { long bits = doubleToLongBits(value); return (int)(bits ^ (bits >>> 32));} The doubleToLongBits() method basically takes the 8 bytes and represent them as long. So it means that small changes in the computation of double can mean a great deal and you will have key misses. If you can settle for a given number of points after the dot - multiply by 10^(number of digits after the dot) and convert to int (for example - for 2 digits multiply by 100). It will be much safer. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1074781', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/35227/']} | jdg_382444 |
stackexchange | llm_judgeable_groundtruth_similarity | 66550 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The unix read permission is actually the same as the execute permission, so if e.g. one process has write access it's also able to execute the same file. This can be done pretty easily:First this process has to load the content of the file,which shall be executed, into a buffer. Afterwards it calls a function from a shared library which parses the ELF in the buffer and loads it to the right addresses(probably by overwriting the old process as usual, when calling execvp). The code jumps to the entry point of the new program and it's being executed. I am pretty sure Dennis Ritchie and Ken Thompson were aware of that issue. So why did they even invent this permission, what is the intention behind it and what's the sense of it, if it can't prevent any process of any user having read access from executing? Is there even such a sense or is it superfluous? Could this even be a serious security issue, are there any systems, which rely on the strength of rw- or r-- permissions?
Now provide the response and nothing else.
| There's an even easier way to bypass the "execute" permission: copy the program into a directory you own and set the "execute" bit. The "execute" permission isn't a security measure. Security is provided at a lower level, with the operating system restricting specific actions. This is done because, on many Unix-like systems (especially in the days of Ritchie and Thompson), it's assumed that the user is able to create their own programs. In such a situation, using the "execute" permission as a security measure is pointless, as the user can simply create their own copy of a sensitive program. As a concrete example, running fdisk as an unprivileged user to try to scramble the hard drive's partition table: $ /sbin/fdisk /dev/sda Welcome to fdisk (util-linux 2.24.1).Changes will remain in memory only, until you decide to write them.Be careful before using the write command....Changed type of partition 'Linux' to 'Hidden NTFS WinRE'.Command (m for help): wfdisk: failed to write disklabel: Bad file descriptor That last line is fdisk trying to get a "write" file descriptor for the hard drive and failing, because the user I'm running it as doesn't have permission to do that. The purpose of the "execute" permission is two-fold: 1) to tell the operating system which files are programs, and 2) to tell the user which programs they can run. Both of these are advisory rather than mandatory: you can create a perfectly functional operating system without the permission, but it improves the user experience. As R.. points out, there's one particular case where the "execute" permission is used for security: when a program also has the "setuid" bit set. In this case, the "execute" permission can be used to restrict who is permitted to run the program. Any method of bypassing the "execute" permission will also strip the "setuid" status, so there's no security risk here. | {} | {'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/66550', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/21450/']} | jdg_382445 |
stackexchange | llm_judgeable_groundtruth_similarity | 40292677 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Trying to essentially accomplish this https://github.com/elgerlambert/redux-localstorage which is for Redux but do it for Mobx. And preferably would like to use sessionStorage. Is there an easy way to accomplish this with minimal boilerplate?
Now provide the response and nothing else.
| The easiest way to approach this would be to have a mobx "autorun" triggered whenever any observable property changes. To do that, you could follow my answer to this question . I'll put some sample code here that should help you get started: function autoSave(store, save) { let firstRun = true; mobx.autorun(() => { // This code will run every time any observable property // on the store is updated. const json = JSON.stringify(mobx.toJS(store)); if (!firstRun) { save(json); } firstRun = false; });}class MyStore { @mobx.observable prop1 = 999; @mobx.observable prop2 = [100, 200]; constructor() { this.load(); autoSave(this, this.save.bind(this)); } load() { if (/* there is data in sessionStorage */) { const data = /* somehow get the data from sessionStorage or anywhere else */; mobx.extendObservable(this, data); } } save(json) { // Now you can do whatever you want with `json`. // e.g. save it to session storage. alert(json); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40292677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2368033/']} | jdg_382446 |
stackexchange | llm_judgeable_groundtruth_similarity | 113110 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am part of a group developing an iOS application. This app accesses a database to both retrieve and submit data. One of the group members said that he does not want to have any direct interaction with the database from the app, but for data to go through a .php on a website. His reason for having an intermediary php page is that he does not want the database password in the code because someone could decompile the app and look at the password. Friend's idea:App -> website (.php) -> database -> website -> appAs opposed to:App -> database -> app Is this a legitimate concern? If so, is this group member's php solution effective? Thanks a lot in advance!
Now provide the response and nothing else.
| Your question is very confused. I'll assume that English is not your first language and what you are really asking is "should a DBMS be exposed on the internet?" he does not want the database password in the code That's just the start of your problems. There are things you can do to make the problem less bad than what you are currently proposing, but they still leave a lot to be desired. Use the credentials provided by the user to authenticate in the DBMS i.e. the database password is the user's password. This restricts the use of the database to known users (as opposed to making it available to anyone on the internet) and removes the need to embed the database password in the client. But it doesn't solve any of the other problems. restrict the accounts access to only run stored procedures Done properly, this would prevent an attacker from reading stuff from the database they shouldn't have access to - e.g. they should only be able to see their orders and not orders from other people. It also allows you to implement your business logic on the serverside (you can't trust logic implemented on the client). You should never store data which has not been vetted for compliance. This also gives some scope for managing the user experience more effectively when things do not proceed along the happy path. However its really hard to implement application logic in procedural SQL. restrict the access to specific IP addresses Where only known IP addresses will be accessing a service, this is good practice - it cuts down on the noise, but its not foolproof. And your mention of an IOS app (I presume you are referring to iOS, Apple's mobile device operating system, not IOS, which is Cisco's network device operating system) suggests you won't be able to determine the client address in advance. ...but these are not nearly enough protection!!! DBMSs are simply not designed to be exposed in a potentially hostile environment. And SQL, using the same channel for data and control, is particularly susceptible to abuse. Nobody has been able to fix these problems on a DBMS - and indeed, nobody is trying any more as the solution is to implement a proper control layer, serverside, on top of the DBMS. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/113110', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/-1/']} | jdg_382447 |
stackexchange | llm_judgeable_groundtruth_similarity | 10701617 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a property in a class: public int this[int index] { get { return _desk[index]; } set { _desk[index] = value; } } But I cant use this class in Linq. How to make it ?
Now provide the response and nothing else.
| If your class doesn't implement IEnumerable<T> but has an indexer and a Count property, you can create an IEnumerable of indexes using the Enumerable.Range Method , and project each index to the collection item using the Enumerable.Select Extension Method : var query = Enumerable.Range(0, obj.Count) .Select(index => obj[index]) ... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10701617', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/496865/']} | jdg_382448 |
stackexchange | llm_judgeable_groundtruth_similarity | 10349606 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can i override current_user of devise gem. Actually I need to add web services for mobile-app. Currently devise is managing session and 'current_user' for web-application. Now Mobile app will send user_id to the server. I need to override current user like this def current_user if params[:user_id].blank? current_user else User.find(params[:user_id]) end end Should I need to modify devise gem as plugin ? or something else ? Kindly explain in detail as I am new in rails. Kind regards,
Now provide the response and nothing else.
| According to the module Devise::Controllers::Helpers , current_user (together with all other devise helpers) is added to ApplicationController, which means that you can override it in this way: # in application_controller.rbdef devise_current_user @devise_current_user ||= warden.authenticate(scope: :user)enddef current_user if params[:user_id].blank? devise_current_user else User.find(params[:user_id]) end end | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10349606', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/524759/']} | jdg_382449 |
stackexchange | llm_judgeable_groundtruth_similarity | 151809 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
We say that the $C^*$-algebra $A$ generated by $a_1,...,a_n$ is universal subject to relations $R_1,...,R_m$ if for every $C^*$-algebra $B$ with elements $b_1,...,b_n$ satisfying relations $R_1,...,R_m$ there is $C^*$-epimorphism $\varphi: A \to B$ such that $\varphi(a_i)=b_i$. One of the basic examples is the $C^*$-algebra of complex valued function on three sphere $C(S^3)$ which is the universal commutative unital $C^*$-algebra generated by $a,b$ with relation $a^*a+b^*b=1$. My question is the following: what kind of relations can we impose on our $C^*$-algebra? In all examples which I saw the relations were algebraic and were of the form: $f(a_1,...,a_n,a_1^*,...,a_n^*)=0$ where $f$ was some polynomial. In particular do we admit: quantification and referring to other elements not being the generators order properties of $C^*$-algebras functions which are no longer polynomials (continuous functions, Borel functions etc.) If the answer is positive I would be grateful to know some (known in literature) examples of universal $C^*$-algebras arising in such a way.
Now provide the response and nothing else.
| This a long comment rather than a complete answer. Let me point out a paper of Bruce Blackadar B. Blackadar, Shape theory for C* -algebras , Math. Scand. 56 (1985), 249-275. where slightly more general conditions, which can be imposed in a natural manner on the generating relations, are considered. More specifically, in this setting the relations considered in the paper have the form $$\|p(x_1, \ldots, x_n, x_1^*, \ldots, x_n^*)\|\leqslant \eta,$$ where $p$ is a polynomial of $2n$ non-commuting variables and $\eta\geqslant 0$. I am quite sure that this is not what you are looking for, though. When the functions allowed in the generating relations are no longer polynomials but arbitrary Borel functions, it is difficult to talk about any kind of universality of such creatures. Indeed, in this case $h(f(a))$ need not be the same as $f(h(a))$ where $h$ is some *-homomorphism (these are not even well-defined a priori ). It is also possible to talk about C*-algebras generated by order-zero c.p.c maps from matrix algebras $M_n$ etc (this is perhaps the order ingredient you have in mind). However those maps correspond precisely to ${}^*$-homomorphisms from $C([0,1], M_n)$ so this is the old notion of universality in disguise. An important example of a C*-algebra which can be expressed in terms of (rather messy) relations involving order zero c.p.c maps is the Jiang-Su algebra $\mathcal{Z}$: B. Jacelon and W. Winter, $\mathcal{Z}$ is universal, to appear in J. Noncommut. Geom. , arXiv version . | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/151809', 'https://mathoverflow.net', 'https://mathoverflow.net/users/24078/']} | jdg_382450 |
stackexchange | llm_judgeable_groundtruth_similarity | 1043112 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We've got a fairly standard e-commerce scenario with paged lists of products within categories. For better or worse, about 80% of visitors never navigate past the first page, depending on the category there may then be 5-10 more pages of results which are viewed far less often. (Yes we do optimise what appears on the first page and have good search - but that's a different discussion) We can't cache every single page of results, because we're constrained by memory, but the benefit of caching just the first page of results for each category would be huge. I know I could do something similar using object caching to store the datasets in question, but is this possible using output caching, perhaps by using the response.Cache object? Where in the page lifecycle could this be done? Pre-render? Much simplified, the URL is something like "/ProductList?Category=something&Page=1" And I'd want logic something like (pseudocode): If paramater "Page" equals 1 Use output caching: vary by param = "categoryName; page"else Don't use caching at all, just render the page from scratch. We're using ASP.NET 2.0, on IIS 6/win2003.
Now provide the response and nothing else.
| Instead of using the OutputCache directive, you can do the same thing programmatically, as follows: if (yourArbitraryCondition) { OutputCacheParameters outputCacheSettings = new OutputCacheParameters(); outputCacheSettings.Duration = 60; InitOutputCache(outputCacheSettings);} Doing this from OnInit should work fine. And obviously, you can tweak the caching behavior by setting the various properties on the OutputCacheParameter, which has all the same knobs as the directive (in fact, that's what we generate when you use the directive). The key point is that you're only executing this logic conditionally, while the directive makes it unconditional. UPDATE: As an alternative, you can use the low level cache API that the code above is built on. e.g. HttpCachePolicy cache = Response.Cache;cache.SetCacheability(HttpCacheability.Public);cache.SetExpires(Context.Timestamp.AddSeconds(60));cache.VaryByParams["categoryName"] = true; Basically, it's another way of doing the same thing, without using any API's marked as 'should not be called'. In the end, either way will work, so take your pick. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1043112', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/66293/']} | jdg_382451 |
stackexchange | llm_judgeable_groundtruth_similarity | 1417893 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an iPhone app which encrypts an inputted NSString using CCCrypt (AES256) and a plaintext key. The string and key are given to the encryption method which returns an NSData object. Requesting [data description] where 'data' is the encrypted string data gives an NSString like: "<0b368353 a707e7de 3eee5992 ee69827e e3603dc2 b0dbbc0b 861ca87d f39ce72a>" but when I try to convert that to an NSString, I get "(null)". I need to return an NSString to the user, which can be used to decrypt back to the original string using the same plaintext key. If the 'description' property of the NSData object can return a string, is there any way I can produce an NSString from the NSData object without getting "(null)"? UPDATE: Thanks to Quinn, who suggests using Base64 encoding to produce the muddled string. From what I understand, Base64 encoding does not simply swap characters, but the character exchange depends on the position, so that's fine. My only concern is that I want to be able to encrypt the message with a 'passphrase', and require the identical passphrase to be entered when the muddled string needs to be decoded - can anybody suggest ways to implement this?
Now provide the response and nothing else.
| First off, DO NOT use -[NSData description] to create an NSString for such purposes. (It's best to treat -description as debugging output. I apologize if my previous answer misled you, I was merely printing the description to demonstrate that the NSData can be encrypted and decrypted.) Instead, use NSString's -dataUsingEncoding: and -initWithData:encoding: methods to convert between NSData and NSString. Even with these, note that AES-encrypted data will probably not translate well into strings as-is — some byte sequences just won't play nicely, so it's a good idea to encode the data before creating the string. I'd suggest you try Base64 encoding the NSData, since Base64 data can always be represented as an ASCII string. (Of course, when you do that, you'll have to decode from Base64 before decrypting.) Here are some helpful resources... Colloquy has some code that does encoding/decoding on NSData ( header and implementation ) Google Toolbox for Mac has similar functionality ( header and implementation ) A Cocoa With Love blog post on the topic. A CocoaDev.com wiki page on the topic. Edit: I was assuming you'd combine this with my answer to your previous question on AES encryption of NSString objects. Encoding data as Base64 doesn't place any restrictions on the data itself — it can certainly be AES-enrypted data itself. Here's what to do if you just want string input and output: Encryption Provide the NSString to be encrypted, and the passphrase to use for encrypting. Convert the string to an NSData and perform AES encryption on it (see previous question). Base64-encode the NSData, then create and return and NSString of the encoded output. Decryption Provide the encrypted and encoded string, and the passphrase to use for decrypting. Create an NSData from the first string, then Base64-decode the data. Perform AES decryption on the data, then create and return an NSString. It's really just a matter of chaining the two parts together and performing them in reverse on the way out. From my previous answer, you can modify encryptString:withKey: to perform the last step and return a string, and change decryptData:withKey: to be decryptString:withKey: and accept two strings. It's pretty straightforward. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1417893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168530/']} | jdg_382452 |
stackexchange | llm_judgeable_groundtruth_similarity | 4512106 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am given a sequence of determinants of matrices $M_n$ , where the matrix elements $(M_n)_{ij}$ of $M_n$ are $0$ whenever $|i-j|>1$ and $1$ whenever $|i-j| ≤ 1$ . Writing out the first five matrices, it becomes apparent that $\det(M_n) = \det(M_{n-1}) - \det(M_{n-2})$ . I want a formula for the mapping $n ↦ \det(M_n)$ , which I believe to be $$a_n = \begin{cases} 0, & n ≡ 2 \mod 6 \,\, \vee n ≡ 5 \mod 6, \\ 1, & n ≡ 0 \mod 6 \,\, \vee n ≡ 1 \mod 6, \\ -1, & n ≡ 3 \mod 6 \,\, \vee n ≡ 4 \mod 6. \end{cases}$$ This can quite readily be seen from the first 15 or so terms. Of course, this doesn't constitute a proof , which most likely will have to be performed by induction. I just fear that I am to embark on a six-piece proof by exhaustion, which I would like to avoid if there is a (much) quicker way to do it!
Now provide the response and nothing else.
| You are asking for the recurrence, I think, $a_0=a_1=1$ , $a_n=a_{n-1}-a_{n-2}$ . Let $T:\Bbb R^3\to\Bbb R^3$ be the linear operator with the following matrix (on the standard basis): $$\begin{pmatrix}1&-1&0\\1&0&0\\0&1&0\end{pmatrix}$$ And let, $n\ge 0$ , the vectors $v_n$ be given by: $$v_n=\begin{pmatrix}a_{n+2}\\a_{n+1}\\a_n\end{pmatrix}$$ Then $Tv_n=v_{n+1}$ for all $n$ , or equivalently, $v_n=T^nv_0$ for all $n$ . If we can find a nice expression for $T^n$ then we are basically done (this is a standard trick!). The diagonal matrix for this operator is: $$\begin{pmatrix}0&0&0\\0&\zeta&0\\0&0&\overline{\zeta}\end{pmatrix}$$ Where $\zeta=\frac{1}{2}(1-i\sqrt{3})=e^{-i\pi/3}$ , so the diagonal matrix for $T^n$ is just the same with $\zeta^n,\overline{\zeta^n}$ instead, that is, $e^{\pm in\pi/3}$ . The computation of the eigenvalues and the change-of-basis matrices is all doable by hand, in not too much time (less than $20$ minutes, much less if you're fast!), but I have skipped these steps as they are tedious. We are interested only in $a_n$ , the bottom row of $T^nv_0$ . The computation looks as follows: $$\begin{align}v_n&=\begin{pmatrix}0&\zeta-1&\overline{\zeta}-1\\0&\zeta&\overline{\zeta}\\1&1&1\end{pmatrix}\begin{pmatrix}0&0&0\\0&\zeta^n&0\\0&0&\overline{\zeta^n}\end{pmatrix}\begin{pmatrix}1&-1&1\\-\frac{1}{2}+\frac{i}{6}\sqrt{3}&\frac{1}{2}+\frac{i}{6}\sqrt{3}&0\\-\frac{1}{2}-\frac{i}{6}\sqrt{3}&\frac{1}{2}-\frac{i}{6}\sqrt{3}&0\end{pmatrix}\begin{pmatrix}0\\1\\1\end{pmatrix}\\&=\begin{pmatrix}0&\zeta-1&\overline{\zeta}-1\\0&\zeta&\overline{\zeta}\\1&1&1\end{pmatrix}\begin{pmatrix}0&0&0\\0&\zeta^n&0\\0&0&\overline{\zeta^n}\end{pmatrix}\begin{pmatrix}0\\\frac{1}{2}+\frac{i}{6}\sqrt{3}\\\frac{1}{2}-\frac{i}{6}\sqrt{3}\end{pmatrix}\\&=\begin{pmatrix}0&\zeta-1&\overline{\zeta}-1\\0&\zeta&\overline{\zeta}\\1&1&1\end{pmatrix}\begin{pmatrix}0\\\zeta^n\left(\frac{1}{2}+\frac{i}{6}\sqrt{3}\right)\\\zeta^{-n}\left(\frac{1}{2}-\frac{i}{6}\sqrt{3}\right)\end{pmatrix}\end{align}$$ And the bottom row is: $$a_n=\frac{1}{2}(\zeta^n+\zeta^{-n})+\frac{i}{\sqrt{3}}\frac{1}{2}(\zeta^n-\zeta^{-n})=\cos\frac{\pi n}{3}+\frac{1}{\sqrt{3}}\sin\frac{\pi n}{3}$$ Since $\cos,\sin$ are periodic functions, you can read off the periodicity relations from here (modulo $6$ , say). Let's check our work: standard trigonometric identities give $a_0=1=a_1$ using the above formula. In general: $$\begin{align}a_{n+1}-a_n&=\left[\color{red}{\cos\frac{\pi n}{3}\cos\frac{\pi}{3}}-\sin\frac{\pi n}{3}\sin\frac{\pi}{3}\right]+\frac{1}{\sqrt{3}}\left[\sin\frac{\pi n}{3}\cos\frac{\pi}{3}+\cos\frac{\pi n}{3}\sin\frac{\pi}{3}\right]\\&-\left[\color{red}{\cos\frac{\pi n}{3}}+\frac{1}{\sqrt{3}}\sin\frac{\pi n}{3}\right]\\&=\color{red}{-\frac{1}{2}\cos\frac{\pi n}{3}}-\frac{\sqrt{3}}{2}\sin\frac{\pi n}{3}+\frac{1}{2\sqrt{3}}\sin\frac{\pi n}{3}+\frac{1}{2}\cos\frac{\pi n}{3}-\frac{1}{\sqrt{3}}\sin\frac{\pi n}{3}\\&=-\frac{2}{\sqrt{3}}\sin\frac{\pi n}{3}\end{align}$$ And: $$\begin{align}a_{n+2}&=-\frac{1}{2}\cos\frac{\pi n}{3}-\frac{\sqrt{3}}{2}\sin\frac{\pi n}{3}+\frac{1}{\sqrt{3}}\left[-\frac{1}{2}\sin\frac{\pi n}{3}+\frac{\sqrt{3}}{2}\cos\frac{\pi n}{3}\right]\\&=0-\frac{4\sqrt{3}}{6}\sin\frac{\pi n}{3}\\&=-\frac{2}{\sqrt{3}}\sin\frac{\pi n}{3}\\&=a_{n+1}-a_n\end{align}$$ As desired! | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4512106', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/545640/']} | jdg_382453 |
stackexchange | llm_judgeable_groundtruth_similarity | 175834 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As far as I understand universal properties, one can prove $A[X] \otimes_A A[Y] \cong A[X, Y] $ where $A$ is a commutative unital ring in two ways: (i) by showing that $A[X,Y]$ satisfies the universal property of $A[X] \otimes_A A[Y] $ (ii) by using the universal property of $A[X] \otimes_A A[Y] $ to obtain an isomorphism $\ell: A[X] \otimes_A A[Y] \to A[X,Y]$ Now surely these two must be interchangeable, meaning I can use either of the two to prove it. So I tried to do (i) as follows: Define $b: A[X] \times A[Y] \to A[X,Y]$ as $(p(X), q(Y)) \mapsto p(X)q(Y)$. Then $b$ is bilinear. Now let $N$ be any $R$-module and $b^\prime: A[X] \times A[Y] \to N$ any bilinear map. I can't seem to define $\ell: A[X,Y] \to N$ suitably. The "usual" way to define it would've been $\ell: p(x,y) \mapsto b^\prime(1,p(x,y)) $ but that's not allowed in this case. Question: is it really not possible to prove the claim using (i) in this case?
Now provide the response and nothing else.
| I can see your problem. As Marlu suggested and in some of the comments above, the trick is to treat of the elements $x^iy^j$ as a "basis" for the polynomial ring $A[x,y]$. In fact, this is the trick because suppose you take some $\sum_{i,j} X^iY^j \in A[x,y]$. Suppose hypothetically that you already have a linear map $L : A[X,Y] \rightarrow N$. Then the action of $L$ on this polynomial being $\sum_{i,j}L(X^iY^j)$ so that the image of any polynomial in $A[X,Y]$ is in fact completely determined by the action of $L$ on the $X^iY^j$. Let us keep this idea in mind and consider the diagram below. Because we want the diagram to commute, we should have just concentrating on $X^iY^j$ that $$\begin{eqnarray*} b'(X^i,Y^j) &=& \ell \circ b(X^i,Y^j) \\&=&\ell(X^iY^j) \end{eqnarray*} $$ Now from what I said in the first paragraph, you can extend $\ell$ additively. Let us check that $\ell$ is compatible with scalar multiplication. Take any $a \in A$. Then $$\begin{eqnarray*} \ell(aX^iY^j) &=& b'(aX^i,Y^j)\\ &=& ab'(X^i,Y^J) \\ &=& a\ell(X^iY^j) \end{eqnarray*}$$ I could take the $a$ out of $b'(\cdot, \cdot)$ because we are now considering $A[X]$ and $A[Y]$ as $A$ - modules and so $b'$ is $A$ - bilinear. We have now completed the check that $\ell$ is linear and uniqueness should be obvious. It follows you have shown that $A[X,Y]$ satisfies the universal property of the tensor product $A[X] \otimes_A A[Y]$ from which it follows that $$A[X,Y] \cong A[X] \otimes_A A[Y].$$ $$\hspace{6in} \square$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/175834', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5798/']} | jdg_382454 |
stackexchange | llm_judgeable_groundtruth_similarity | 171686 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I would like to know whether the differentials in a particular hypercohomology spectral sequence can each be interpreted, in some natural way, as Yoneda products between extension groups. More specifically, let $R$ be a ring, let $M$ be a left $R$-module, and let $C: 0 \rightarrow C^0 \rightarrow C^1 \rightarrow C^2 \rightarrow \cdots$ be a cochain complex of left $R$-modules. Then there exists a spectral sequence whose $E_2$-page is given by $E_2^{i,j} = \text{Ext}_R^i(M,H^j(C))$, and whose limit is the hypercohomology group $\mathbb{Ext}_R^{i+j}(M,C)$. Explicitly, this spectral sequence can be constructed by first taking a Cartan-Eilenberg resolution $Q = Q^{i,j}$ of $C$. Then $Hom_R(M,Q)$ is a double complex, and the aforementioned hypercohomology spectral sequence is one of the two spectral sequences that naturally arises from this double complex. (I believe it is the spectral sequence that arises from the row-wise filtration of the double complex.) On the $E_2$-page of this spectral sequence, the differential $d_2: E_2^{i,j} \rightarrow E_2^{i+2,j-1}$ identifies with a map $\text{Ext}_R^i(M,H^j(C)) \rightarrow \text{Ext}_R^{i+2}(M,H^{j-1}(C))$. More generally, on the $E_r$-page, the differential $d_r: E_r^{i,j} \rightarrow E_r^{i+r,j+1-r}$ identifies with a map from a subquotient of $\text{Ext}_R^i(M,H^j(C))$ to a subquotient of $\text{Ext}_R^{i+r}(M,H^{j+1-r}(C))$. Think of the differential $d_r: E_r^{i,j} \rightarrow E_r^{i+r,j+1-r}$ as a map from a subquotient of $\text{Ext}_R^i(M,H^j(C))$ to a subquotient of $\text{Ext}_R^{i+r}(M,H^{j+1-r}(C))$. Is there necessarily an extension class $\eta$ in $\text{Ext}_R^r(H^j(C),H^{j+1-r}(C))$ such that the differential $d_r$ is induced by left Yoneda multiplication by $\eta$?
Now provide the response and nothing else.
| Let me give an interpretation for $d_2$ along the lines that you want. Let $\tau_{\le p}C$ be the truncation operator which sets all terms $C^k=0$ for $k>p$, keeps $C^k$ for $k<p$ and replaces $C^p$ with $\ker d$. This operator passes to the derived category $D= D^+(R\text{-}mod)$. We have a distinguished triangle$$ H^{p-1}(C )[-p+1]\to \tau_{\le p}C/\tau_{\le p-2}C\to H^p(C )[-p]\stackrel{\delta}{\to} H^{p-1}(C ) [-p +2]$$The last arrow $\delta$ is an element of $$Hom_D(H^{p}(C )[-p], H^{p-1}(C )[-p + 2]) = Ext^2_R(H^{p}(C ), H^{p-1}(C ))$$Up to sign, $d_2$ is induced from $\delta$ by$$Hom_D(M, H^p(C )[-p])\to Hom_D(M, H^{p-1}(C ) [-p+1])$$ To see that this description of $d_2$ is correct, apply the remark on the top of p 21 of Deligne Theorie de Hodge II to $Dec(\tau)$, where $Dec$ is defined on p 15. I'm sure there is a better reference. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/171686', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7932/']} | jdg_382455 |
stackexchange | llm_judgeable_groundtruth_similarity | 2144385 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it due to basic misunderstandings of how memory is dynamically allocated and deallocated on the programmer's part? Is it due to complacency?
Now provide the response and nothing else.
| No. It's due to the sheer amount of accounting it takes to keep track of every memory allocation. Who is responsible for allocating the memory? Who is responsible for freeing it? Ensuring that you use the same API to allocate and free the memory, etc... Ensuring you catch every possible program flow and clean up in every situation(for example, ensure you clean up after you catch an error or exception). The list goes on... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2144385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/205458/']} | jdg_382456 |
stackexchange | llm_judgeable_groundtruth_similarity | 269332 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for an exact solution for the following special case of Chini Equation with $2\geq a > 1 > b > 0, x, y \in \mathbb{R}^+$, $$\frac{dy}{dx} = 1 + \frac{a}{y} + \frac{b}{x}$$ I have tried to approach this using multiple methods and substitutions, but none has gotten me far. I know that Chini equation tends not to have closed form solution, but this looks so simple to not have a closed form solution! Could someone please suggest a way to try obtaining the solution?
Now provide the response and nothing else.
| One thing you can try is to select numerical values for $a$ and $b$ (say $a=b=1$ or $a=b^{-1}=2$) and then find a power series solution $y=1+\sum_{k>0}c_k(x-1)^k$ and inspect the coefficients $c_k$. With $a=b=1$ they are$$ 3, 2, 4, 11, 35, \frac{721}{6}, \frac{18163}{42}, \frac{540391}{336}, \frac{98091}{16}, \frac{26684211}{1120},\dotsc $$If there was a straightforward solution in terms of standard functions then you would expect that the coefficients would be arithmetically simple, with no large primes involved in $c_k$ when $k$ is small. (Large primes can appear in the numerator by accident, but not so easily in the denominator.) Also, the terms $k!c_k$ seem to be integers, so we can look them up in the Online Encyclopedia of Integer Sequences. In fact, the prime $286927$ appears in the numerator of $c_{10}$ and OEIS does not recognise the sequence $k!c_k$ so that is not promising for the existence of an elementary solution. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/269332', 'https://mathoverflow.net', 'https://mathoverflow.net/users/109419/']} | jdg_382457 |
stackexchange | llm_judgeable_groundtruth_similarity | 16344583 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to style a select element based on what option is selected with CSS only? I am aware of existing JavaScript solutions . I tried to style the option element itself, but this will give style only to the option element in the list of options, not to the selected element. select[name="qa_contact"] option[value="3"] { background: orange;} http://jsfiddle.net/Aprillion/xSbhQ/ If not possible with CSS 3, will CSS 4 subject selector help in the future - or will this stay a forbidden fruit to CSS? Update 2022 : using :has pseudo-class can help to style the select itself (in browsers that support both the :has pseudo-class and styling of <select> ), but it only works for HTML attributes, so only if the select has the option explicitly selected initially. It will not work dynamically without JS - after the user changes selection, that will only change DOM properties, but not the HTML attributes on which CSS attribute selectors depend: select:has(option[selected][value="3"]) { background: orange;}
Now provide the response and nothing else.
| Unfortunately, yes - this is something not currently possible with only CSS. As mentioned in the answers and comments to this question , there is currently no way to make the parent element receive styling based on its children . In order to do what you're wanting, you would essentially have to detect which of the children ( <option> ) is selected, and then style the parent accordingly. You could, however, accomplish this with a very simple jQuery call, as follows: HTML <select> <option value="foo">Foo!</option> <option value="bar">Bar!</option></select> jQuery var $select = $('select');$select.each(function() { $(this).addClass($(this).children(':selected').val());}).on('change', function(ev) { $(this).attr('class', '').addClass($(this).children(':selected').val());}); CSS select, option { background: #fff; }select.foo, option[value="foo"] { background: red; }select.bar, option[value="bar"] { background: green; } Here is a working jsFiddle . Back to the question about the future of selectors. Yes - the "Subject" selectors are intended to do exactly what you mention. If/when they ever actually go live in modern browsers, you could adapt the above code to: select { background: #fff; }!select > option[value="foo"]:checked { background: red; }!select > option[value="bar"]:checked { background: green; } As a side-note, there is still debate about whether the ! should go before or after the subject. This is based on the programming standard of !something meaning "not something". As a result, the subject-based CSS might actually wind up looking like this instead: select { background: #fff; }select! > option[value="foo"]:checked { background: red; }select! > option[value="bar"]:checked { background: green; } | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16344583', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1176601/']} | jdg_382458 |
stackexchange | llm_judgeable_groundtruth_similarity | 271026 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a ServiceCallContext object that must be passed in as the first parameter of any service call. I would like to put a User object on the context object but I know I can't force the caller to send back the User object they received from the service and I can't prevent them from filling that in with any information they like. So as I understand it, I would need to use a security token (via something like OAuth) to authenticate them and would always need to look up any User information I need on the service side after authentication rather than putting it on the context object. However, I know it's common to make a hash for a data file in order to verify that its contents haven't been tampered with. Would it be reasonable to do this for user information I put on a context object so that I don't have to make a call to the persistence layer for every service call? What are the risks of doing it that way?
Now provide the response and nothing else.
| Lets assume you are testing a class "interval", representing intervals of natural numbers: var interval= new Interval(1,100); Now your first two tests are "green": AssertIsTrue(interval.Contains(100))AssertIsFalse(interval.Contains(101)) and you are confident the author made no "off-by-one-error". Unfortunately, the implementation looks like this Contains(x){ return (x>= beginRange && x == endRange); // above line has a typo, "==" should have been be "<="} So you were better off to add another test at first hand, AssertIsTrue(interval.Contains(99)); which fails, revealing the bug missed by the first two tests. Or to be more general: off-by-one errors do not always manifest themselves by mixing up "<" and "<=", there are lots of other possibilities to get such things wrong, and as a tester, you are better off not to make any assumptions about how the internal implementation of a function looks like. | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/271026', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/108340/']} | jdg_382459 |
stackexchange | llm_judgeable_groundtruth_similarity | 160848 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Does the compiler optimize out any multiplications by 1? That is, consider: int a = 1;int b = 5 * a; Will the expression 5 * a be optimized into just 5? If not, will it if a is defined as: const int a = 1;
Now provide the response and nothing else.
| It will pre-calculate any constant expressions when it compiles, including string concatenation. Without the const it will be left alone. Your first example compiles to this IL: .maxstack 2.locals init ([0] int32, [1] int32)ldc.i4.1 //load 1stloc.0 //store in 1st local variableldc.i4.5 //load 5ldloc.0 //load 1st variablemul // 1 * 5stloc.1 // store in 2nd local variable The second example compiles to this: .maxstack 1.locals init ( [0] int32 )ldc.i4.5 //load 5 stloc.0 //store in local variable | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/160848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16942/']} | jdg_382460 |
stackexchange | llm_judgeable_groundtruth_similarity | 144402 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Some questions about the constant made me curious as to exactly how much the accepted value has changed over the years. I am pretty sure that I learned a rounded value of $6.023\times10^{23}$ about 1970 which would have been around the 50th edition. I know that sometime after that the accepted value rounded to 4 significant figures was $6.022\times10^{23}$ . Of course since 2019 the accepted value has been fixed at exactly $6.02214076\times10^{23}$ It would seem that the CRC handbook is a reasonable source for the accepted value of the constant since it has been updated almost yearly. Please update the table in the answer in the community wiki page with other values
Now provide the response and nothing else.
| Please edit the table to include missing data.

Avogadro constant in the "CRC Handbook of Chemistry and Physics"

| Edition | Year(s) | Page | Value | CODATA? |
| --- | --- | --- | --- | --- |
| 1 (Link) | 1913 | -- | Not Given | -- |
| 2 | 1914 | | | No |
| 3 (Link) | 1914 | 123 | 6.42(31) × 10^23 | No |
| 4 | 1915 | | 6.42(31) × 10^23 (note) | No |
| 5 | 1917 | | 6.42(31) × 10^23 (note) | No |
| 6 (Link) | 1917 | 286 | 6.42(31) × 10^23 | No |
| 7 (Link) | 1918 | 369 | 6.42(31) × 10^23 | No |
| 8 (Link) | 1920 | 523 | 6.06 × 10^23 | No |
| 9 (Link) | 1922 | 605 | 6.06 × 10^23 | No |
| 10 (Link) | 1924 | 678 | 6.06 × 10^23 | No |
| 11 (Link) | 1926 | 784 | 6.06 × 10^23 | No |
| 12 | 1927 | 884 | 6.06 × 10^23 | No |
| 13 | 1928 | 986 | 6.06 × 10^23 | No |
| 14 | 1929 | 1151 | 6.06 × 10^23 | No |
| 15 | 1930 | | | No |
| 16 | 1931 | 1244 | 6.064(6) × 10^23 | No |
| 17 | 1932 | | 6.064(6) × 10^23 (Note) | No |
| 18 | 1933 | 1488 | 6.064(6) × 10^23 | No |
| 19 | 1934 | 1890 | 6.064(6) × 10^23 | No |
| 20 | 1935 | | 6.064(6) × 10^23 (Note) | No |
| 21 | 1936 | 1980 | 6.064(6) × 10^23 | No |
| 22 | 1937 | 2026 | 6.064(6) × 10^23 | No |
| 23 | 1939 | 2177 | 6.064(6) × 10^23 | No |
| 24 | 1940 | 2519 | 6.064(6) × 10^23 | No |
| 25 | 1941 | 2454 | 6.064(6) × 10^23 | No |
| 26 | 1942 | 2403 | 6.0228(11) × 10^23 | No |
| 27 | 1943 | 2499 | 6.0228(11) × 10^23 | No |
| 28 | 1944 | 2517 | 6.0228(11) × 10^23 | No |
| 29 | 1945 | 2585 | 6.0228(11) × 10^23 | No |
| 30 (Link) | 1946 | 2631 | 6.0228(11) × 10^23 | No |
| 30 (Rev) | 1947 | 2631 | 6.0228(11) × 10^23 | No |
| 31 (Link) | 1949 | 2683 | 6.0228(11) × 10^23 | No |
| 32 | 1950 | 2811 | 6.0228(11) × 10^23 | No |
| 33 | 1951 | 2833 | 6.0228(11) × 10^23 | No |
| 34 | 1952 | | | No |
| 35 | 1953 | 3092 | 6.0228(11) × 10^23 | No |
| 36 | 1954 | 3104 | 6.025 44(11) × 10^23 | No |
| 37 | 1955 | 3080 | 6.0228(11) × 10^23 | No |
| 38 | 1956 | 3136 | 6.0247(2) × 10^23 | No |
| 39 (Link) | 1957 | 3134 | 6.0228(11) × 10^23 | No |
| 40 | 1958 | 3376 | 6.022 8(11) × 10^23 | No |
| 41 | 1959 | 3301 | 6.022 8(11) × 10^23 | No |
| 42 | 1960 | 3399 | 6.022 8(11) × 10^23 | No |
| 43 | 1961 | | | No |
| 44 | 1962 | | | No |
| 45 | 1964 | F-95 | 6.022 57(9) × 10^23 | No |
| 46 | 1965 | | | No |
| 47 | 1966-1967 | F-138 | 6.022 52(28) × 10^23 | No |
| 48 | 1967-1968 | | | No |
| 49 | 1968-1969 | F-167 | 6.022 52(28) × 10^23 | No |
| 50 | 1969-1970 | F-168 | 6.022 52(28) × 10^23 | No |
| 51 | 1970-1971 | | | |
| 52 (Link) | 1971-1972 | F-189 | 6.022 169(40) × 10^23 | No |
| 53 | 1972-1973 | F195 | 6.022 169(40) × 10^23 | |
| 54 | 1973-1974 | | | |
| 55 (Link) | 1974-1975 | F-222 | 6.022 169(40) × 10^23 | No |
| 56 | 1975-1976 | | 6.022 045(31) × 10^23 | |
| 57 (Link) | 1976-1977 | F-242 | 6.022 045(31) × 10^23 | Yes, bulletin #11 |
| 58 | 1977-1978 | F-242 | 6.022 045(31) × 10^23 | Yes, bulletin #11 |
| 59 | 1978-1979 | F-250 | 6.022 045(31) × 10^23 | Yes, bulletin #11 |
| 60 | 1979-1980 | F-244 | 6.022 045(31) × 10^23 | Yes, bulletin #11 |
| 61 | 1980-1981 | | | |
| 62 | 1981-1982 | | | |
| 63 | 1982-1983 | | | |
| 64 (Link) | 1983-1984 | F-198 | 6.022 045(31) × 10^23 | Yes |
| 65 (Link) | 1984-1985 | F-192 | 6.022 045(31) × 10^23 | Yes |
| 66 | 1985-1986 | | | |
| 67 | 1986-1987 | | | |
| 68 | 1987-1988 | | | |
| 69 | 1988-1989 | | | |
| 70 | 1989-1990 | F-215 & F-217 | 6.022 1367(36) × 10^23 | Yes, 1986 |
| 71 (Link) | 1990-1991 | 1-1 | 6.022 1367(36) × 10^23 | Yes, 1986 |
| 72 (Link) | 1991-1992 | 1-1 | 6.022 1367(36) × 10^23 | Yes, 1986 |
| 73 | 1992-1993 | | | |
| 74 | 1993-1994 | | | |
| 75 | 1994-1995 | | | |
| 76 | 1995-1996 | | | |
| 77 | 1996-1997 | | | |
| 78 | 1997-1998 | | | |
| 79 | 1998-1999 | | | |
| 80 | 1999-2000 | | | |
| 81 (Link) | 2000-2001 | 1-7 | 6.022 141 99(47) × 10^23 | Yes, 1998 |
| 82 | 2001-2002 | 1-7 | 6.022 141 99(47) × 10^23 | Yes, 1998 |
| 83 | 2002-2003 | | 6.022 141 99(47) × 10^23 (Note) | (Yes, 1998) |
| 84 | 2003-2004 | 1-7 | 6.022 141 99(47) × 10^23 | Yes, 1998 |
| 85 | 2004-2005 | 1-2 | 6.022 141 5(10) × 10^23 | Yes, 2002 |
| 86 | 2005-2006 | 1-5 | 6.022 141 5(10) × 10^23 | Yes, 2002 |
| 87 | 2006-2007 | 1-5 | 6.022 141 5(10) × 10^23 | Yes, 2002 |
| 88 | 2007-2008 | 1-1 | 6.022 141 5(10) × 10^23 | Yes, 2002 |
| 89 (Link) | 2008-2009 | 1-1 | 6.022 141 79(30) × 10^23 | Yes, 2006 |
| 90 | 2009-2010 | 1-1 | 6.022 141 79(30) × 10^23 | Yes, 2006 |
| 91 | 2010-2011 | | 6.022 141 79(30) × 10^23 (Note) | (Yes, 2006) |
| 92 | 2011-2012 | 1-1 | 6.022 141 79(30) × 10^23 | Yes, 2006 |
| 93 | 2012-2013 | 1-2 | 6.022 141 29(27) × 10^23 | Yes, 2010 |
| 94 | 2013-2014 | 1-2 | 6.022 141 29(27) × 10^23 | Yes, 2010 |
| 95 | 2014-2015 | 1-2 | 6.022 141 29(27) × 10^23 | Yes, 2010 |
| 96 | 2015-2016 | 1-2 | 6.022 141 29(27) × 10^23 | Yes, 2010 |
| 97 (Link) | 2016-2017 | 1-1 | 6.022 140 857(74) × 10^23 | Yes, 2014 |
| 98 | 2017-2018 | | 6.022 140 857(74) × 10^23 (Note) | (Yes, 2014) |
| 99 | 2019-2020 | | 6.022 140 857(74) × 10^23 (Note) | (Yes, 2014) |
| 100 | 2020-2021 | | 6.022 140 857(74) × 10^23 (Note) | (Yes, 2014) |
| >101? | 2021-2022 ? | | 6.022 140 76 × 10^23 (exact) | Yes, 2018 |

NOTES FOR TABLE

Early editions are actually titled "Handbook of Chemistry and Physics"; I am not sure when the title formally switched to the "CRC Handbook of Chemistry and Physics". Some early editions are available online at HathiTrust. The "Limited search" editions can only be searched for a hit; there is no page number or preview for the found text. The first edition that I found to hit on the phrase "Avogadro number" was the 16th edition. It would seem to use that phrase until at least the 31st edition; by the 84th edition the phrase had changed to "Avogadro constant". A CODATA newsletter had a list of publications which listed: No. 11 (Dec 1973), 8 pp, Recommended Consistent Values of the Fundamental Physical Constants, 1973 (Report of the CODATA Task Group on Fundamental Constants, August 1973).

- 1st Edition: The link is to the facsimile edition.
- 3rd edition (c)1914: This version is online at HathiTrust. An old edit of the Wikipedia article CRC Handbook of Chemistry and Physics indicates the third edition has 322 pages.
- 3rd thru 12th Editions: In at least the 3rd through 12th editions of the CRC Handbook there is a "Heat" category in the Table of Contents with an entry for "Number of molecules in a molecule-gram", which points to a table with three values. Note also that none of these three values are in Becker's paper. (Only seen in the 1st, 3rd, and 6th through 12th editions.) My calculation is that the mean = $6.42(31)\times 10^{23}$. Millikan 1910 would seem to be from rounding the $6.18\times10^{23}$ value in Phil. Mag., Feb. 1910, 209.
- 4th and 5th editions: Seems likely that these editions are the same as the 3rd and 6th.
- 8th through ? edition: In at least the 8th through 12th editions of the CRC Handbook the Table of Contents contains a category "Miscellaneous Tables" which has a link to "Atomic and Molecular Constants", which references the "Smithsonian Tables" and gives a value of $6.06\times10^{23}$ for the "Number of molecules in a gram molecule". The reference to the Smithsonian Physical Tables doesn't give any more indication as to which tables those are. This seems to be the 2nd reprint of the 6th rev. ed., prepared by Frederick E. Fowle in 1916.
- 13th through 30th(?) Editions: There is another version of the table "Number of Molecules in a Gram-Molecule" which has four values. In the 13th, 14th, 16th, 18th, 19th, 23rd, 25th, 26th and 27th editions the table is listed in the table of contents in the Heat section. In the 29th, 30th and 30th (rev) editions the table is listed in the table of contents in the Miscellaneous section. The table is gone in the 31st edition (and presumably in any subsequent editions).
- 16th, 18th, 19th, 21st, 23rd, 24th, 25th and 27th Editions: These editions have identical entries, listing it as "Avogadro's number" in a table named "Probable Values of the General Physical Constants", which is in the "Miscellaneous Tables" section of the Table of Contents. The tables cite Raymond T. Birge: The Physical Review Supplement, Vol. 1, No. 1, July 1929, as the source for the entire "Probable Values of the General Physical Constants" table. Note however that the value given here doesn't match the one attributed to R. T. Birge in 1929 in Becker, P., "History and Progress in the Accurate Determination of the Avogadro Constants".
- 17th, 19th, 20th, and 22nd Editions: These editions are bookended by editions which have the same value, so they are assumed to have this value too.
- 28th, 29th, 30th, 30th (rev), 31st, 32nd and 35th Editions: The 30th revised edition has "Revised" above "Thirtieth Edition" on the title page. Listed as "Avogadro number (chemical scale)" in "New Table of Values of the General Physical Constants". The table is listed in the "Miscellaneous" section of the Table of Contents. Cites Raymond T. Birge: The Physical Review Supplement, Vol. 13, 233, 1941 as the source for the entire table. (This is the value listed for R. T. Birge in 1941 in Becker, P., "History and Progress in the Accurate Determination of the Avogadro Constants". With the quoted error the extra 3 is unnecessary.) The reference is better known as: "A New Table of Values of the General Physical Constants (as of August, 1941)", Raymond T. Birge, Rev. Mod. Phys. 13, 233 (1941), DOI: 10.1103/RevModPhys.13.233.
- 36th edition: Value found in a table of "Atomic Constants" which references James W. M. DuMond and E. Richard Cohen, Physical Review 82:555, 1951.
- 38th edition: Found in a table of "Atomic Constants" which references "A Survey of Atomic Constants (1955) by J. A. Bearden and John H. Thomsen, The Johns Hopkins University".
- 40th, 41st and 42nd Editions: This cites Raymond T. Birge: Reviews of Modern Physics 13, 233, 1941. This matches the value attributed to Birge in the same year by Becker. (Because of the error the extra 3 is superfluous, so I left it off.)
- 45th Edition: "Avogadro's Number" in the table "Values of the General Physical Constants", which simply states "Furnished by R. D. Huntoon."
- 50th Edition: Cites the source for data in the table named "Physical Constants" as "National Bureau of Standards Technical News Bulletin, October 1963."
- 52nd, 53rd and 55th editions: References Rev. Mod. Phys. 41, 375 (1969).
- 56th edition: From user theorist's comment on this post.
- 57th, 58th, 59th, and 60th Editions: Avogadro Constant in the table "Recommended Consistent Values of the Fundamental Physical Constants". Table cites: Data from CODATA Bulletin No. 11, ICSU CODATA Central Office, CODATA Secretariat: 19 Westendstrasse, 6 Frankfurt/Main, German Federal Republic (copies of this bulletin are available at no cost from this office).
- 64th and 65th editions: Avogadro Constant in the table titled "Recommended Consistent Values of the Fundamental Physical Constants". Table cites: Data from CODATA Bulletin No. 11, ICSU CODATA Central Office, CODATA Secretariat: 51 Boulevard de Montmorency, 75016 Paris, France (copies of this bulletin are available at no cost from this office).
- 70th through 72nd Editions: Cites 1986 CODATA values as published in E. Richard Cohen and Barry N. Taylor, Journal of Research of the National Bureau of Standards, 92, 85, 1987.
- 83rd edition: Based on the pattern of CODATA values being used for four years.
- 84th edition: Calculated page number from the on-line pdf file.
- 85th edition: Internet edition.
- 85th edition: Got value via "Look Inside" at Amazon.
- 90th edition: Based on the 2006 CODATA recommended values.
- 91st edition: Surely the same as the 90th and 92nd data; pattern of using CODATA for four years.
- 92nd edition: From print copy (MaxW).
- 98th through 100th Editions: Based on the pattern of CODATA values being used for four years.
| {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/144402', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/22102/']} | jdg_382461 |
stackexchange | llm_judgeable_groundtruth_similarity | 86297 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The second peak of the density plot is large in this example. Why does the rug representation of the data -- which seems to show few high values -- not appear to match the much higher density estimated there? How can one make rug plots less misleading? Here's its R code:

plot(density(Boston$tax))
rug(Boston$tax, col=2, lwd=3.5)
Now provide the response and nothing else.
| From the R package MASS, of the $506$ total observations in Boston, $369$ have a value for tax below 470 and $137$ have a value for tax above 665. In fact 666 is by far the most common value in the data set, appearing $132$ times. So if the area of the density plot to the left is about twice the area to the right, then that could reasonably be taken as representing the distribution. Visual inspection suggests this might be what is happening. A more accurate representation would have the right peak much higher and narrower, and this could be achieved by adjusting the parameters. Added for comments: For example, with a much narrower bandwidth for the density function and some manual jitter:

library(MASS)
plot(density(Boston$tax, bw=5))
rug(Boston$tax + rnorm(length(Boston$tax), sd=5), col=2, lwd=3.5)

you would get something like this
| {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/86297', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/40077/']} | jdg_382462
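As background for why the bandwidth matters in the answer above (an added note, not part of the original answer): R's density() computes the kernel density estimate
$$\hat f_h(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$
so the $132$ observations piled up at a single tax value get smeared over a window of width on the order of the bandwidth $h$. With the default bandwidth that mass is spread wide and low; shrinking $h$ (e.g. bw=5 in the snippet above) concentrates it into the tall, narrow right-hand spike that the rug actually suggests.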
stackexchange | llm_judgeable_groundtruth_similarity | 3829137 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a DataGrid . It has DataGrid.RowDetailsTemplate . When a button is clicked it should Expand / Collapse; how would I do that? <Custom:DataGrid RowDetailsVisibilityMode="VisibleWhenSelected" SelectionMode="Extended" CanUserAddRows="False" CanUserDeleteRows="False" CanUserResizeRows="False" CanUserSortColumns="False"> <Custom:DataGrid.RowDetailsTemplate> <DataTemplate> <Custom:DataGrid> <Custom:DataGrid.Columns> <Custom:DataGridTextColumn Binding="{Binding idClient, Mode=Default}" Header="Ид" IsReadOnly="True"/> <Custom:DataGridTextColumn Binding="{Binding name_client, Mode=Default}" Header="Имя" IsReadOnly="True"/> </Custom:DataGrid.Columns> </Custom:DataGrid> </DataTemplate> </Custom:DataGrid.RowDetailsTemplate> <Custom:DataGrid.Columns> <Custom:DataGridTemplateColumn> <Custom:DataGridTemplateColumn.CellTemplate> <DataTemplate> <Expander IsExpanded="True"/> </DataTemplate> </Custom:DataGridTemplateColumn.CellTemplate> </Custom:DataGridTemplateColumn> <Custom:DataGridTextColumn Binding="{Binding idPartner, Mode=Default}" Header="Ид" IsReadOnly="True"/> <Custom:DataGridTextColumn Binding="{Binding name_partner, Mode=Default}" Header="Имя" IsReadOnly="True"/> </Custom:DataGrid.Columns></Custom:DataGrid>
Now provide the response and nothing else.
| Check this ... Adding a Button to a WPF DataGrid OR XAML : <DataGrid Name="dg1" AutoGenerateColumns="False" SelectionMode="Single" CanUserAddRows="false" CanUserDeleteRows="False" SelectionUnit="FullRow" > <DataGrid.CellStyle> <Style TargetType="DataGridCell"> <Setter Property="BorderThickness" Value="0"/> </Style> </DataGrid.CellStyle> <DataGrid.RowHeaderTemplate> <DataTemplate> <Expander Expanded="Expander_Expanded" Collapsed="Expander_Collapsed"> </Expander> </DataTemplate> </DataGrid.RowHeaderTemplate> <DataGrid.Columns> <DataGridTextColumn Header="Name" IsReadOnly="True" Width="100" Binding="{Binding Name}" /> <DataGridTextColumn Header="Title" IsReadOnly="True" Width="100" Binding="{Binding Title}" /> <DataGridTextColumn Header="Job" IsReadOnly="True" Width="100" Binding="{Binding Job}" /> </DataGrid.Columns> <DataGrid.RowDetailsTemplate> <DataTemplate> <TextBlock Text="{Binding Isi, Converter={StaticResource ResourceKey=isiTextConverter}}" Margin="10,5,0,0" /> </DataTemplate> </DataGrid.RowDetailsTemplate></DataGrid> MainWindow.xaml.cs private void Expander_Expanded(object sender, RoutedEventArgs e){ for (var vis = sender as Visual; vis != null; vis = VisualTreeHelper.GetParent(vis) as Visual) if (vis is DataGridRow) { var row = (DataGridRow)vis; row.DetailsVisibility = row.DetailsVisibility == Visibility.Visible ? Visibility.Collapsed : Visibility.Visible; break; }}private void Expander_Collapsed(object sender, RoutedEventArgs e){ for (var vis = sender as Visual; vis != null; vis = VisualTreeHelper.GetParent(vis) as Visual) if (vis is DataGridRow) { var row = (DataGridRow)vis; row.DetailsVisibility = row.DetailsVisibility == Visibility.Visible ? Visibility.Collapsed : Visibility.Visible; break; }} Output | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3829137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/450466/']} | jdg_382463 |
stackexchange | llm_judgeable_groundtruth_similarity | 15092898 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a beginner to Spring, especially Inversion of Control. I was puzzled by the difference between the following:

<bean id="demo" class="Demo" lazy-init="false"/>
<bean id="demo" class="Demo" lazy-init="true"/>
<bean id="demo" class="Demo" lazy-init="default"/>

To my understanding: lazy-init=false creates the bean at startup, and lazy-init=true doesn't create the bean at startup but rather creates it upon the first request for that particular bean. Correct me here if my interpretation is wrong. What exactly is the default behavior of lazy-init? How would it instantiate?
Now provide the response and nothing else.
| The default behaviour is false: By default, ApplicationContext implementations eagerly create and configure all singleton beans as part of the initialization process. Generally, this pre-instantiation is desirable, because errors in the configuration or surrounding environment are discovered immediately, as opposed to hours or even days later. When this behavior is not desirable, you can prevent pre-instantiation of a singleton bean by marking the bean definition as lazy-initialized. A lazy-initialized bean tells the IoC container to create a bean instance when it is first requested, rather than at startup. I suggest reading up | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15092898', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/792580/']} | jdg_382464 |
stackexchange | llm_judgeable_groundtruth_similarity | 4201455 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What the difference is between flush() and commit() in SQLAlchemy? I've read the docs, but am none the wiser - they seem to assume a pre-understanding that I don't have. I'm particularly interested in their impact on memory usage. I'm loading some data into a database from a series of files (around 5 million rows in total) and my session is occasionally falling over - it's a large database and a machine with not much memory. I'm wondering if I'm using too many commit() and not enough flush() calls - but without really understanding what the difference is, it's hard to tell!
Now provide the response and nothing else.
| A Session object is basically an ongoing transaction of changes to a database (update, insert, delete). These operations aren't persisted to the database until they are committed (if your program aborts for some reason in mid-session transaction, any uncommitted changes within are lost). The session object registers transaction operations with session.add() , but doesn't yet communicate them to the database until session.flush() is called. session.flush() communicates a series of operations to the database (insert, update, delete). The database maintains them as pending operations in a transaction. The changes aren't persisted permanently to disk, or visible to other transactions until the database receives a COMMIT for the current transaction (which is what session.commit() does). session.commit() commits (persists) those changes to the database. flush() is always called as part of a call to commit() (1). When you use a Session object to query the database, the query will return results both from the database and from the flushed parts of the uncommitted transaction it holds. By default, Session objects autoflush their operations, but this can be disabled. Hopefully this example will make this clearer:

#---
s = Session()

s.add(Foo('A'))  # The Foo('A') object has been added to the session.
                 # It has not been committed to the database yet,
                 # but is returned as part of a query.
print 1, s.query(Foo).all()
s.commit()

#---
s2 = Session()
s2.autoflush = False

s2.add(Foo('B'))
print 2, s2.query(Foo).all()  # The Foo('B') object is *not* returned
                              # as part of this query because it hasn't
                              # been flushed yet.

s2.flush()                    # Now, Foo('B') is in the same state as
                              # Foo('A') was above.
print 3, s2.query(Foo).all()

s2.rollback()                 # Foo('B') has not been committed, and rolling
                              # back the session's transaction removes it
                              # from the session.
print 4, s2.query(Foo).all()

#---Output:
1 [<Foo('A')>]
2 [<Foo('A')>]
3 [<Foo('A')>, <Foo('B')>]
4 [<Foo('A')>]
| {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/4201455', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/267831/']} | jdg_382465
stackexchange | llm_judgeable_groundtruth_similarity | 15529976 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to add single quotes to a string but don't see how to do it. For instance, I would like to replace ABC by 'ABC'. I have played with paste, cat, and print without success. Any solution? Thanks, Vincent
Now provide the response and nothing else.
| Maybe use sQuote ?

sQuote("ABC")
# [1] "'ABC'"

This (like its sibling dQuote ) is frequently used to put quotes around some message or other text that's being printed to the console:

cat("ABC", "\n")
# ABC
cat(sQuote("ABC"), "\n")
# 'ABC'

Do note (as is documented in ?sQuote ) that, depending on the type of quotes needed for your task, you may need to first reset options("useFancyQuotes") . To ensure that the function decorates your text with simple upright ASCII quotes, for example, do the following:

options(useFancyQuotes = FALSE)
sQuote("ABC")
# [1] "'ABC'"
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15529976', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1462331/']} | jdg_382466
stackexchange | llm_judgeable_groundtruth_similarity | 151385 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Disclaimer: Though I have been re-reading my notes, and have scanned the relevant texts, my commutative algebra is quite rusty, so I may be overlooking something basic. I want to show $\mathbb{Q} \simeq \mathbb{Q}[x,y]/\langle x,y \rangle$ is not projective as a $\mathbb{Q}[x,y]$ module. I've tried two methods, neither of which gets me to the conclusion. I first tried what seems to be sort of standard when proving that something is not projective: show that the lifting of the identity yields a contradiction. So I let $\pi: \mathbb{Q}[x,y] \to \mathbb{Q}[x,y]/\langle x,y \rangle$ be my surjection given by $f \mapsto \bar{f}$ and the identity map is $id: \mathbb{Q}[x,y]/\langle x,y \rangle \to \mathbb{Q}[x,y]/\langle x,y \rangle$. So all I need to show is that a homomorphism $\phi: \mathbb{Q}[x,y]/\langle x,y \rangle \to \mathbb{Q}[x,y]$ such that $\pi \circ \phi =id$ does not exist. But if $$\pi(f) = \bar{f} = \overline{a_0+a_{10}x+a_{01}y+a_{11}xy+\cdots+a_{n0}x^n + a_{0n}y^n} = \bar{a_0}$$ then doesn't the map $\bar{a_0} \mapsto a_0$ work? After all, $$ (\pi\circ \phi)(\bar{a_0}) = \pi(a_0) = \bar{a_0} = id(\bar{a_0}).$$I was concerned at first about this not being well defined, but since every element of a particular coset has the same constant term, it does not depend on choice. So either I have already made a mistake, or this is just the wrong map from which to derive a contradiction. The next thing I tried used a different characterization of projective modules: that $P$ is a projective $R$-module iff there is a free module $F$ and an $R$-module $K$ such that $F \simeq K\oplus P$. In our case, this means there is a free module $F$ and a $\mathbb{Q}[x,y]$-module $K$ such that $$\mathbb{Q}[x,y] \oplus \cdots \oplus \mathbb{Q}[x,y] \simeq F \simeq K \oplus \mathbb{Q}[x,y]/\langle x,y \rangle \simeq K \oplus \mathbb{Q}.$$From here, my concern is that I am waving my hand too much when I say: obviously this cannot be true, since every element of the LHS, which is a tuple of polynomials, cannot be broken up with one chunk in $K$ and the other in $\mathbb{Q}$. Do agree? If so, how can I make this argument more rigorous? One more trouble: nowhere in either of these methods did I explicitly use that the polynomial ring here is only in two variables. The fact that the question did not use $\mathbb{Q}[x_1,\ldots,x_n]$ instead of $\mathbb{Q}[x,y]$ worries me.
Now provide the response and nothing else.
| As Steve D said, you can use the fact that projective modules are always flat. Consider the map $\mathbb{Q}[x,y]\to \mathbb{Q}[x,y]$ given by multiplication by $x$. This is an injective $\mathbb{Q}[x,y]$-module map, so if $\mathbb{Q}$ were flat, tensoring with $\mathbb{Q}$ would again give an injective map; but it does not (it gives the zero map). So $\mathbb{Q}$ is not flat as a $\mathbb{Q}[x,y]$-module, hence not projective. However, following your own methods: The map defined in your first method is not a $\mathbb{Q}[x,y]$-module homomorphism, since if $\bar{1}$ is sent to $1$, then $\bar{x}=\bar{0}$ must be sent to $x\cdot 1=x$, forcing $x=0$ in $\mathbb{Q}[x,y]$, a contradiction. In your second method, you can always find the element $(0,1)$ on the RHS; this element is a torsion element, since $x\cdot(0,1)=0$. But there is no element of the LHS corresponding to it, since no nonzero element of the LHS is torsion. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/151385', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17736/']} | jdg_382467
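To spell out the flatness computation used in the answer above (added for completeness; this is the standard identification): tensoring the injection $\cdot x\colon \mathbb{Q}[x,y]\hookrightarrow\mathbb{Q}[x,y]$ with $\mathbb{Q}\cong\mathbb{Q}[x,y]/\langle x,y\rangle$, and using $\mathbb{Q}[x,y]\otimes_{\mathbb{Q}[x,y]}\mathbb{Q}\cong\mathbb{Q}$, turns it into the map
$$\mathbb{Q}\longrightarrow\mathbb{Q},\qquad q\longmapsto x\cdot q = 0,$$
because $x$ acts as $0$ on the quotient $\mathbb{Q}[x,y]/\langle x,y\rangle$. The zero map is not injective, so $\mathbb{Q}$ is not flat, and therefore not projective. Note that the same argument works verbatim for $\mathbb{Q}[x_1,\dots,x_n]$ with any $n\ge 1$, which addresses the worry in the question that nothing specific to two variables was used.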
stackexchange | llm_judgeable_groundtruth_similarity | 5189 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Do you think salting a password with the first 8 bits of the password itself will have the same result as using a stored salt to hash the password? [Clarification] I am not storing the salt, but deriving it from the password itself during authentication. Follow-up question: instead of using the first 8 bits, how about using the first (length of the password mod (prime number)) bits as the salt? Wouldn't this be more secure than normal password hashing?
Now provide the response and nothing else.
| This scheme looks clever, but it isn't. Salts are usually stored in cleartext. And they serve one purpose - to restrict the usability of precomputed hash tables. There is also the side-effect that no two hashes computed for two different user accounts with the same password will be the same. By storing a portion of the password as a salt, you are decreasing the security posture of the application, and also complicating matters. The below points assume storage of the salt:

I would consider that the entropy associated with the password is reduced by a minimum of the entropy associated with the 8 bits; in the Shannon model of calculating entropy of user-selected passwords from a 94-character alphabet, this is around 4 bits of entropy. The reduction in entropy is due to the model in use - most passwords are not a sequence of random bits, especially if they are chosen from natural languages. Concatenating n bits from the password to itself to compute a hash will quite obviously make the contents of the password even less random, thus reducing entropy. Using the first 8 bits of the password as the salt is in itself a poor decision, compared to using the last 8 bits, not that the latter is a good decision. It so happens that under the Shannon model for computing entropy of a password, the entropy per bit decreases as the length of the password increases. If you are using the first n bits as the salt, then you are weakening the password further, as the Shannon model works on the premise of "if the first n bits are known, then how well can anyone guess the following bits". Now that you've stored the password fragment in cleartext, you will need to encrypt it. That sounds easy, except that keys have to be associated with a management lifecycle where you need to issue them, protect them, and discard them after a particular period; getting this right is certainly not as easy as using a secure PRNG to generate the salt and storing this random sequence of bits in the clear.

Cryptographic properties of a salt

This is based on Paul's answer and the comments interspersed on this page. This section does not assume storage of the salt, and is rather a note on how salts must be chosen. Any data used as a salt for a password must satisfy certain cryptographic properties. The most important one is that a salt must be unique across the passwords stored in the system. If the salt is not unique, then one loses the ability to create unique hashes for every password managed by the system, leading to the possibility that a dump of hashes might reveal passwords that are used across accounts or passwords present in dictionaries. It cannot be guaranteed that the scheme in question has this property. In fact it is easy to see that the property is bound to be absent in a system with more than 2^8 = 256 users, rendering the scheme quite useless. Also, if one is under the impression that this scheme can reasonably protect against pre-computed hashes, then that assumption is moot. It assumes that an attacker does not have the resources at his disposal to compute hashes to compromise all the accounts in a reasonable timeframe. Using a salt with the cryptographic property of being unique would have certainly slowed down the attacker by a large factor, as a computed hash would be valid only for one salt and not for the others (i.e. hashes for any password in a dictionary must be recomputed for every user).
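For contrast with the scheme under discussion, here is a minimal sketch (my own illustration, not from the original answer) of the conventional approach just described: a unique, randomly generated salt per user, stored alongside the hash, using Python's standard secrets and hashlib modules. The iteration count is an assumption; tune it for your hardware.

```python
import hashlib
import secrets

ITERATIONS = 600_000  # assumed work factor; adjust so hashing takes a noticeable fraction of a second

def hash_password(password):
    """Return (salt, digest); the salt is unique per call, i.e. per user."""
    salt = secrets.token_bytes(16)  # 16 random bytes from a cryptographically secure PRNG
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return secrets.compare_digest(candidate, digest)
```

Because every user gets a different salt, an attacker has to redo the key-derivation work for each user and each dictionary word, which is exactly the per-user recomputation property described above.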
If the current scheme were to be employed, an attacker can restrict himself to computing only 1 hash per password in a dictionary, making it no different from not using a salt, thus making it pointless to have a salt in the first place. This leads to another property of a cryptographic salt: a salt must enhance the resistance of the system against brute-force attacks and the like. By using a salt derived from a password, one loses this property, for a hash of every password in a dictionary can be computed in more or less the same amount of time, with or without the salt. It is to this effect that most systems are engineered to generate a salt of sufficient length using a secure PRNG. One of the comments stated that using a "service name" + "user name" combination is enough. I would say that it is a good thing to start off with. The service name typically acts as a pepper (especially when it is not stored), so that hashes themselves cannot be copied and used across services using the same hashing scheme. The username is decent enough to use as a salt, except in scenarios where the username is public (the root account in *nix, or Administrator in Windows, for example). If your usernames will be public and consistent across several deployments, then using n bits of randomness ought to be the way to go. This is an extension of the second property, in that any choice of a salt must consider resistance to brute-force attacks across all deployments. The follow-up question asks: "Instead of using the first 8 bits, how about the first (length of the password mod (prime number)) bits as a salt? Would not this be more secure than normal password hashing?" No, the salt is still not guaranteed to be unique across user accounts. (See the first cryptographic property.) | {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/5189', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/68/']} | jdg_382468
stackexchange | llm_judgeable_groundtruth_similarity | 153913 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an alloy containing different metals such as Al, Cu, Ni (and Zn, Pb, Sn, Ag, Si in small amounts, < 5%). How can I separate Cu and Ni from this alloy? I tried to dissolve the alloy in sulfuric acid, but it didn't work. Then I tried to dissolve it in a mixture of nitric and sulfuric acids and managed to dissolve 400 g out of 1000 g. I noticed a thin black layer on the alloy piece, which I suppose could be nickel oxide and which may be an obstruction to dissolving the alloy. How can I prevent the formation of this oxide layer?
Now provide the response and nothing else.
| First, the alloy $(0.5$ to $\pu{2 g})$ should be treated with $\pu{10 mL}$ of $32\,\%$ nitric acid. All metals will be dissolved except tin and silicon, which will be transformed into the insoluble dioxides $\ce{SnO2}$ and $\ce{SiO2}$. Dilute in $\pu{100 mL}$ hot water. $\ce{SiO2 + SnO2}$ will make a gelatinous precipitate, which can be eliminated by filtration. Add $\pu{0.2 g}~\ce{Na2SO4}$ and $\pu{0.2 g}~\ce{NaCl}$ in order to remove lead and silver: lead sulfate $\ce{PbSO4}$ and silver chloride $\ce{AgCl}$ will precipitate, and are eliminated by filtration. The remaining solution contains aluminum, copper and nickel. If you want to separate copper, you may add $\pu{2 g}$ potassium iodide $\ce{KI}$, which will form a precipitate of copper(I) iodide $\ce{CuI}$ according to the reaction $$\ce{2 Cu^2+ + 5 I^- -> 2 CuI(s) + I3^-}\tag{1}$$ So copper may be separated as a white precipitate ( $\ce{CuI}$ ). This reaction is specific. In the final solution, nickel can be separated with dimethylglyoxime. Add $\pu{10 mL}$ of a dimethylglyoxime $(\ce{C4H8N2O2})$ solution $(1\,\%$ in ethanol). Slowly add some $\pu{2 M}$ ammonia until the $\mathrm{pH}$ is just greater than $7$. An intense red precipitate of nickel dimethylglyoximate $\ce{Ni(C4H7N2O2)2}$ appears and can be filtered off: $$\ce{2 C4H8N2O2 + Ni^2+ -> Ni(C4H7N2O2)2 + 2 H^+}\tag{2}$$ This reaction is specific, and the other metals do not interfere. | {} | {'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/153913', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/113021/']} | jdg_382469
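For completeness (an added note; these equations are not written out in the original answer), the lead and silver removal steps described above correspond to the standard net ionic precipitation reactions
$$\ce{Pb^2+ + SO4^2- -> PbSO4 v}$$
$$\ce{Ag+ + Cl- -> AgCl v}$$
Both salts are only sparingly soluble, while the remaining $\ce{Al^3+}$, $\ce{Cu^2+}$ and $\ce{Ni^2+}$ stay in solution, which is why a small addition of sulfate and chloride cleanly pulls lead and silver out of the mixture.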
stackexchange | llm_judgeable_groundtruth_similarity | 38621652 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hello and thanks in advance for any help. I have a simple dataframe with two columns. I did not set an index explicitly, but I believe a dataframe gets an integer index that I see along the left side of the output. Question below:

df = pandas.DataFrame(res)
df.columns = ['date', 'pb']
df['date'] = pandas.to_datetime(df['date'])
df.dtypes

date    datetime64[ns]
pb      float64
dtype: object

        date            pb
0       2016-04-01      24199.933333
1       2016-03-01      23860.870968
2       2016-02-01      23862.275862
3       2016-01-01      25049.193548
4       2015-12-01      24882.419355
5       2015-11-01      24577.000000

date    datetime64[ns]
pb      float64
dtype: object

I would like to pivot the dataframe so that I have years across the top (columns): 2016, 2015, etc. and a row for each month: 1 - 12.
Now provide the response and nothing else.
| Using the .dt accessor you can create columns for year and month and then pivot on those:

df['Year'] = df['date'].dt.year
df['Month'] = df['date'].dt.month
pd.pivot_table(df, index='Month', columns='Year', values='pb', aggfunc=np.sum)

Alternately, if you don't want those other columns you can do:

pd.pivot_table(df, index=df['date'].dt.month, columns=df['date'].dt.year,
               values='pb', aggfunc=np.sum)

With my dummy dataset that produces:

Year      2013      2014      2015      2016
date
1      92924.0  102072.0  134660.0  132464.0
2      79935.0   82145.0  118234.0  147523.0
3      86878.0   94959.0  130520.0  138325.0
4      80267.0   89394.0  120739.0  129002.0
5      79283.0   91205.0  118904.0  125878.0
6      77828.0   89884.0  112488.0  121953.0
7      78839.0   94407.0  113124.0       NaN
8      79885.0   97513.0  116771.0       NaN
9      79455.0   99555.0  114833.0       NaN
10     77616.0   98764.0  115872.0       NaN
11     75043.0   95756.0  107123.0       NaN
12     81996.0  102637.0  114952.0       NaN
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38621652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3997814/']} | jdg_382470
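To make the example above reproducible end-to-end, here is a small self-contained sketch (with made-up data, since the answer's dummy dataset isn't shown):

```python
import numpy as np
import pandas as pd

# Hypothetical monthly data spanning several years.
dates = pd.date_range("2013-01-01", "2016-06-01", freq="MS")
df = pd.DataFrame({"date": dates,
                   "pb": np.random.uniform(20000, 26000, len(dates))})

# Pivot: months down the side, years across the top.
pivot = pd.pivot_table(
    df,
    index=df["date"].dt.month,
    columns=df["date"].dt.year,
    values="pb",
    aggfunc="sum",
)
print(pivot)
```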
stackexchange | llm_judgeable_groundtruth_similarity | 30246685 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am extremely new to lisp, had previous experience with functional programming (Haskell, SML). Why is this code returning 14 , and not 10 (i.e. 1 + 2y + 3 + 1 )? (defvar x 1)(defun g (z) (+ x z))(defun f (y) (+ (g 1) (let ((x (+ y 3))) (g (+ y x)))))(f 2)
Now provide the response and nothing else.
| Because you used (DEFVAR X 1) , which declares X to be a global special variable. This then causes every other later binding of X to use dynamic binding: here in (LET ((X ... .

Style & Convention in Lisp

Convention in Lisp: use *X* instead of X for special variables.

(defvar *x* 1)

Your code then is:

(defvar *x* 1)            ; global special variable *X*

(defun g (z) (+ *x* z))   ; use special variable *X*

(defun f (y)
  (+ (g 1)
     (let ((x (+ y 3)))   ; lexical binding of X
       (g (+ y x)))))     ; use lexical binding of X

run:

? (f 2)
10
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30246685', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2761810/']} | jdg_382471
stackexchange | llm_judgeable_groundtruth_similarity | 8514017 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to take advantage of webapp2's new features for localization that also has locale-specific formatting for time and currency. Django has a good function called get_language_from_request that I made use of before I completely migrated to webapp2 and I now use the i18n from webapp2 instead and I can switch between localizations that I write with gettext and compile to files named messages.mo that my app can read and display. I've then identified and prioritized the following ways to get user's language:1. HTTP GET eg. hl=pt-br for Brazilian Portuguese2. HTTP SESSION variable I call i18n_language3. Cookie I should set and get but I don't know exactly how4. HTTP Header I could get and here I don't know exactly either and I'm looking how djnango does it with a convenient get_language_from_request that I used to use and now I've quit importing django and I still want this functionality for my now webapp2-based code. def get_language_from_request(self, request): """ Analyzes the request to find what language the user wants the system to show. If the user requests a sublanguage where we have a main language, we send out the main language. """ if self.request.get('hl'): self.session['i18n_language'] = self.request.get('hl') return self.request.get('hl') if self.session: lang_code = self.session.get('i18n_language', None) if lang_code: logging.info('language found in session') return lang_code lang_code = Cookies(self).get(LANGUAGE_COOKIE_NAME) if lang_code: logging.info('language found in cookies') return lang_code accept = os.environ.get('HTTP_ACCEPT_LANGUAGE', '') for accept_lang, unused in self.parse_accept_lang_header(accept): logging.info('accept_lang:'+accept_lang) lang_code = accept_lang return lang_code I see the django code is available but I don't know how much the i18n from webapp2 does for instance do I have to take care of fallback for languages such as pt-br should fall back to pt if there is no .mo localization for pt-br and similar for other dialects. Actually setting the language I can do with i18n.get_i18n().set_locale(language) I ask for your help to make the priority for the different ways to get user language and I would also like to know your ideas how to go on with the implementation. Or do you think that I can do with just using session variable and not be this thorough about a "complete" solution since I anyway mostly fix the language for a geographical usage where my only actual used translations now are Brazilian Portuguese and English but I want it well prepared to switch to Spanish and Russian and other languages also, therefore I'd like to be able to switch to user language and at least save it to the webapp2 session and know what you think about using also cookie and header to get user language. The original code I used to have si from django and looks like this and I can't use it anymore because it's locked to django.mo files and specific for django def get_language_from_request(request): """ Analyzes the request to find what language the user wants the system to show. Only languages listed in settings.LANGUAGES are taken into account. If the user requests a sublanguage where we have a main language, we send out the main language. 
""" global _accepted from django.conf import settings globalpath = os.path.join(os.path.dirname(sys.modules[settings.__module__].__file__), 'locale') supported = dict(settings.LANGUAGES) if hasattr(request, 'session'): lang_code = request.session.get('django_language', None) if lang_code in supported and lang_code is not None and check_for_language(lang_code): return lang_code lang_code = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME) if lang_code and lang_code not in supported: lang_code = lang_code.split('-')[0] # e.g. if fr-ca is not supported fallback to fr if lang_code and lang_code in supported and check_for_language(lang_code): return lang_code accept = request.META.get('HTTP_ACCEPT_LANGUAGE', '') for accept_lang, unused in parse_accept_lang_header(accept): if accept_lang == '*': break # We have a very restricted form for our language files (no encoding # specifier, since they all must be UTF-8 and only one possible # language each time. So we avoid the overhead of gettext.find() and # work out the MO file manually. # 'normalized' is the root name of the locale in POSIX format (which is # the format used for the directories holding the MO files). normalized = locale.locale_alias.get(to_locale(accept_lang, True)) if not normalized: continue # Remove the default encoding from locale_alias. normalized = normalized.split('.')[0] if normalized in _accepted: # We've seen this locale before and have an MO file for it, so no # need to check again. return _accepted[normalized] for lang, dirname in ((accept_lang, normalized), (accept_lang.split('-')[0], normalized.split('_')[0])): if lang.lower() not in supported: continue langfile = os.path.join(globalpath, dirname, 'LC_MESSAGES', 'django.mo') if os.path.exists(langfile): _accepted[normalized] = lang return lang return settings.LANGUAGE_CODE Is it OK to do this for every request? And I think I should also set the header to the language self.response.headers['Content-Language'] = language According to my expectation I can take some function directly from django if I choose to use the http headers but I don't understand what it does so maybe you can explain this code for me from django: def parse_accept_lang_header(lang_string): """ Parses the lang_string, which is the body of an HTTP Accept-Language header, and returns a list of (lang, q-value), ordered by 'q' values. Any format errors in lang_string results in an empty list being returned. """ result = [] pieces = accept_language_re.split(lang_string) if pieces[-1]: return [] for i in range(0, len(pieces) - 1, 3): first, lang, priority = pieces[i : i + 3] if first: return [] priority = priority and float(priority) or 1.0 result.append((lang, priority)) result.sort(lambda x, y: -cmp(x[1], y[1])) return result Thank you Update I found that I couldn't use sessions in the initialize function of the request handler, maybe it's because the session object is not yet created. So I put the code for getting the language from the session i the BaseHandler render function and it appears to work. It would also be nice to consider the headers or cookie value.
Now provide the response and nothing else.
| Here's what I do - I have a base request handler that all my request handlers inherit from, then in here I have a constant that contains the available languages, and I override the init method to set the language on each request:

import webapp2
from webapp2_extras import i18n

AVAILABLE_LOCALES = ['en_GB', 'es_ES']

class BaseHandler(webapp2.RequestHandler):
    def __init__(self, request, response):
        """ Override the initialiser in order to set the language. """
        self.initialize(request, response)

        # first, try and set locale from cookie
        locale = request.cookies.get('locale')
        if locale in AVAILABLE_LOCALES:
            i18n.get_i18n().set_locale(locale)
        else:
            # if that failed, try and set locale from accept language header
            header = request.headers.get('Accept-Language', '')  # e.g. en-gb,en;q=0.8,es-es;q=0.5,eu;q=0.3
            locales = [locale.split(';')[0] for locale in header.split(',')]
            for locale in locales:
                if locale in AVAILABLE_LOCALES:
                    i18n.get_i18n().set_locale(locale)
                    break
            else:
                # if still no locale set, use the first available one
                i18n.get_i18n().set_locale(AVAILABLE_LOCALES[0])

First I check the cookie, then the header, finally defaulting to the first available language if a valid one wasn't found. To set the cookie, I have a separate controller that looks something like this:

import base

class Index(base.BaseHandler):
    """ Set the language cookie (if locale is valid), then redirect back to referrer """
    def get(self, locale):
        if locale in self.available_locales:
            self.response.set_cookie('locale', locale, max_age=15724800)  # 26 weeks' worth of seconds

        # redirect to referrer or root
        url = self.request.headers.get('Referer', '/')
        self.redirect(url)

So a URL like www.example.com/locale/en_GB would change the locale to en_GB, setting the cookie and returning to the referrer (this has the advantage of being able to switch languages on any page, and have it stay on the same page). This method does not take into account partial matches for locales in the header, for instance "en" instead of "en_GB", but seeing as the list of languages I have enabled in the app is fixed (and the locale change URLs are hard-coded in the footer), I'm not too worried about it. HTH
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8514017', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/108207/']} | jdg_382472
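One piece the answer above leaves implicit is how the /locale/en_GB URL reaches that Index handler. A hypothetical route registration (my sketch; the module name locale_handler is assumed, not from the original answer) could look like this:

```python
import webapp2

import locale_handler  # hypothetical module containing the Index handler shown above

app = webapp2.WSGIApplication([
    # '<locale>' is captured from the URL and passed to Index.get(self, locale)
    webapp2.Route('/locale/<locale>', handler=locale_handler.Index),
    # ... the rest of the application's routes, all subclassing the BaseHandler above
], debug=False)
```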
stackexchange | llm_judgeable_groundtruth_similarity | 1306301 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If I rotate a rectangle by 45°, does it stay rectangular or become something else? I mean do 90° angles stay 90°? I am asking this question because I have some results where the rotated rectangle becomes not so rectangular ... I think I have a problem. Problem : I think the problem comes from scaling.... I draw the resulting data after rotation in another image, and I have got the rectangle correctly rotated. Maybe it is because I used imagesc to draw the background or axis image in MATLAB...?
Now provide the response and nothing else.
| If you are not plotting at "true aspect ratio", i.e., if the horizontal and vertical scales of your plot are not identical, a rotation of the plane will not appear on the screen as an isometry; circles will appear to be non-circular ellipses, rectangles will appear as squashed parallelograms as they rotate, and so forth. Here's a rotating square plotted at an aspect ratio of $4:1$: | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1306301', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_382473
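To make the point above quantitative (an added elaboration, not part of the original answer): if the display applies an anisotropic scaling $S=\operatorname{diag}(a,b)$ to already-rotated coordinates, what is drawn is
$$S R_\theta=\begin{pmatrix}a\cos\theta & -a\sin\theta\\ b\sin\theta & b\cos\theta\end{pmatrix},$$
and the two displayed edge directions (the columns $c_1,c_2$) satisfy
$$\langle c_1,c_2\rangle=(b^2-a^2)\sin\theta\cos\theta,$$
which vanishes for all $\theta$ only when $a=b$. So the rotated rectangle keeps its $90^\circ$ angles mathematically, but on a plot with unequal axis scales (e.g. MATLAB without axis equal or axis image) it is displayed as a parallelogram except at multiples of $90^\circ$.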
stackexchange | llm_judgeable_groundtruth_similarity | 6901028 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working with a page and I am getting Page.IsValid = false, and I am trying to determine which validation control is causing the problem.
Now provide the response and nothing else.
| In code (Page_Load), you can do this (per MSDN: http://msdn.microsoft.com/en-US/library/dh9ad08f%28v=VS.80%29.aspx ):

If (Me.IsPostBack) Then
    Me.Validate()
    If (Not Me.IsValid) Then
        Dim msg As String
        ' Loop through all validation controls to see which
        ' generated the error(s).
        Dim oValidator As IValidator
        For Each oValidator In Validators
            If oValidator.IsValid = False Then
                msg = msg & "<br />" & oValidator.ErrorMessage
            End If
        Next
        Label1.Text = msg
    End If
End If

In the markup, you can put "text" on your validator (like an asterisk...), or use a ValidationSummary control (which requires an error message on your validator)...
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6901028', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/444772/']} | jdg_382474
stackexchange | llm_judgeable_groundtruth_similarity | 411309 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it safe to store email on a cloud based VPS host? I know its not 100% like the unplugged server buried underground and that the Host has a big deal to do with it but, in the broader sense of avoiding snooping and data mining, how do people protect their data? I'm sure not everyone uses dedicated servers~ Is it even common to run email servers on a VPS? I DO want to be able to tell my clients that their data is relatively secure (assuming I take the proper precautions)~ I'm currently setting up email on a VPS with Ubuntu Server installed. I'm thinking of encrypting the home folder of the email service user but, I'm wondering if accessing the encrypted data will make an imap/smtp email server slow~
Now provide the response and nothing else.
| The data is as safe as however much you trust the provider to keep it safe. Odds are that encrypting it on disk doesn't matter, because many common encryption schemes keep the encryption key in RAM. Anyone who gets a shell on your VM (such as a malicious IT person on your provider's side, or even a hacker) could simply log in and extract email. More importantly, if your provider doesn't make regular backups and your server goes down for whatever reason, you could be down for hours (bad) or lose all of your clients' email (career-stunting). You'd have to make your own backups and secure those elsewhere. "Secure" may mean something different to your clients than it does to you. This sort of thing is generally detailed carefully in a service level agreement many pages long, so if your customers are holding you to an SLA, use that as your security guideline. As a rough guideline though, the setup you're describing is not PCI-compliant. In short, your data will not be very secure, you could lose a lot of data, and your customers' email is apparently worth up to (but not exceeding) $49.95 a month to you. Buy your own box, keep it under lock and key. Caveats on backups and malicious physical access still apply. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/411309', 'https://serverfault.com', 'https://serverfault.com/users/104865/']} | jdg_382475 |
stackexchange | llm_judgeable_groundtruth_similarity | 27450172 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
New to Python world. What is the difference between "Python Implementation" vs. "Python distribution" vs. Python itself? for example: I can download Python itself (Python 3.4.2) from python.org/downloads/release/python-342/ OR there is an "implementation" "nicknamed CPython" I can get from python.org/download/alternatives/ OR I can use a "distribution" from wiki.python.org/moin/PythonDistributions What are the differences? Can I use all at the same time? This lingo is not really elaborated upon.
Now provide the response and nothing else.
| "Python itself" is sort of a platonic ideal. Roughly speaking it is a specification, although there is no "real" specification as there is for some other languages like C. The behavior of Python the language is defined by the documentation available on python.org. Anything that does what that documentation says it's supposed to do would "count" as being Python. An implementation of Python is an actual program that provides that behavior. The most common one is CPython, which is what you download from python.org. The other implementations found on the "alternatives" page you mentioned are other programs that also "count as Python" in that they give the documented behavior (sometimes with some caveats), but are written independently and may, for instance, run on other platforms, run faster, run slower, do things differently under the hood, etc. A distribution of Python is a bundle that contains an implementation of Python along with a bunch of libraries or tools. In theory, a distribution of Python could use any implementation, although all the ones I know of use CPython. The download from python.org could also be considered a distribution (a minimal distribution that doesn't contain any "extras"). You could think of it by analogy to some sort of physical machine, say an oven. "Python itself" or "Python the language" is like a description of what that machine does: it heats up, you can turn a dial to set the temperature, etc. You can't use the description to cook anything; you have to build an actual oven first. An implementation of Python would be like an actual oven that you built; as long as it does what an oven is supposed to do, it is an oven, but there could be many ways of building an oven that does the right things (wood-fired, gas-fired, solar-powered, etc.). A distribution would be an oven that comes with other things that you might often want to use along with it --- like maybe a fully-equipped kitchen that includes pots, pans, spatula, and mixing bowls as well as the oven itself. You can indeed use all three at the same time, and in some sense you kind of have to. There is no way to use "Python the language" without using an implementation of it; it would be like saying you want to "drive a car" without driving any particular kind of car. Likewise, you can't really use an implementation without getting it as part of a distribution (at least if you consider the "bare" python.org downloads to be a distribution too). In practice, most people using Python for practical purposes are using CPython (the de facto standard implementation), so the real choice is among different distributions of that. Among those, you mainly choose based on what extras you want to come with it. You can choose to use the "bare" distribution from python.org and then install packages yourself, or if you plan on doing scientific/analytics computing, you could choose one of the distributions geared towards that (e.g., Anaconda or Canopy). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27450172', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4355232/']} | jdg_382476 |
stackexchange | llm_judgeable_groundtruth_similarity | 846 |
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have yet to find anyone that knows of any benefits of low-profile tires. All of the pros that I've asked have said there is no benefit. Naturally, the number of pros that I have available to ask is limited so I'm wondering what a wider audience will say.
Now provide the response and nothing else.
| Naturally, the number of pros that I have available to ask is limited so I'm wondering what a wider audience will say. The nice thing about a question like this is that it isn't up for opinions. The cost / benefit ratios can be measured. Grassroots Motorsports routinely does this sort of analysis. For example, in Speed Holes by Per Schroeder (from the Feb. 2008 issue), Per compares a variety of plus-sizes (lower and lower profiles) on the test car (their BMW project car at the time). Here are some choice quotes that address your pro / con question above (snipped to emphasize but the whole article is online): Here's a shocker for those hailing from the bigger is better camp: The 18s didn't win this little comparo. For one, the 18-inch wheel and tire package yielded a real degradation in ride quality on our road loop. ... We'd steer away from a setup this radical on a daily-driven car. ... Interestingly, the 18-inch combination felt more stable around the steady-state skidpad than either of the other combinations. The short, short sidewalls could have been the reason, as there's simply less vertical rubber to roll over when facing high g-loads. There was a noticeable reduction in braking performance, unfortunately, as we could feel the extra mass fighting the car. ... To summarize some of the above: lower profile generally trades ride quality for turning ability, sometimes sacrificing braking due to the increase in wheel mass. Going too low, though, can push you past the point of losing in all categories. | {} | {'log_upvote_score': 5, 'links': ['https://mechanics.stackexchange.com/questions/846', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/555/']} | jdg_382477
stackexchange | llm_judgeable_groundtruth_similarity | 1115924 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a TextBlock inside a limited-size control. If the text is too long to fit into the control, I'd like to show a tooltip with full text. This is a classic behavior you surely know from many apps. I tried using a Converter to convert TextBlock width into Tooltip's Visibility. <GridViewColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding Text}"> <TextBlock.ToolTip> <ToolTip DataContext="{TemplateBinding Content}" Visibility="{Binding Converter={StaticResource visConvert}}"> <TextBlock Text="{Binding Text}"></TextBlock> </ToolTip> </TextBlock.ToolTip> </TextBlock> </DataTemplate></GridViewColumn.CellTemplate> The problem is that in the Converter: public object Convert(object value, ... 'value' is the DataBound item. I'd like the 'value' to be the TextBlock, to observe its Width, and compare it to the GridViewColumn.Width.
Now provide the response and nothing else.
| I figured it out, the Tooltip has PlacementTarget property that specifies the UI element that has the Tooltip. In case anyone needs it: <TextBlock Text="{Binding Text}"> <TextBlock.ToolTip> <ToolTip DataContext="{Binding Path=PlacementTarget, RelativeSource={x:Static RelativeSource.Self}}" Visibility="{Binding Converter={StaticResource toolVisConverter}}"> <TextBlock Text="{Binding Text}"/> <!-- tooltip content --> </ToolTip> </TextBlock.ToolTip></TextBlock> And then write a Converter that converts TextBlock to Visibility (based on TextBlock width). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1115924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/90998/']} | jdg_382478 |
stackexchange | llm_judgeable_groundtruth_similarity | 35793 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
It is clear that each maximal ideal in the ring of continuous functions over $[0,1]\subset \mathbb R$ corresponds to a point, and vice versa. So, for each ideal $I$ define $Z(I) =\{x\in [0,1]\,|\,f(x)=0, \forall f \in I\}$. But the map $I\mapsto Z(I)$ from ideals to closed sets is not an injection! (Consider the ideal $J(x_0)=\{f\,|\,f(x)=0, \forall x\in\hbox{ some closed interval which contains }x_0\}$.) How can we describe the ideals in $C([0,1])$? Is it true that prime ideals are maximal in this ring?
Now provide the response and nothing else.
| Here is a way to construct a non-maximal prime ideal: consider the multiplicative set $S$ of all non-zero polynomials in $C[0,1]$. Use Zorn's lemma to get an ideal $P$ that is disjoint from $S$ and is maximal with this property. $P$ is clearly prime (for this you only need $S$ to be multiplicatively closed). On the other hand, $P$ cannot be any one of the maximal ideals, since it does not contain $x-c$ for any $c \in [0,1]$. | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/35793', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4298/']} | jdg_382479
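For completeness, the standard argument that the ideal constructed above is prime (left implicit in the answer): suppose $fg\in P$ but $f\notin P$ and $g\notin P$. By maximality of $P$ among ideals disjoint from $S$, both $P+(f)$ and $P+(g)$ meet $S$, say
$$s_1=p_1+af\in S,\qquad s_2=p_2+bg\in S,$$
with $p_1,p_2\in P$ and $a,b$ in the ring. Then
$$s_1s_2=p_1p_2+p_1bg+p_2af+ab\,fg\in P,$$
since every term lies in $P$ (the last because $fg\in P$). But $S$ is multiplicatively closed, so $s_1s_2\in S\cap P=\emptyset$, a contradiction. Hence $P$ is prime.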
stackexchange | llm_judgeable_groundtruth_similarity | 9110472 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently starting to create an application that would profit a lot from C# 5's async-await feature. But I'm not sure which version of VS and of the async runtime to use. Looking at OS popularity charts, I'll need to support Windows XP for another three years or so. It looks like .net 4.5 runs only on newer versions of Windows, so I need to target .net 4.0. The development machines use Windows 7, so using a newer version of VS is not a problem. Now I need to first choose a compiler for doing this: VS2010 with AsyncCTP VS2012 Preview (and final once it arrives), setting the target to .net 4.0 Mono (Looks like 2.12 has async-await, I prefer/am used to VS over MonoDevelop as IDE) Which one has fewer code-gen bugs? Looking at Jon Skeet's blog the VS2012 Preview uses a never code-generator than the CTP. And more importantly which runtime to use? Does VS2012 contain a redistributable async runtime for use with .net 4? I managed to compile code, with the preview, by referencing the AsyncCTP runtime. But since the CTP has strange licensing conditions, that doesn't look like a good long term solution. Or should I use a third party implementation? Perhaps mono has one? For distributing the library I prefer simply putting the dll in the same directory as the application, instead of some kind of installer. I'd also like it if my binaries would work without changes on mono+Linux/MacOS. So the runtime should either be compatible with whatever mono (2.12 probably) has built in, or allow use on non windows OSs.
Now provide the response and nothing else.
| Microsoft released the Async Targeting Pack (Microsoft.Bcl.Async) through Nuget as a replacement for the AsyncCTP. You can read more about it here: http://blogs.msdn.com/b/bclteam/archive/2013/04/17/microsoft-bcl-async-is-now-stable.aspx . You can read about the previous version here: http://blogs.msdn.com/b/lucian/archive/2012/04/24/async-targeting-pack.aspx . As this pack is officially supported, I now believe the best option for targeting XP + async would be using Visual Studio 2012 + C#5 + Async Targeting Pack. If you feel the need to target .NET 3.5 though, you can still use (my) AsyncBridge for .NET 3.5 . | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9110472', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/445517/']} | jdg_382480 |
stackexchange | llm_judgeable_groundtruth_similarity | 330303 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to create a stored procedure as

CREATE PROCEDURE Dummy
    @ID INT NOT NULL
AS
BEGIN
END

Why is it not possible to do something like this?
Now provide the response and nothing else.
| You could check for its NULL-ness in the sproc and RAISERROR to report the state back to the calling location.

CREATE proc dbo.CheckForNull
    @i int
as
begin
    if @i is null
        raiserror('The value for @i should not be null', 15, 1) -- with log
end
GO

Then call:

exec dbo.CheckForNull @i = 1

or

exec dbo.CheckForNull @i = null
| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/330303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21586/']} | jdg_382481
stackexchange | llm_judgeable_groundtruth_similarity | 33035922 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Gradle project in Eclipse IDE and I usually use the option gradle run to run my Java application. I have an error in my Java code and I want to debug it, but when I execute gradle run , the debugger doesn't stop in the breakpoints. In the "Debug as" menu, I don't have anything like gradle debug . How can I debug my application?
Now provide the response and nothing else.
| Even though the accepted answer should work, you can achieve it in a much easier way. Just run gradle run --debug-jvm . This starts the application in remote debug mode, and you can attach with any remote debugger, e.g., Eclipse, on port 5005 . Assuming you use Eclipse as IDE: In Eclipse, go on your Project -> Debug as... -> Debug Configuration -> Remote Java Application. As host set localhost , as port 5005 , and you are free to go. For more information see the official Gradle Java plugin doc regarding testing. [...] can also be enabled at invocation time via the --debug-jvm task option (since Gradle 1.12). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33035922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5300837/']} | jdg_382482 |
stackexchange | llm_judgeable_groundtruth_similarity | 31847 |
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
According to Wiktionary , שלמה (pronounced /ʃloˈmo/ in Modern Hebrew) is the Hebrew version of Solomon. The pronunciation seems to follow reasonably well from the spelling, and as far as I can tell, it was spelled the same way in Biblical Hebrew. The descendants look quite a bit different from the Hebrew pronunciation: Ancient Greek: Σολομών (Solomṓn), Σαλομών (Salomṓn), Σαλωμών (Salōmṓn), Σολομῶν (Solomôn), Σολωμών (Solōmṓn) → Gothic: (saulaumōn) Greek: Σολομών (Solomón) → Latin: Solomon → English: SolomonClassical Syriac: ܫܠܝܡܘܢ (šlemūn) Arabic: سليمان (Sulaymān) → English: Suleiman → Turkish: Süleyman It seems reasonable to me that the /ʃ/ shifted to a /s/, and most of the languages added a vowel between the /s/ and the /l/, and that Syriac and Arabic changed the vowels. However, I don't understand where the /n/ at the end came from. I'm not aware of any other Biblical names that ended in an open vowel in Hebrew and shifted toward having an n.
Now provide the response and nothing else.
| For the vowels, pay close attention to the nəquddoth (vowel dots)! Between the shin and the lamedh is a shəwa mark; sometimes this indicates an extra-short vowel, sometimes no vowel at all. But historically, shəwa was always pronounced ( shəwa na ) if it came after the first consonant in the word. So at the time of the Septuagint, the name was pronounced something like Shəlōmō . When Greek-speakers tried to transcribe this word for the LXX, they ran into some difficulties. Ancient Greek didn't have [ʃ] , only [s] . So they wrote the first letter as a plain sigma ("s"). Similarly, Ancient Greek didn't have any extra-short vowels corresponding to shəwa na , so they made do with the letters they had: in this case, it was sometimes transcribed with a short alpha ("a"), sometimes with an omicron ("o"). You'll also sometimes see shəwa transcribed with epsilon ("e"), depending on the word; this is closer to the modern pronunciation. Finally, in Ancient Greek, it was extremely rare for names to end in -ō . It was much more common for them to end in -ōn (Glaucōn, Cleōn, Zēnōn). So the translators stuck a nu ("n") on the end to make the name look more like a name. This is also why English "Moses" ends in "s" when the Hebrew Mōshe (מֹשֶׁה) doesn't: -ē is a feminine ending in Greek, not a masculine one, so the translators changed it to the very common and masculine -ēs (Achillēs, Sōcratēs, Diogēnēs), and this version persisted. One might expect the Latin of the Vulgate to then remove this -n again, since the same "type" of name in Latin ended in plain -o (Cicero, Scipio, Piso). That's why we now talk about "Zeno's Paradox" instead of "Zenon's", even though he was Greek and spelled his own name with an -n . But the Vulgate tends to follow the Greek with extreme precision where names are concerned, so "Solomon" (and "Moses") persisted. P.S. Chromium fails beautifully when it tries to render the question title… | {} | {'log_upvote_score': 5, 'links': ['https://linguistics.stackexchange.com/questions/31847', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/9425/']} | jdg_382483 |
stackexchange | llm_judgeable_groundtruth_similarity | 45671 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Consider the kernel $K_n$ of the natural group homomorphism from the $n$-th braid group to the symmetric group. Then one can delete the $m$-th strand. This is a well-defined homomorphism $d_m:K_n\rightarrow K_{n-1}$. So is there, for every $n\in \mathbb{N}$, a braid $1\neq b\in K_n$ with $d_m(b)=0$ for all $m$? This is clearly true for $n=2$, as $K_1$ is trivial, and it is also true for $n=3$ (the "standard" braid does the job). What about higher $n$? Is there a nice construction that works for every $n$?
Now provide the response and nothing else.
| Certain elements in the $n-1$ st term of the lower central series of the pure braid group should work. The pure braid group is generated by generators $\beta_{i,j}$ where the $i$ th strand pushes a finger over the intervening strands and hooks with the $j$ th strand.Then, when $n=3$ , the commutator $[\beta_{1,2},\beta_{1,3}]$ is Brunnian in in your sense. For $n=4$ you can consider $[\beta_{1,2},[\beta_{1,3},\beta_{1,4}]]$ , etc. You need to make sure your commutator includes every $\beta_{1,k}$ . Something like $[\sigma_1^2,[\sigma_2^2,\sigma_3^2]]$ would also work. The reason this works is that deleting a strand from the braid kills at least one generator involved in the iterated commutator, so that it collapses to $1$ . That's why you need to include a generator $\beta_{i,j}$ that involves each strand. (This has been edited to remove inaccuracies of previous versions.) | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/45671', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3969/']} | jdg_382484 |
stackexchange | llm_judgeable_groundtruth_similarity | 42336115 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have used php artisan make:auth it generates the register view and /register route. But I need to pass a variable to this register view: <label>Region:</label><select name="region" id="region" class="form-control" ><option>--Select a Region--</option> @foreach($region as $reg) <option value="{{$reg->region_id}}">{{$reg->region_name}}</option> @endforeach </select><br> Something like: public function register(){ $region=Region::all(); return view('auth.register')->with('region',$region);} But where is this kind of method?
Now provide the response and nothing else.
| You can achieve that in two ways. Overriding RegistersUsers Laravel's default auth uses the RegistersUsers trait on RegisterController to render the view. What you can do is simply override the function found on Illuminate\Foundation\Auth\RegistersUsers on RegisterController like the following /** * Show the application registration form. * * @return \Illuminate\Http\Response */public function showRegistrationForm(){ $region=Region::all(); return view('auth.register', compact('region'));} Now the above code will override the trait and use showRegistrationForm from the controller. Modifying Routes When you do php artisan make:auth , it will add Auth::routes() to your web.php file. Remove that and add the following: // Authentication Routes...Route::get('login', 'Auth\LoginController@showLoginForm')->name('login');Route::post('login', 'Auth\LoginController@login');Route::post('logout', 'Auth\LoginController@logout')->name('logout');// Registration Routes...Route::get('register', 'Auth\RegisterController@showRegistrationForm')->name('register');Route::post('register', 'Auth\RegisterController@register');// Password Reset Routes...Route::get('password/reset', 'Auth\ForgotPasswordController@showLinkRequestForm')->name('password.request');Route::post('password/email', 'Auth\ForgotPasswordController@sendResetLinkEmail')->name('password.email');Route::get('password/reset/{token}', 'Auth\ResetPasswordController@showResetForm')->name('password.reset');Route::post('password/reset', 'Auth\ResetPasswordController@reset'); Now, on the GET route for register, change RegisterController@showRegistrationForm to your own method. But do not simply name it register; use something like getRegisterForm instead, because the register function handles the post-registration logic. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42336115', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5928946/']} | jdg_382485
stackexchange | llm_judgeable_groundtruth_similarity | 1169122 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on this problem $^{(1)}$ on Lebesgue measurability of composition of Lebesgue measurable function and a continuous function: Show that $g \circ f$ is Lebesgue measurable, if $f: X \to \mathbb R$ is Lebesgue measurable and if $g: \mathbb R \to \mathbb R$ is continuous. Prior to this posting, I did lots of research online. First, I found this 6-year-old solution to a problem very similar to mine here at MathHelpForum, but on closer inspection I think not only the author left lots of gap, but also I am not sure if this is a correct solution, and on top of it I do not really understand it. And then internally in MSE, I found this 2012's posting and also this 2013's posting , which are similar but not exactly the same. In my naive logic, I am thinking of first proving that $g$ is measurable because of its continuity, and then since composition of 2 measurable functions is measurable, therefore $g \circ f$ is measurable. But my logic is unreliable, please help me with the right direction and also steps to solve this question. Thank you very much for your time and help. Oops! I forget to include definition to Lebesgue measurability and Lebesgue measurable function until @PhoemueX brought it up. The problem with this text is that it does not have one nice, stand-alone paragraph definition. According to its index, it is written here and there on pages 21, 27 and 39. Here is what I managed to piece them together from those pages: Let $X = \mathbb R$ and let $\mathcal C$ be the collection of intervals of the form $(a, b]$ ... Let $\mathcal l(I) = b - a$ if $I = (a, b]$ ... Define $\mu^*$ as an outer measure... however, that if we restrict $\mu^*$ to a $\sigma$ -algebra $\mathcal L$ which is strictly smaller than the collection of all subsets of $\mathbb R$ , then $\mu^*$ will be a measure on $\mathcal L$ . That measure is what is known as Lebesgue measure. The $\sigma$ -algebra $\mathcal L$ is called the Lebesgue $\sigma$ -algebra .... A set is Lebesgue measurable if it is in the Lebesgue $\sigma$ -algebra. If $X$ is a metric space, $\mathcal B$ is the Borel $\sigma$ -algebra, and $f: X \to \mathbb R$ is measurable with respect to $\mathcal B$ , we say $f$ is Borel measurable. If $f : \mathbb R \to \mathbb R$ is measurable with respect to the Lebesgue $\sigma$ -algebra, we say $f$ is Lebesgue measurable function . Footnotes: (1) Richard F. Bass' Real Analysis , 2nd. edition, chapter 5: Measurable Functions, Exercise 5.6, page 44.
Now provide the response and nothing else.
| Edit : following the comment of kahen below, I modified my answer: Lebesgue-measurability of a function $h$ is measurability of $h\colon (X,\mathcal{L}_X)\to (Y,\mathcal{B}_{Y})$, not $h\colon (X,\mathcal{L}_X)\to (Y,\mathcal{L}_{Y})$) You have that $f\colon (X,\mathcal{L}_X)\to (\mathbb{R},\mathcal{B}_\mathbb{R})$ is Lebesgue -measurable (for the $\sigma$-algebras $\mathcal{L}_X$, $\mathcal{B}_\mathbb{R}$). As $g\colon \mathbb{R}\to \mathbb{R}$ is continuous, it is Borel -measurable ($g\colon (\mathbb{R},\mathcal{B}_\mathbb{R})\to (\mathbb{R},\mathcal{B}_\mathbb{R})$ is measurable for the $\sigma$-algebras $\mathcal{B}_\mathbb{R}$, $\mathcal{B}_\mathbb{R}$). You want to show that $g\circ f$ is Lebesgue -measurable. i.e. $g\circ f \colon (X,\mathcal{L}_X)\to (\mathbb{R},\mathcal{B}_\mathbb{R})$ is measurable. Take any $B\in\mathcal{B}_\mathbb{R}$: you need to show that $(g\circ f)^{-1}(B)\in \mathcal{L}_X$. By measurability of $g$, you have that for since $B\in\mathcal{B}_\mathbb{R}$, $B^\prime = g^{-1}(B)\in \mathcal{B}_\mathbb{R}$. By measurability of $f$, this implies that $f^{-1}(B^\prime)\in \mathcal{L}_X$, i.e. $(g\circ f)^{-1}(B)\in \mathcal{L}_X$. This shows that $g\circ f$ is measurable for the $\sigma$-algebras $\mathcal{L}_X$, $\mathcal{B}_\mathbb{R}$ (i.e., $g\circ f\colon (X, \mathcal{L}_X)\to (\mathbb{R}, \mathcal{B}_\mathbb{R})$ is measurable), as wanted. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1169122', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/80219/']} | jdg_382486 |
stackexchange | llm_judgeable_groundtruth_similarity | 6719797 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Rails route that takes stock ticker symbols as the :id . feeds/AMZN will return a page for Amazon, and feeds/AMZN.csv will return a CSV representation of the same data. But I also need to accommodate stocks like VIA.B (Viacom) so that both of these routes work: feeds/VIA.B (html) and feeds/VIA.B.csv (csv). Is this possible? How would I set the routing up?
Now provide the response and nothing else.
| I ran into this while patching the RubyGems API recently (trying to access the flickr.rb using the API ( /api/v1/gems/flickr.rb.json ) was not working). The trick was to supply the route with a regexp to handle the :id parameter, and then specify valid :format . Keep in mind that the :id regexp needs to be "lazy" (must end with a question mark), otherwise it will eat the .csv and assume that it's part of the id. The following example would allow JSON, CSV, XML, and YAML formats for an id with a period in it: resources :feeds, :id => /[A-Za-z0-9\.]+?/, :format => /json|csv|xml|yaml/ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6719797', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232417/']} | jdg_382487 |
stackexchange | llm_judgeable_groundtruth_similarity | 7641 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to practice randomized algorithms. Let's call a CNF formula over n variables an s-formula if it is either unsatisfiable or it has at least $\frac{2^n}{n^{10}}$ satisfying assignments. I would like your help with showing a randomized algorithm for checking the satisfiability of s-formulas that outputs the correct answer with probability at least $\frac{2}{3}$. I'm not really sure how to prove it. The first thing that comes to mind is this: let's accept every input with probability $\frac{2}{3}$. Then if the input is in the language, it was accepted either in the initial toss ($\frac{2}{3}$) or it was not, and then the probability to accept it is $\frac{1}{3}\cdot(\text{probability to accept})$, which is bigger than $\frac{2}{3}$. Is this the way to do it, or should I somehow use the Chernoff inequality, which I'm not sure how to apply?
Now provide the response and nothing else.
| Basic idea: Pick a random assignment and check it. Then, repeat it many times. Even if one of the assignments satisfies the formula you answer "YES" (otherwise, you answer "NO") We know that the input formula is "simple": in plain words it means that either it is not-satisfiable or it has "many" satisfying assignments. If it is not satisfiable - no matter what assignment(s) you choose, it will never satisfy the formula. Therefore, the above algorithm always answers correctly for such inputs, and from this point and on we consider only satisfiable inputs. If the input is satisfiable, what is the probability that a random assignment satisfies it? Let $\varphi$ be a CNF over $n$ variables with more than $2^n/n^{10}$ satisfying assignments, then$$ \Pr_{x\sim U}[ \varphi(x)=T] \ge \frac{2^n/n^{10}}{2^n}$$ Now we repeat it $k$ times (you'll have to pick $k$ carefully. Let's do it later).Each time we pick a random $x$. Let $E_i$ be the event that in the $i$-th instance $\varphi$ is satisfied. What is the probability that we find out at least one satisfying assignment after $k$ tries? It is $\Pr[\bigcup_{i=1}^k E_i]$. We know that $\Pr[E_i] \ge 1/n^{10}$, and you can use standard linearity (of independent events) to work it out. Final step - find the (minimal) $k$ that makes $\Pr[\cup_k E_i] \ge 2/3$ as required. Bonus question: how low can you make $k$ (and make your algorithm more efficient) if you analyze the above using Chernoff's inequality? | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/7641', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/1589/']} | jdg_382488 |
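To spell out the last step as a worked bound (the answer leaves this calculation to the reader): since the $k$ trials are independent and $\Pr[E_i]\ge 1/n^{10}$,
$$\Pr\left[\bigcup_{i=1}^{k}E_i\right]=1-\prod_{i=1}^{k}\left(1-\Pr[E_i]\right)\ge 1-\left(1-\frac{1}{n^{10}}\right)^{k}\ge 1-e^{-k/n^{10}},$$
so choosing $k\ge n^{10}\ln 3$ already gives success probability at least $1-\tfrac{1}{3}=\tfrac{2}{3}$.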
stackexchange | llm_judgeable_groundtruth_similarity | 46589819 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to disable the error overlay when running a create-react-app in development mode? This is the overlay I'm talking about: I'm asking this because I'm using error boundaries ( React 16 Error Boundaries ) in my app to display error messages when components crash, but the error overlay pops up and covers my messages.
Now provide the response and nothing else.
| We don't provide an option to disable the error overlay in development.Error boundaries do not take its place (they are meant for production use). There is no harm having both the development error overlay and your error boundary; simply press Escape if you'd like to view your error boundary. We feel the error overlay provides tremendous value over your typical error boundary (source code, click to open, etc).It is also vital as we explore enabling hot component reloading as a default behavior for all users. If you feel strongly about disabling the overlay, you'll need to eject from react-scripts and discontinue use of webpackHotDevClient . A less intrusive method may be removing the error event listener installed by the overlay off of window . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46589819', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8727376/']} | jdg_382489 |
stackexchange | llm_judgeable_groundtruth_similarity | 12595457 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just read an article that says TCPClient.Read() may not get all the sent bytes in one read. How do you account for this? For example, the server can write a string to the TCP stream. The client reads half of the string's bytes, and then reads the other half in another read call. How do you know when you need to combine the byte arrays received in both calls?
Now provide the response and nothing else.
| how do you know when you need to combine the byte arrays received in both calls? You need to decide this at the protocol level. There are four common models: Close-on-finish: each side can only send a single "message" per connection. After sending the message, they close the sending side of the socket. The receiving side keeps reading until it reaches the end of the stream. Length-prefixing: Before each message, include the number of bytes in the message. This could be in a fixed-length format (e.g. always 4 bytes) or some compressed format (e.g. 7 bits of size data per byte, top bit set for the final byte of size data). Then there's the message itself. The receiving code will read the size, then read that many bytes. Chunking: Like length-prefixing, but in smaller chunks. Each chunk is length-prefixed, with a final chunk indicating "end of message" End-of-message signal: Keep reading until you see the terminator for the message. This can be a pain if the message has to be able to include arbitrary data, as you'd need to include an escaping mechanism in order to represent the terminator data within the message. Additionally, less commonly, there are protocols where each message is always a particular size - in which case you just need to keep going until you've read that much data. In all of these cases, you basically need to loop, reading data into some sort of buffer until you've got enough of it, however you determine that. You should always use the return value of Read to note how many bytes you actually read, and always check whether it's 0, in which case you've reached the end of the stream. Also note that this doesn't just affect network streams - for anything other than a local MemoryStream (which will always read as much data as you ask for in one go, if it's in the stream at all), you should assume that data may only become available over the course of multiple calls. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12595457', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1587463/']} | jdg_382490 |
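A minimal sketch of the length-prefixing model (my illustration, not code from the answer), assuming a 4-byte big-endian length prefix. The essential part is the loop that keeps calling Read until the expected number of bytes has arrived, treating a return value of 0 as end of stream:

    using System;
    using System.IO;
    using System.Net;

    static class Framing
    {
        // Read exactly 'count' bytes, looping because a single Read may return fewer.
        public static byte[] ReadExactly(Stream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0)
                    throw new EndOfStreamException("Stream ended before the full message arrived.");
                offset += read;
            }
            return buffer;
        }

        // One length-prefixed message: 4-byte big-endian length, then the payload.
        public static byte[] ReadMessage(Stream stream)
        {
            byte[] lengthBytes = ReadExactly(stream, 4);
            int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(lengthBytes, 0));
            return ReadExactly(stream, length);
        }
    }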
stackexchange | llm_judgeable_groundtruth_similarity | 513832 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been using the == operator in my program to compare all my strings so far.However, I ran into a bug, changed one of them into .equals() instead, and it fixed the bug. Is == bad? When should it and should it not be used? What's the difference?
Now provide the response and nothing else.
| == tests for reference equality (whether they are the same object). .equals() tests for value equality (whether they contain the same data). Objects.equals() checks for null before calling .equals() so you don't have to (available as of JDK7, also available in Guava ). Consequently, if you want to test whether two strings have the same value you will probably want to use Objects.equals() . // These two have the same valuenew String("test").equals("test") // --> true // ... but they are not the same objectnew String("test") == "test" // --> false // ... neither are thesenew String("test") == new String("test") // --> false // ... but these are because literals are interned by // the compiler and thus refer to the same object"test" == "test" // --> true // ... string literals are concatenated by the compiler// and the results are interned."test" == "te" + "st" // --> true// ... but you should really just call Objects.equals()Objects.equals("test", new String("test")) // --> trueObjects.equals(null, "test") // --> falseObjects.equals(null, null) // --> true You almost always want to use Objects.equals() . In the rare situation where you know you're dealing with interned strings, you can use == . From JLS 3.10.5. String Literals : Moreover, a string literal always refers to the same instance of class String . This is because string literals - or, more generally, strings that are the values of constant expressions ( §15.28 ) - are "interned" so as to share unique instances, using the method String.intern . Similar examples can also be found in JLS 3.10.5-1 . Other Methods To Consider String.equalsIgnoreCase() value equality that ignores case. Beware, however, that this method can have unexpected results in various locale-related cases, see this question . String.contentEquals() compares the content of the String with the content of any CharSequence (available since Java 1.5). Saves you from having to turn your StringBuffer, etc into a String before doing the equality comparison, but leaves the null checking to you. | {} | {'log_upvote_score': 14, 'links': ['https://Stackoverflow.com/questions/513832', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/25645/']} | jdg_382491 |
stackexchange | llm_judgeable_groundtruth_similarity | 146085 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using freescale IMX6 quad processor. I want to know if the top command lists the CPU usage of all 4 cores or of a single core. I am seeing an application's CPU usage being the same with 4 cores and with a single core. I was guessing the CPU usage by the application will increase on a single core and decrease on 4 cores but it has not changed.
Now provide the response and nothing else.
| I'm not entirely sure what you're asking here. Yes, top shows CPU usage as a percentage of a single CPU by default. That's why you can have percentages that are >100. On a system with 4 cores, you can see up to 400% CPU usage. You can change this behavior by pressing I (that's Shift + i and toggles "Irix mode") while top is running. That will cause it to show the pecentage of available CPU power being used. As explained in man top : 1. %CPU -- CPU Usage The task's share of the elapsed CPU time since the last screen update, expressed as a percentage of total CPU time. In a true SMP environment, if 'Irix mode' is Off, top will operate in 'Solaris mode' where a task's cpu usage will be divided by the total number of CPUs. You toggle 'Irix/Solaris' modes with the 'I' interactive command. Alternatively, you can press 1 which will show you a breakdown of CPU usage per CPU: top - 13:12:58 up 21:11, 17 users, load average: 0.69, 0.50, 0.43Tasks: 248 total, 3 running, 244 sleeping, 0 stopped, 1 zombie%Cpu0 : 33.3 us, 33.3 sy, 0.0 ni, 33.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st%Cpu1 : 16.7 us, 0.0 sy, 0.0 ni, 83.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st%Cpu2 : 60.0 us, 0.0 sy, 0.0 ni, 40.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st%Cpu3 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 stKiB Mem: 8186416 total, 6267232 used, 1919184 free, 298832 buffersKiB Swap: 8191996 total, 0 used, 8191996 free, 2833308 cached | {} | {'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/146085', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/77442/']} | jdg_382492 |
stackexchange | llm_judgeable_groundtruth_similarity | 6443179 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider this example function from a fictive game engine API: function Entity.SetHealth( Number health ) Is it bad to have functions like this accept -1 as parameter, causing, in this case, the entity to become invincible? Should I rather use two extra functions: Entity.SetInvincible and Entity.GetInvincible ? Note that this example with invincibility and health is really just made up by me.
Now provide the response and nothing else.
| One possible solution, without regenerating the random number each time, is to use the following algorithm: public int getRandomWithExclusion(Random rnd, int start, int end, int... exclude) { int random = start + rnd.nextInt(end - start + 1 - exclude.length); for (int ex : exclude) { if (random < ex) { break; } random++; } return random;} This method can either be called with an array reference, e.g. int[] ex = { 2, 5, 6 };val = getRandomWithExclusion(rnd, 1, 10, ex) or by directly inserting the numbers into the call: val = getRandomWithExclusion(rnd, 1, 10, 2, 5, 6) It generates a random number (int) between start and end (both inclusive) and does not give you any number which is contained in the array exclude . All other numbers occur with equal probability. Note that the following constraints must hold: exclude is sorted in ascending order, all numbers are within the range provided, and all of them are mutually different. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6443179', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/810738/']} | jdg_382493
stackexchange | llm_judgeable_groundtruth_similarity | 9629 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've read that the chi square test is useful to see if a sample is significantly different from a set of expected values. For example, here is a table of results of a survey regarding people's favourite colours (n=15+13+10+17=55 total respondents): red, blue, green, yellow: 15, 13, 10, 17 A chi square test can tell me if this sample is significantly different from the null hypothesis of equal probability of people liking each colour. Question: Can the test be run on the proportions of total respondents who like a certain colour? Like below: red, blue, green, yellow: 0.273, 0.236, 0.182, 0.309 Where, of course, $0.273 + 0.236 + 0.182 + 0.309=1$ . If the chi square test is not suitable in this case, what test would be? Edit : I tried @Roman Luštrik's answer below and got the following output; why am I not getting a p-value, and why does R say "Chi-squared approximation may be incorrect"? chisq.test(c(0, 0, 0, 8, 6, 2, 0, 0), p = c(0.406197174, 0.088746395, 0.025193306, 0.42041479, 0.03192905, 0.018328576, 0.009190708, 0)) Chi-squared test for given probabilities data: c(0, 0, 0, 8, 6, 2, 0, 0) X-squared = NaN, df = 7, p-value = NA Warning message: In chisq.test(c(0, 0, 0, 8, 6, 2, 0, 0), p = c(0.406197174, 0.088746395, : Chi-squared approximation may be incorrect
Now provide the response and nothing else.
| Correct me if I'm wrong, but I think this can be done in R using this command chisq.test(c(15, 13, 10, 17)) Chi-squared test for given probabilities data: c(15, 13, 10, 17) X-squared = 1.9455, df = 3, p-value = 0.5838 This assumes proportions of 1/4 each. You can modify expected values via argument p . For example, you think people may prefer (for whatever reason) one color over the other(s). chisq.test(c(15, 13, 10, 17), p = c(0.5, 0.3, 0.1, 0.1)) Chi-squared test for given probabilities data: c(15, 13, 10, 17) X-squared = 34.1515, df = 3, p-value = 1.841e-07 | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/9629', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/2830/']} | jdg_382494 |
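Not part of the original answer, but to connect it to the proportions table in the question: chisq.test works on counts rather than proportions, so if only the proportions and the total n = 55 are known, one option is to recover the counts first:

    # 0.273, 0.236, 0.182, 0.309 of 55 respondents -> 15, 13, 10, 17
    chisq.test(round(55 * c(0.273, 0.236, 0.182, 0.309)))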
stackexchange | llm_judgeable_groundtruth_similarity | 46203449 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been working on an App that allows a few different components to update a BehaviorSubject in Angular. Within each component I store a local copy of the previous BehaviorSubject value. In order to know whether or not the component generated the new value being pushed out I was planning on just comparing the two objects using LoDash's _.isEqual() function. However I am finding that my local copy of the Observable has already been updated before the comparison can take place. Does Angular look for = statements and create an external binding to that component property outside of the Observable next function? Given the code below I am finding that my this.QueryParams property within the component has been updated to the current value being processed in the function, causing my comparison to fail even though I don't assign the new value to the property until the if statement has been evaluated. Component export class LogsModalComponent implements OnInit { private QueryParams: LogsQueryParameters ngOnInit() { this._LogsService.QueryParameters$.subscribe(_QueryParams => { console.log(this.QueryParams); console.log(_QueryParams); if (!_.isEqual(this.QueryParams, _QueryParams) { this.QueryParams = _QueryParams; // Some code to process if the new value was different. } } } updateStartDate() { this.QueryParams.filterStartDate = _.isUndefined(this.FilterStartDate) ? NaN : new Date(this.FilterStartDate.nativeElement.value).getTime(); this._LogsService.updateQueryParams(this.QueryParams);}} Service LogsQueryParameters: BehaviorSubject<LogsQueryParameters> = new BehaviorSubject<LogsQueryParameters>({ limit: 25, logLevels: "", logTypes: "", logUserIDs: "", filterStartDate: NaN, filterEndDate: NaN }) LogsQueryParameters$ = this.LogsQueryParameters.asObservable(); updateQueryParams(QueryParams) { this.LogsQueryParameters.next(QueryParams); }
Now provide the response and nothing else.
| RxJS observables have a method distinctUntilChanged() that returns a new observable that only emits a new value if it is different from the previously emitted value: this._LogsService.QueryParameters .distinctUntilChanged() .subscribe((_QueryParams) => this.QueryParams = _QueryParams); This works for simple comparisons. If you need the _.isEqual function nonetheless, you can pass a callback to distinctUntilChanged() to perform the comparison instead: this._LogsService.QueryParameters .distinctUntilChanged((prev, curr) => _.isEqual(prev, curr)) .subscribe((_QueryParams) => this.QueryParams = _QueryParams); Note that you don't return ! ... inside the callback, whenever the return value is false (meaning the value under test is not equal ), the value passes . Update From your latest edit it looks like you're actually passing around the exact same object, and only mutating its contents, as @Brandon suggested in the comment below. You could try creating a new object when updating, through Object.assign() : updateStartDate() { this.QueryParams.filterStartDate = _.isUndefined(this.FilterStartDate) ? NaN : new Date(this.FilterStartDate.nativeElement.value).getTime(); this._LogsService.updateQueryParams(Object.assign({}, this.QueryParams));} Objects are actually passed by reference. Note that new Date() also returns you an object. That object is passed and assigned by reference as well, only this time Object.assign won't help you out, since the date object depends on prototype linkage (it's not a plain object). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46203449', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1625304/']} | jdg_382495 |
stackexchange | llm_judgeable_groundtruth_similarity | 40563 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After reading about the 'imminent cryptocalypse' etc, I started thinking about a cryptographic protocol that would not depend on complexity of mathematical operations (e.g. factorisation, discrete logarithm) for privacy. I made a proof of concept that works by round-tripping the locally encrypted/decrypted message at both client and server, with precautions to protect the key from being discoverable during transmission. Details, and code, are hosted online . An associated advantage of the scheme is that encryption preserves the message length, so might be suited for small messages (SMS etc) that can tolerate a small increase in transmission latency. I might invest some time in developing this into a SMS exchange platform, if the protocol stands up to scrutiny. While it was just a little hobby project, I am wondering if it is really as secure as I have imagined it to be. Perhaps someone more knowledgeable about the number theoretic properties of modular groups can comment? In particular, my scheme relies on modular multiplicative inverse not existing for even members of (Z/2^nZ)* . Thanks.
Now provide the response and nothing else.
| On a general basis, we don't want to encourage this kind of question: References to code are not good descriptions of cryptographic algorithms. Good descriptions use the language of mathematics, not programming. A description on a set of files on github is not permanent enough; if you ever change it, then this will make this question unreadable. There is a very inefficient and tiresome cycle, which goes thus: "Hey, I don't know the subject, but I have this idea; is it secure ? -- No, it is not, for this reason. -- What if I add a +1 there ? -- No, does not help. -- Then what if I also put a +2 at that place ? -- ..." Experience shows that this kind of process never ends up with a good algorithm; however, it produces long-winded awful discussions. It is already bad enough on Usenet groups like sci.crypt ; on a Q&A site like this one, it will just not do. From a cursory look at your files, though, it appears that you want to use Shamir's three-pass protocol . This requires a commutative cipher such that you can encrypt a message with key u , then with key v , and then decrypt with key u and v in that order, and get back the original message. Unfortunately, commutative ciphers are hard to make without a lot of mathematics, like modular exponentiations modulo big primes. Your idea to do multiplications modulo 2 n and to "encrypt" only message values x which are not invertible modulo 2 n is not secure. Indeed, from the outside, the attacker sees x*u , x*v and x*u*v (all values modulo 2 n ). Since x is even, there are integers m and y such that y is odd and x = y*2 m . u and v being odd (that's necessary for them to be invertible), the observed values are necessarily multiple of 2 m and not 2 m+1 . In other words, expressed in binary they all end with exactly m zeros. The attacker's goal is to recover x . He already knows m , as explained above. He wants y , which has length n-m bits. By simply dividing all the values he observed by 2 m , he gets y*u , y*v and y*u*v , all values modulo 2 n-m , and they are all odd, hence invertible. It then suffices to compute (y*u)*(y*v)/(y*u*v) (modulo 2 n-m ), which yields y . End game. The above generalizes to all computations modulo some integer N when taking x to be non-invertible modulo N . I encourage you to do your homework, i.e. read the Handbook of Applied Cryptography , which is a free download and very good reading (if a bit terse at times). Shamir's three-pass protocol is described in chapter 12, page 500. Mind the following passage: While it might appear that any commutative cipher (i.e., cipher wherein the order of encryption and decryption is interchangeable) would suffice in place of modular exponentiation in Protocol 12.22, caution is advised. For example, use of the Vernam cipher (§1.5.4) would be totally insecure here, as the XOR of the three exchanged messages would equal the key itself. The worst sin in scientific research is to disregard previous research. | {} | {'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/40563', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/29473/']} | jdg_382496 |
stackexchange | llm_judgeable_groundtruth_similarity | 2332175 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
To prove change of variables in Differential geometry, we first prove the following lemma: Lemma 1. Suppose that the change of variables theorem holds for the diffeomorphisms $g:U\to V$ and $h: V\to W$ (and all of $f$). Then it also holds for $h\circ g: U\to W$ (for all $f$). Proof: (1) $\\ \int_W f=\\(2) \int_V(f\circ h)|\det Dh|=\\(3)\int_U(f\circ h \circ g))|(\det Dh)\circ g||\det Dg|=\\(4)\int_U (f\circ h\circ g))|(\det D(h\circ g)| \text{ (by the chain rule}).$ $\\$ These are the notes my professor gave. The part of this proof I don't understand is getting from (2) to (3) because I don't totally understand $|\det Dh|$. I know that $\det Dh$ will be the function obtained from taking the derivative of the diffeomorphism $h$, and we know $\det Dh\neq 0$. But then what do we know about $|\det Dh|$? Is this function continuous? How are we guaranteeing that when I multiply $|\det Dh|$ with the continuous function $(f\circ g)$ that I still have some continuous function that I can apply the change of variables theorem to? I'm brand new to differential geometry, so feel free to talk to me as if I don't know what I'm talking about! :D Thanks!
Now provide the response and nothing else.
| Obviously, the rest after the $n$th partial sum is $$S-\sum_{k=1}^n\frac1{k^2}=\sum_{k=n+1}^\infty\frac1{k^2}.$$ Let's approximate that with some similar series with known partial sums, so a telescoping series would be nice. A convenient choice would be $$\frac1{k^2-1/4}=\frac1{k-1/2}-\frac1{k+1/2},$$ so$$\sum_{k=n+1}^\infty\frac1{k^2-1/4}=\frac1{n+1/2}=\frac2{2n+1}$$ is the main part. We're left with an estimate for the error, i.e. for $$\sum_{k=n+1}^\infty\left(\frac1{k^2-1/4}-\frac1{k^2}\right)=\sum_{k=n+1}^\infty\frac{1/4}{k^2(k^2-1/4)}.$$ Let's try with some telescoping series, too: We have $$\frac1{(k-1/2)^3}-\frac1{(k+1/2)^3}=\frac{3k^2+1/4}{(k^2-1/4)^3}\ge\frac{3}{(k^2-1/4)^2}\ge12\cdot\frac{1/4}{k^2(k^2-1/4)},$$ and this means $$\sum_{k=n+1}^\infty\frac{1/4}{k^2(k^2-1/4)}\le\frac{1/12}{(n+1/2)^3}.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2332175', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/372229/']} | jdg_382497 |
stackexchange | llm_judgeable_groundtruth_similarity | 53203224 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create some expansion panels with mat-table's inside, my problem is i have to resize my windows before my view will change. My data is loading fine and all but somehow my view does not update. My view where my expansion panels should be, is just all blank. Untill i click a button or resize my window. What can cause something like this? In my ngOnInit() i call this.getSale1(); .HTML: <mat-accordion> <mat-expansion-panel *ngFor="let data of mySaleModelArray2 "> <mat-expansion-panel-header> <mat-panel-title> <h6 class="salepanelheadtext">Bar:</h6>{{data.name}} </mat-panel-title> <mat-panel-description> <h6 class="salepanelheadtext2">Total:</h6> {{data.total_sales}} </mat-panel-description> </mat-expansion-panel-header> <div class="example-container mat-elevation-z8"> <mat-table #table [dataSource]="data.sales" > <!-- PLU Column --> <ng-container matColumnDef="pluNo"> <mat-header-cell *matHeaderCellDef > # </mat-header-cell> <mat-cell *matCellDef="let salesdata"> {{salesdata.beerline}} </mat-cell> </ng-container> <!-- Name Column --> <ng-container matColumnDef="name"> <mat-header-cell *matHeaderCellDef> Name </mat-header-cell> <mat-cell *matCellDef="let salesdata"> {{salesdata.pluName}} </mat-cell> </ng-container> <!-- Sold_Count Column --> <ng-container matColumnDef="sold_count"> <mat-header-cell *matHeaderCellDef> QTY </mat-header-cell> <mat-cell *matCellDef="let salesdata"> {{salesdata.sold_count}} </mat-cell> </ng-container> <!-- PLU Price Column --> <ng-container matColumnDef="pluPrice"> <mat-header-cell *matHeaderCellDef> Price </mat-header-cell> <mat-cell *matCellDef="let salesdata"> {{salesdata.pluPrice}} </mat-cell> </ng-container> <!----> <ng-container matColumnDef="total_amount"> <mat-header-cell *matHeaderCellDef> Total </mat-header-cell> <mat-cell *matCellDef="let salesdata"> {{salesdata.pluPrice * salesdata.sold_count}} </mat-cell> </ng-container> <mat-header-row *matHeaderRowDef="displayedColumns2"></mat-header-row> <mat-row *matRowDef="let row; columns: displayedColumns2;"></mat-row> </mat-table> </div> </mat-expansion-panel></mat-accordion> .TS: //Get data from Sale1ListgetSale1() { this.customersService.getSale1() .subscribe( dataList => { this.updateDataTable(dataList); } ) }updateDataTable(dataList) { for(var i = 0;i < dataList.length; i++){ var saleData = <SaleDataModel>dataList[i]; var mySaleModelTest = this.mySaleModelArray2.find(x => x.name == dataList[i].name); if(mySaleModelTest == null){ //first time creating the object with the bar name var tempArray = Array(); tempArray.push(saleData); this.mySaleModelArray2.push(new Sale1Model(dataList[i].name,dataList[i].pluPrice * dataList[i].sold_count,tempArray)); }else{ //changing the object with the bar name because it already exist mySaleModelTest.total_sales = mySaleModelTest.total_sales + dataList[i].pluPrice * dataList[i].sold_count; mySaleModelTest.sales.push(saleData); } } }
Now provide the response and nothing else.
| ChangeDetectorRef will do the trick. Inject it in the constructor. constructor( ... private cdr: ChangeDetectorRef, ...) { } Edit getSale1 like this in order to use the cdr: getSale1() { this.customersService.getSale1() .subscribe( dataList => { this.updateDataTable(dataList); this.cdr.detectChanges(); } ) } But why do I have to use ChangeDetectorRef? Angular, by default, uses ChangeDetectionStrategy.Default, which uses its own logic to "wake up" the component for the render. More specs here: https://angular.io/api/core/ChangeDetectionStrategy There are certain cases where this isn't enough. One case could be a very big nested *ngFor . So why use the cdr? As I said, there are some cases when Angular does not wake up its renderer. Since not every situation is the same, it's quite impossible to define an absolute answer to this. What cdr.detectChanges() does is allow the method to tell Angular's renderer to force a render of its component.html. In this way, no matter which strategy you are using (even if it's OnPush ), the component will be re-rendered. But be careful: you have to think about what you are doing before implementing this. For example, re-rendering the html fires the ngOnChanges event, so you could enter an endless loop. More info about cdr: https://angular.io/api/core/ChangeDetectorRef Hope that this cleared up some doubts. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53203224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5707243/']} | jdg_382498
stackexchange | llm_judgeable_groundtruth_similarity | 395966 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let ${f : E \rightarrow S}$ be a geometric morphism (between toposes). For $s$ in $S$ and $x$ in $E$ let ${\pi : f^* s \times x \rightarrow x}$ be the obvious projection in $E$ . Let ${u \rightarrow f^* s \times x}$ be a complemented subobject of ${f^* s \times x}$ . Is the image of $u$ along $\pi$ complemented as a subobject of $x$ ? (See also Images of complemented subobjects in hyperconnected toposes over Boolean bases )
Now provide the response and nothing else.
| No, not even if $E=S$ , $f$ is the identity morphism, and $x=1$ . In that special case, your question asks whether $\forall z\in s\,\big((z\in u)\lor \neg(z\in u)\big)$ (in the internal language of $S$ ) implies $(\exists z\in s\,z\in u)\lor\neg(\exists z\in s\,z\in u)$ . When $s$ is $\mathbb N$ , this is the limited principle of omniscience, which is not intuitionistically valid. | {} | {'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/395966', 'https://mathoverflow.net', 'https://mathoverflow.net/users/121350/']} | jdg_382499 |
stackexchange | llm_judgeable_groundtruth_similarity | 7734777 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For example (and this is very simplified), suppose I have a class for every card in a deck of cards... e.g. a KingOfSpades class, a QueenOfSpades class, a JackOfDiamonds class, etc., all of which extend Card . There might be multiple instances of KingOfSpades . And I have an ArrayList<Card> with 5 objects in it. How can I check whether that ArrayList contains at least one AceOfDiamonds ?
Now provide the response and nothing else.
| Let's start out by pointing out that using classes for this sort of differentiation is almost certainly a bad thing. I'd say that you probably need to make 'Card' be a bit more intelligent (i.e. having a getSuit() and getOrdinal() method). But, if you insist on doing it that way, iterate the array list (you can google that - it's a pretty basic thing) and compare each entry in the list using the instanceof operator. You tagged this question as having to do with 'reflection', which doesn't seem right. Are you sure you didn't mean to flag it 'homework' ? OK - what the heck, here's the code: List<Card> hand = ...;for(Card card : hand){ if (card instanceof AceOfDiamonds) return true;} but please don't set up your class hierarchy like that - it's horrible design. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7734777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/643928/']} | jdg_382500 |
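To make the suggested redesign concrete, here is one possible shape it could take (purely illustrative; the Suit and Rank names are my assumptions, not from the question), replacing one class per card with a single Card that carries its suit and rank:

    import java.util.List;

    enum Suit { SPADES, HEARTS, DIAMONDS, CLUBS }
    enum Rank { ACE, TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT, NINE, TEN, JACK, QUEEN, KING }

    class Card {
        private final Suit suit;
        private final Rank rank;
        Card(Suit suit, Rank rank) { this.suit = suit; this.rank = rank; }
        Suit getSuit() { return suit; }
        Rank getRank() { return rank; }
    }

    class Hands {
        // True if the hand contains at least one ace of diamonds.
        static boolean containsAceOfDiamonds(List<Card> hand) {
            for (Card card : hand) {
                if (card.getSuit() == Suit.DIAMONDS && card.getRank() == Rank.ACE) {
                    return true;
                }
            }
            return false;
        }
    }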
stackexchange | llm_judgeable_groundtruth_similarity | 10117049 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to display only those contact names for which an email address is present; otherwise the contact name should not be displayed in the list. How can I do this? Can anybody please help me?
Now provide the response and nothing else.
| public ArrayList<String> getNameEmailDetails(){ ArrayList<String> names = new ArrayList<String>(); ContentResolver cr = getContentResolver(); Cursor cur = cr.query(ContactsContract.Contacts.CONTENT_URI,null, null, null, null); if (cur.getCount() > 0) { while (cur.moveToNext()) { String id = cur.getString(cur.getColumnIndex(ContactsContract.Contacts._ID)); Cursor cur1 = cr.query( ContactsContract.CommonDataKinds.Email.CONTENT_URI, null, ContactsContract.CommonDataKinds.Email.CONTACT_ID + " = ?", new String[]{id}, null); while (cur1.moveToNext()) { //to get the contact names String name=cur1.getString(cur1.getColumnIndex(ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME)); Log.e("Name :", name); String email = cur1.getString(cur1.getColumnIndex(ContactsContract.CommonDataKinds.Email.DATA)); Log.e("Email", email); if(email!=null){ names.add(name); } } cur1.close(); } } return names; } the above method return an arraylist of names which has email id. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10117049', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1155999/']} | jdg_382501 |
stackexchange | llm_judgeable_groundtruth_similarity | 463820 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to debug something by looking at the packets, and I would like to avoid capturing all the SSH traffic to the server. Is there a way to ignore it? I tried to do something like tshark -f "port !22" but it stopped listening after the command. [root@vpn ~]# tshark -f "port !22"tshark -f "port ls"Running as user "root" and group "root". This could be dangerous.Capturing on venet0tshark: arptype 65535 not supported by libpcap - falling back to cooked socket.tshark: Invalid capture filter: "port ls"!That string isn't a valid capture filter (unknown port 'ls').See the User's Guide for a description of the capture filter syntax.0 packets captured[root@vpn ~]#
Now provide the response and nothing else.
| Both tshark and tcpdump use the pcap library, so the capture filters use pcap-filter syntax . The filter you want is, as @tristan says, "not port 22" . You can enter this as a quoted string argument to the -f option, or as an unquoted argument to the command. The following commands are equivalent: # tshark -f "not port 22"# tshark -- not port 22 The reason tshark complained about your command above is that your shell (probably Bash) expanded "!22" to command number 22 in your command history, which in this case was "ls". The Bash documentation has more information on history expansion. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/463820', 'https://serverfault.com', 'https://serverfault.com/users/112405/']} | jdg_382502 |
stackexchange | llm_judgeable_groundtruth_similarity | 3279275 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $S$ be an $n\times n$ strictly upper triangular matrix. Show that $(I-S)^{-1} = I+S+S^2+ \dots + S^{n-1}$ . This seems like it should be an easy problem to do by induction, but I am having trouble justifying the last step. We can define a sequence of matrices $\{S_i\}$ such that $S_n$ and $S_{n+1}$ agree in the $n\times n$ upper left corner of $S_{n+1}$ , so that the only new entries being added as we increase $n$ appear in the furthest right column. By induction: \begin{align*}\begin{bmatrix}1 & -s_{12} & \dots & -s_{1,n+1}\\0 & \ddots & \dots & -s_{2,n+1}\\\vdots & & \ddots & \vdots\\0 & \dots & \dots & 1\end{bmatrix} (I + S_{n+1} + S_{n+1}^2 + \dots + S_{n+1}^n) = \begin{bmatrix}1 & 0 & \dots & \text{stuff}\\0 & \ddots & \dots & \text{stuff}\\\vdots & & \ddots & \vdots\\0 & \dots & \dots & 1\end{bmatrix}\end{align*} Put into words, by induction we have that the upper left corner is the $n \times n$ identity matrix, but I am not sure how to justify that the "stuff" rightmost column becomes $0$ .
Now provide the response and nothing else.
| Strictly upper triangular matrices are nilpotent. Indeed, the characteristic polynomial function of such a matrix $S$ is given by $p(\lambda)=\lambda^n$ (since all the diagonal entries of the matrix are zero) whence by cayley-hamilton $S^n=0$ . Hence $$(I+S+S^2+\dotsb+S^{n-1})(I-S)=I-S^n=I$$ as desired. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3279275', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/436852/']} | jdg_382503 |
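Spelling out the telescoping step in the last line (a small step the answer leaves implicit):
$$(I+S+S^{2}+\dotsb+S^{n-1})(I-S)=\sum_{k=0}^{n-1}S^{k}-\sum_{k=1}^{n}S^{k}=I-S^{n}=I.$$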
stackexchange | llm_judgeable_groundtruth_similarity | 16415255 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have to create an array which stores ArrayList's of Integers and the array size is 10. The below code will do it: ArrayList<Integer>[] pl2 = new ArrayList[10]; Question 1: In my opinion the more appropriate code would be ArrayList<Integer>[] pl2 = new ArrayList<Integer>[10]; Why does this not work? Question 2: Both of the below compile ArrayList<Integer>[] pl2 = new ArrayList[10]; ArrayList[] pl3 = new ArrayList[10]; What is the difference as far as the reference declaration of pl2 and pl3 is concerned?
Now provide the response and nothing else.
| Question 1: Basically, this is forbidden by Java language. This is covered in Java Language Specification for generics . When you use ArrayList<Integer>[] pl2 = new ArrayList[10]; // warning you get the compiler warning, because the following example will compile (generating warning for every line of code): ArrayList wrongRawArrayList = new ArrayList(); // warningwrongRawArrayList.add("string1"); // warning wrongRawArrayList.add("string2"); // warning pl2[0] = wrongRawArrayList; // warning but now you array, that supposed to contain ArrayList of Integer , contains totally wrong ArrayList of String objects. Question 2: As it was already answered, declaration of p12 provides you with compile time checking and frees you from using casting when getting items from your ArrayList . Slightly modified previous example: ArrayList<Integer>[] pl2 = new ArrayList[10]; // warning ArrayList<String> wrongArrayList = new ArrayList<String>(); // OK!wrongArrayList.add("string1"); // OK! wrongArrayList.add("string2"); // OK!pl2[0] = wrongArrayList; // ERROR Now, since you are using generics, this won't compile.But if you use ArrayList[] pl2 = new ArrayList[10]; you will get the same result as in the first example. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16415255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102040/']} | jdg_382504 |
stackexchange | llm_judgeable_groundtruth_similarity | 2373 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Most online tutorials like to use a simple example to introduce machine learning: classifying unknown text as spam or not spam. They say that this is a binary-class problem. But why is this a binary-class problem? I think it is a one-class problem! I only need positive samples from my inbox to learn what is not spam. If I take a bunch of non-spam texts as positive samples and a bunch of spam mails as negative samples, then of course it's possible to train a binary classifier and make predictions on unlabeled data, but what is the difference from the one-class approach? There I would just define a training set of all non-spam examples and train some one-class classifier. What do you think?
Now provide the response and nothing else.
| Strictly speaking, "one class classification" does not make sense as an idea. If there is only one possible state of a predicted value, then there is no prediction problem. The answer is always the single class. Concretely, if you only have spam examples, you would always achieve 100% accuracy by classifying all email as spam. This is clearly wrong, and the only way to know how it is wrong is to know where the classification is wrong -- where emails are not in the spam class. So-called one-class classification techniques are really anomaly detection approaches. They have an implicit assumption that things unlike the examples are not part of the single class, but, this is just an assumption about data being probably not within the class. There's a binary classification problem lurking in there. What is wrong with a binary classifier? | {} | {'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/2373', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/4717/']} | jdg_382505 |
stackexchange | llm_judgeable_groundtruth_similarity | 24620 |
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In my experience driving through the rain at speeds above 100 km/h results in a cloud of sprayed water behind the car, like so: However, I encounter more and more recently repaired road sections where the pavement is virtually immune to splashing: Those pictures are taken 10 seconds apart, on the autobahn A5 near Rastatt . I also have a corresponding video , where the exact border between old & new pavement can be seen. Can anyone explain how new pavements achieve this remarkable performance at eliminating splashing? What properties help draining water so effectively?
Now provide the response and nothing else.
| Although I am unsure of the material that is being used for the roadway in your photos it appears similar to Topmix Permeable concrete You asked Can anyone explain how new pavements achieve this remarkable performance at eliminating splashing? Response Topmix Permeable concrete is simply a very porous concrete interlaced with large rounded pebbles. In order to allow as much water as possible through the roadway and continue to absorb liquid a base of rubble and channels must be formed. They do not eliminate splashing so much as they absorb and allow liquid to drain down through the surface. Other Technology The State of California in the US has an initiative of porous concrete solution combined with recycled rubber from tires. Here is a state government technology guide describing the initiative and the technology. Cost Justification Drivers of cars can allow their tires to become degraded and wait until the last minute to change them creating a safety issue due to bald tires and what not. The cost to the government for the healthcare of these individuals can be relatively high. A proactive cost control method employed by the state could be to employ this type of road technology to decrease the number of accidents annually. The one time cost of implementation could be less than the cost related to the healthcare of individuals who suffer crash related injuries in the rain. | {} | {'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/24620', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/11548/']} | jdg_382506 |
stackexchange | llm_judgeable_groundtruth_similarity | 97189 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is often written that disks in a mirrored RAID (like RAID 1) are an exact mirror copy one of another. So can I turn the machine off and simply swap the two harddisks cables? And if one of the two disks crashes and is dead forever, can I simply turn the machine off, put a third (new) harddisk instead of the dead one, boot from a live CD and "dd" the entire content of the disk which is still good to the new disk and then boot again with RAID 1 operational? If it's not that simple, why isn't it so?
Now provide the response and nothing else.
| An identical copy of your data is stored on each disk (provided the array is not "dirty"—e.g., if power is lost after writing to disk 0, but before writing to disk 1). However, the metadata is different; it allows mdadm & md to tell the two disks apart. Can you swap the cables around? You can swap the cables on the two disks. When you (or your distro's boot scripts) do mdadm --assemble on the array, mdadm looks at the metadata on each disk, and from that figures out which is disk 1 and which is disk 2. This is in fact extremely flexible—you could, for example, remove one of the disks, put it in a USB-SATA enclosure, and attach it to a USB port, and mdraid would still be perfectly happy. Can I recover a degraded array by using dd ? No. If you did that, you'd have two disk 1's, or two disk 2's, and mdadm would be confused (and, I haven't tested this, but I assume it'd refuse to assemble the array). In general, all array management is done with mdadm and further it is seldom a good idea to go around mdraid. To recover your array, you add the new disk/partition to it. Something like this, assuming sdb1 is the partition on the replacement disk: mdadm --add /dev/md0 /dev/sdb1 mdraid will then copy the data, and you can watch the status by cat /proc/mdstat . You are free to continue using the array during the re-sync. There is no need to boot from a live CD or similar (you should be able to boot from the degraded array). In fact, if you have hot-swap trays in your machine, you can replace a failed sdb like this: mdadm -r /dev/md0 /dev/sdb1 Remove the drive Put in new drive Partition the new drive (often, but not always, will be sdb again). mdadm -a /dev/md0 /dev/sdb1 This does not require any downtime. Note also that if you're booting from a mirror, you need to make sure the bootloader (e.g., grub) is installed to both disks. How to do this depends on your distro. Anything else? Yes. mdadm --create is not a recovery step. It is used to create a new , blank array, and the next step would typically be pvcreate or mkfs . Already existing arrays are started using mdadm --assemble . (This seems to be a common enough error, and has the potential to destroy data.) Final remarks You should probably take a bit to familiarize yourself with the mdraid documentation (you are trusting it with your data, after all). In particular, read through the mdadm manual page, any RAID documentation your distro puts out, and Documentation/md.txt (from the kernel sources, corresponding to your kernel version). These are probably not the most understandable documents, but they are all generally up-to-date. There is also a Linux Raid wiki , but beware that not everything there is fully up-to-date. There are other pages out there, but be especially cautious of anything mentioning mkraid or /etc/raidtab other than as a historical note, as those tools have be obsolete for a decade. | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/97189', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/11923/']} | jdg_382507 |
stackexchange | llm_judgeable_groundtruth_similarity | 31439 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
One hiking book my family owns described Spain as the second most mountainous country in Europe . Several online sources claim the same: Spain then and now : After Switzerland, Spain is the most mountainous country in Europe. Sustainable event alliance : it [Spain] is the second most mountainous in Europe after Switzerland. Wikipedia, Tourism in Spain : Spain, as the second most mountainous country of Europe, (...) I've travelled in Spain and there are indeed quite a few mountains. None of those sources define what they mean by most mountainous . I can think of quite a few possible definitions, and although I did not back it up by sources, for all of those I suspect there are at least two European countries that would rank higher: Total area covered by mountains? What is a mountain? Probably more in Norway, Sweden, Russia, maybe Switzerland, Austria… Fraction of total area covered by mountains? (Again, what is a mountain?) More in Switzerland, Liechtenstein, Andorra, Norway… Actual surface area divided by surface area projected onto the geoid (close but not equal to previous one)? See also Is La Palma the steepest island in the world? . Number of (ultra) prominent peaks ? Switzerland, Austria, Italy have more. Number of ultra prominent peaks per unit area? Is there any (reasonable) definition of mountainous by which Spain is the second most mountainous country in Europe?
Now provide the response and nothing else.
| As the question demonstrates, the actual definitions used are unclear, and there could be many possible answers. Chapter 3 of the Nordregio 's 2004 report for the European Commission, Mountain Areas in Europe:Analysis of mountain areas in EUmember states, acceding and otherEuropean countries provides a number of different measures, most of which Spain is nowhere near the top, but some in which Spain comes second - but not to Switzerland! - or even first. % of municipalities that are at least 50% mountainous, by area: FALSE See Table 3.1. Spain (at 55.59%) is well behind Switzerland, Norway, Slovenia, Austria and several others. Percentage of total country area that are mountain areas: FALSE See Table 3.2. Spain (at 55.7%) is well behind Switzerland, Norway, Slovenia, Greece, Austria, Ital and other countries. Population that live in mountain areas: TRUE See Table 3.3. Spain (at 15,681,826) is second to Italy. Percentage of total population living in municipalities that are at least 50% mountains by areas: FALSE See Table 3.3. Spain (at 38.5%) is behind Austria, Greece, Bulgaria, Slovakia, Slovenia, Norway and Switzerland. Total Mountain Area: TRUE See Table 3.4 and the description above. Norway, Spain and Sweden are the countries with greatest extent of mountain areas in absolute terms The table shows Norway beating Spain, and Spain beating Sweden. Switzerland doesn't make it into the top 11. Mountain Population: TRUE See Table 3.4 and the description above. As regards population, Italy, Spain, and France have the largest mountain populations The table shows Italy beating Spain; Switzerland is further down the list. Percentage of Mountain Area: FALSE See Table 3.5. Spain doesn't rank in the top 11. Percentage of Mountain Population: FALSE See Table 3.5. Spain ranks 8, with Switzerland at the top. Number of massifs : FALSE Spain is number #1 in this measure. The number of massifs per country ranges from one massif (Belgium, Slovakia) up to 13 for Spain. In conclusion, the claim is poorly defined. Under some measures Spain is second only to Italy or to Norway. In other measures, it doesn't make the top ten. Of course, there could be some other definition that places Spain second to Switzerland that wasn't considered here. | {} | {'log_upvote_score': 7, 'links': ['https://skeptics.stackexchange.com/questions/31439', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/5337/']} | jdg_382508 |
stackexchange | llm_judgeable_groundtruth_similarity | 34146555 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem: What is the most correct way to simply query for and list all types within a specific index (and all indices) in elasticsearch? I've been reading through the reference and API but can't seem to find anything obvious. I can list indices with the command: $ curl 'localhost:9200/_cat/indices?v' I can get stats (which don't seem to include types) with the command: $ curl localhost:9200/_stats I'd expect that there'd be a straightforward command as simple as: $ curl localhost:9200/_types or $ curl localhost:9200/index_name/_types Thanks for any help you can offer.
Now provide the response and nothing else.
| What you call "type" is actually a "mapping type" and the way to get them is simply by using: curl -XGET localhost:9200/_all/_mapping Now since you only want the names of the mapping types, you don't need to install anything, as you can simply use Python to get only what you want out of that previous response: curl -XGET localhost:9200/_all/_mapping | python -c 'import json,sys; indices=json.load(sys.stdin); indices = [type for index in indices for type in indices.get(index).get("mappings")]; print list(indices);' The Python script does something very simple, i.e. it iterates over all the indices and mapping types and only retrieves the latter's names: import json,sys; resp = json.load(sys.stdin); indices = [type for index in resp for type in resp.get(index).get("mappings")]; print list(indices); UPDATE Since you're using Ruby, the same trick is available by using Ruby code: curl -XGET localhost:9200/_all/_mapping | ruby -e "require 'rubygems'; require 'json'; resp = JSON.parse(STDIN.read); resp.each { |index, indexSpec | indexSpec['mappings'].each {|type, fields| puts type} }" The Ruby script looks like this: require 'rubygems';require 'json';resp = JSON.parse(STDIN.read);resp.each { |index, indexSpec | indexSpec['mappings'].each { |type, fields| puts type }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34146555', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1255223/']} | jdg_382509
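If you would rather stay in Python end-to-end, a small sketch using the requests package (an assumption — it is not mentioned in the answer) does the same thing as the shell pipelines above; it assumes a pre-7.x Elasticsearch, where mapping types still exist, listening on localhost:9200:

```python
import requests

# Fetch the mappings of every index and print each index's mapping type names.
resp = requests.get("http://localhost:9200/_all/_mapping")
resp.raise_for_status()

for index, spec in resp.json().items():
    for type_name in spec.get("mappings", {}):
        print(index, "->", type_name)
```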
stackexchange | llm_judgeable_groundtruth_similarity | 285468 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In R, I am doing survival data analysis of cancer patients. I have been reading very helpful stuff about survival analysis in CrossValidated and other places and think I understood how to interpret the Cox regression results. However, one result still bugs me... I am comparing survival vs. gender. The Kaplan-Meier curves are in clear favour of female patients (I have checked several times that the legend I have added is correct, the patient with the maximum survival, 4856 days, is indeed a woman): And the Cox regression is returning: Call:coxph(formula = survival ~ gender, data = Clinical) n= 348, number of events= 154 coef exp(coef) se(coef) z Pr(>|z|) gendermale -0.3707 0.6903 0.1758 -2.109 0.035 *---Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 exp(coef) exp(-coef) lower .95 upper .95gendermale 0.6903 1.449 0.4891 0.9742Concordance= 0.555 (se = 0.019 )Rsquare= 0.012 (max possible= 0.989 )Likelihood ratio test= 4.23 on 1 df, p=0.03982Wald test = 4.45 on 1 df, p=0.03499Score (logrank) test = 4.5 on 1 df, p=0.03396 So Hazards Ratio (HR) for male patients ( gendermale ) is 0.6903. The way I would interpret that (without looking at the Kaplan-Meier curve) is: as the HR is <1, being a patient of male gender is protective. Or more precisely, a female patient is 1/0.6903 = exp(-coef) = 1.449 more likely to die at any specific time than a male. But that does not seem like what the Kaplan-Meier curves say! What's wrong with my interpretation?
Now provide the response and nothing else.
| This is a very good example of non-proportional hazards OR the effect of 'depletion' in survival analysis. I will try to explain. First take a good look at your Kaplan-Meier (KM) curve: you can see that in the first part (until around 3000 days) the proportion of males still alive in the population at risk at time t is larger than the proportion of females (i.e. the blue line is 'higher' than the red one). This means that male gender is indeed 'protective' for the event (death) studied. Accordingly the hazard ratio should be between 0 and 1 (and the coefficient should be negative). However, after day 3000, the red line is higher! This would indeed suggest the opposite. Based on this KM graph alone, this would further suggest a non-proportional hazard. In this case 'non-proportional' means that the effect of your independent variable (gender) is not constant over time. In other words, the hazard ratio is liable to change as time progresses. As explained above, this seems to be the case. The regular proportional hazards Cox model does not accommodate such effects. Actually, one of its main assumptions is that the hazards are proportional! You can actually model non-proportional hazards as well, but that is beyond the scope of this answer. There is one additional comment to make: this difference could be due to the true hazards being non-proportional, or to the fact that there is a lot of variance in the tail estimates of the KM curves. Note that at this point in time the total group of 348 patients will have declined to a very small population still at risk. As you can see, both gender groups have patients experiencing the event and patients being censored (the vertical lines). As the population at risk declines, the survival estimates become less certain. If you plotted 95% confidence intervals around the KM lines, you would see the width of the confidence interval increasing. This is important for the estimation of hazards as well. Put simply, since the population at risk and the number of events in the final period of your study are low, this period will contribute less to the estimates in your initial Cox model. Finally, this explains why the hazard (assumed constant over time) is more in line with the first part of your KM curve than with the final endpoint. EDIT: see @Scrotchi's spot-on comment to the original question: As stated, the effect of low numbers in the final period of the study is that the estimates of the hazards at those points in time are uncertain. Consequently you are also less certain whether the apparent violation of the proportional hazards assumption isn't due to chance. As @Scrotchi states, the PH assumption may not be that bad. | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/285468', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/164356/']} | jdg_382510
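To make the "noisy tail" point concrete, here is a small NumPy sketch on synthetic data (not the study in the question): it computes a Kaplan-Meier estimate and prints how few subjects remain at risk late in follow-up, which is why the late part of the curve is so uncertain:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic survival data for one group: exponential event times, uniform censoring.
n = 348
event_time = rng.exponential(scale=2000.0, size=n)
censor_time = rng.uniform(0, 5000, size=n)
time = np.minimum(event_time, censor_time)
observed = event_time <= censor_time           # True = death observed, False = censored

# Kaplan-Meier estimate: at each observed event time, S *= (1 - 1/n_at_risk).
order = np.argsort(time)
time, observed = time[order], observed[order]
at_risk = n - np.arange(n)                     # subjects still at risk just before each time
surv = np.cumprod(np.where(observed, 1.0 - 1.0 / at_risk, 1.0))

# Late in follow-up the risk set is tiny, so each event moves the curve a lot:
for t in (500, 1500, 3000, 4500):
    i = np.searchsorted(time, t)
    print(f"t={t:>4}: at risk={n - i:>3}, KM estimate={surv[max(i - 1, 0)]:.3f}")
```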
stackexchange | llm_judgeable_groundtruth_similarity | 4717586 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Does anyone have any ideas on how to convert ccc3 to ccc4, or even just a macro? I am currently using CCMotionStreak and it requires me to use ccc4, but I don't know what combination makes what color. Please, any help would be appreciated! Thanks
Now provide the response and nothing else.
| from ccTypes.h typedef struct _ccColor4B{ GLubyte r; GLubyte g; GLubyte b; GLubyte a;} ccColor4B; //! helper macro that creates a ccColor4B type static inline ccColor4B ccc4(const GLubyte r, const GLubyte g, const GLubyte b, const GLubyte o){ ccColor4B c = {r, g, b, o}; return c;} For example: ccColor4B myColor = ccc4(255, 0, 0, 255); creates a solid red color. Again from ccTypes.h /** Returns a ccColor4F from a ccColor3B. Alpha will be 1. @since v0.99.1 */ static inline ccColor4F ccc4FFromccc3B(ccColor3B c){ return (ccColor4F){c.r/255.f, c.g/255.f, c.b/255.f, 1.f};} If that's not enough for you, write your own converter like the last one. EDIT: If you have ccColor3B myColor3B and you want a ccColor4F, simply use the converter: ccColor4F myColor4F = ccc4FFromccc3B(myColor3B); If you want a ccColor4B instead, write your own converter: static inline ccColor4B ccc4BFromccc3B(ccColor3B c){ return (ccColor4B){c.r, c.g, c.b, 255};} and use it like this: ccColor4B c = ccc4BFromccc3B(myColor3B); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4717586', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/577861/']} | jdg_382511
stackexchange | llm_judgeable_groundtruth_similarity | 47815903 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the PRAM model, multiple processors act synchronously to execute the same command on different sets of data. There are two types of read/write mode for each algorithm; Concurrent (Concurrent Read & Concurrent Write ) Exclusive (Exclusive Read & Exclusive Write ) What I find hard to understand is what exactly is the difference between these two modes, and which seems to be more proficient?
Now provide the response and nothing else.
| Theory: PRAM machines may harness one of the principal approaches to handling concurrent events listed below, none of which is observed in any pure- [SERIAL] system. Given the nature of the machine's physical hardware, some of the policies listed below may ( but need not ) match the processing goals; software-based tools then become the resort for other policies ( not listed below, thus not supported directly by the PRAM hardware-based resources ), at a cost of additional time ( add-on overheads ) needed to mediate such policy-enforcement steps and measures. As observed in 3.2.x below, some of the hardware-based policies may be directly beneficial for specialised, not universal, cases such as image processing, while a general-purpose computing graph will not get correct results unless it is protected by some means of exclusivity locking or atomic operations, because none of the CRCW policies listed below systematically guarantees a valid result when otherwise uncoordinated, merely [CONCURRENT] -scheduled code execution produces colliding write accesses. EREW ( Exclusive Read, Exclusive Write ): 1.1) Concurrent memory access by multiple processors is not allowed 1.2) If two or more processors try to read from or write to the same memory cell concurrently, the behaviour is undefined CREW ( Concurrent Read, Exclusive Write ): 2.1) Reading the same memory cell concurrently is OK 2.2) Two concurrent writes to the same cell lead to unspecified behaviour CRCW ( Concurrent Read, Concurrent Write ): 3.1) Concurrent reads and writes are both OK 3.2) Behaviour of concurrent writes has to be further specified: 3.2.1) Weak-CRCW: a concurrent write is only OK if all processors write 0 3.2.2) Common-mode-CRCW: all processors need to write the same value 3.2.3) Arbitrary-winner-CRCW: an adversary picks one of the values ( a lottery indeed ) 3.2.4) Priority-CRCW: the value of the processor with the highest ID is written 3.2.5) Strong-CRCW: the { largest | smallest }-value is written | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47815903', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7168673/']} | jdg_382512
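As a toy illustration only (not real PRAM code), the following Python sketch arbitrates a single colliding write step under several of the CRCW policies listed in 3.2.x; the processor IDs and values are made up:

```python
def resolve_crcw(writes, policy):
    """Resolve concurrent writes {processor_id: value} to one memory cell under a CRCW policy."""
    values = list(writes.values())
    if policy == "common":          # 3.2.2: every processor must write the same value
        assert len(set(values)) == 1, "common-mode CRCW: conflicting values are illegal"
        return values[0]
    if policy == "arbitrary":       # 3.2.3: an adversary picks one value (here: first in dict order)
        return values[0]
    if policy == "priority":        # 3.2.4: the processor with the highest ID wins
        return writes[max(writes)]
    if policy == "strong-max":      # 3.2.5: the largest value is written
        return max(values)
    raise ValueError(f"unknown policy: {policy}")

# Three processors collide on the same cell in the same step.
writes = {0: 7, 1: 3, 2: 5}
for policy in ("arbitrary", "priority", "strong-max"):
    print(policy, "->", resolve_crcw(writes, policy))
```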
stackexchange | llm_judgeable_groundtruth_similarity | 16922758 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying out following code. However, if I click on form's close button while this code is running, nothing happens. How can I correct this? I need to close the form even when this loop is executing. procedure TForm1.Button1Click(Sender: TObject);var i: Integer;begin for i := 0 to 9999999 do begin Memo1.Lines.Add('hi'); Application.ProcessMessages; end;end;
Now provide the response and nothing else.
| Have a look at what's going on inside Application.ProcessMessages . When you close the main form, windows sends a WM_QUIT message to the program. The relevant part of TApplication.ProcessMessages looks like this: if Msg.Message <> WM_QUIT then begin //skipped end else begin{$IF DEFINED(CLR)} if Assigned(FOnShutDown) then FOnShutDown(self); DoneApplication;{$IFEND} FTerminate := True; end; I assume this is not a CLR program, so the only thing that happens at this point is setting FTerminate := True on Application . This is reflected in the Application.Terminated property. When the application shuts down, one of the things it does in order to shut down safely is wait for all threads to finish. This code happens to be running in the main thread, but the principle would be the same in any thread: If you're doing a long-running task that might have to finish early, you have to explicitly check for early termination. Knowing this, it's easy to figure out how to fix your code: procedure TForm1.Button1Click(Sender: TObject);var i: Integer;begin for i := 0 to 9999999 do begin Memo1.Lines.Add('hi'); Application.ProcessMessages; if Application.Terminated then Break; end;end; Also, beware of using Application.ProcessMessages in the first place, as it will process all messages for the application. For a simple idea of what might go wrong, try adding IntToStr(i) instead of 'hi' to Memo1.Lines , knock a couple of orders of magnitude off the counter, and then click the button two or three times in rapid succession and watch the output... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16922758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1127776/']} | jdg_382513 |
stackexchange | llm_judgeable_groundtruth_similarity | 138111 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a set of 4 equations that are all equal. 400 (T1 - 283) == 12200 (T2 - T1) == 3560 (Ts - T2) == 2340 Exp[-710.45/Ts] (333 - Ts) I've tried using Solve and NSolve to solve this system, but it's not working. Solve gives "Solve was unable to solve the system with inexact coefficients or the system obtained by direct rationalization of inexact numbers present in the system. Since many of the methods used by Solve require exact input, providing Solve with an exact version of the system may help."NSolve gives the same input back as the output. Reduce runs without stopping. To be honest I'm just really confused about what it even means when you have multiple sets of equations that are all equal to each other. Are there sufficient equations to solve for all the unknowns? Are these equations even considered independent? (First time posting here, I apologise for being totally clueless, if I'm asking the wrong questions or providing insufficient information) Many thanks for the kind soul(s) who may help me out.
Now provide the response and nothing else.
| The equations presented form a system of transcendental equations. Solve and NSolve generally don't work with these types. Instead, FindRoot can be used. First, eliminate variables T1 and T2 . eqns = {400 (T1 - 283) == 12200 (T2 - T1), 12200 (T2 - T1) == 3560 (Ts - T2), 3560 (Ts - T2) == 2340 Exp[-710.45/Ts] (333 - Ts)};tsEqn = Eliminate[eqns , {T1, T2}];(*E^(-710.45/Ts) (727389. + 108580. E^(710.45/Ts)) Ts == E^(-710.45/Ts) (2.42221*10^8 + 3.07281*10^7 E^(710.45/Ts))*) Then, solve the transcendental equation using FindRoot to get Ts . tsSol = FindRoot[tsEqn, {Ts, 3}](*{Ts -> 302.509}*) Now, solving the first two equations with tsSol , we get T1 and T2 as 300.036 and 300.595 , respectively. Edit: Instead of eliminating the T1 and T2 variables to get a single equation and then using FindRoot , we can use FindRoot on all the equations directly. FindRoot[eqns, {T1, 1}, {T2, 1}, {Ts, 1}](*{T1 -> 300.036, T2 -> 300.595, Ts -> 302.509}*) | {} | {'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/138111', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/46728/']} | jdg_382514
stackexchange | llm_judgeable_groundtruth_similarity | 17511309 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Having following hypothetical code: cdef extern from "string.h": int strcmp(char* str1, char* str2)def foo(list_str1, list_str2): cdef unsigned int i, j c_arr1 = ?? c_arr2 = ?? for i in xrange(len(list_str1)): for j in xrange(len(list_str2)): if not strcmp(c_arr1[i], c_arr2[j]): do some funny stuff is there some way how to convert the lists to c arrays? I have read and tried Cython - converting list of strings to char ** but that only throws errors.
Now provide the response and nothing else.
| If you're on Python 3, here's an update to @falsetru's answer (untested on Python 2). cdef extern from "Python.h": char* PyUnicode_AsUTF8(object unicode)from libc.stdlib cimport malloc, freefrom libc.string cimport strcmpcdef char ** to_cstring_array(list_str): cdef char **ret = <char **>malloc(len(list_str) * sizeof(char *)) for i in xrange(len(list_str)): ret[i] = PyUnicode_AsUTF8(list_str[i]) return retdef foo(list_str1, list_str2): cdef unsigned int i, j cdef char **c_arr1 = to_cstring_array(list_str1) cdef char **c_arr2 = to_cstring_array(list_str2) for i in range(len(list_str1)): for j in range(len(list_str2)): if i != j and strcmp(c_arr1[i], c_arr2[j]) == 0: print(i, j, list_str1[i]) free(c_arr1) free(c_arr2)foo(['hello', 'python', 'world'], ['python', 'rules']) Warning: The pointer returned by PyUnicode_AsUTF8 is cached in the parent unicode-object. Which has two consequences: this pointer is only valid as long as the parent unicode-object is alive. Accessing it afterwards leads to undefined behavior (e.g. possible segmentation fault). The caller of the PyUnicode_AsUTF8 isn't responsible for the freeing the memory. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17511309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1474847/']} | jdg_382515 |
stackexchange | llm_judgeable_groundtruth_similarity | 337274 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading Art of Electronics, and in the section Transistor Current Source, they mention "The base voltage can be provided a number of ways. A voltage divider is OK as long as it is stiff enough. As before, the criterion is that its impedance should be much less than the DC impedance looking into the base (Beta*R_emitter)" Why is this ? simulate this circuit – Schematic created using CircuitLab
Now provide the response and nothing else.
| The current sink will work regardless of their rule. It just won't provide the easily predicted value of current for the load, if you don't follow the rule closely. Let's see why. After converting the base pair to its Thevenin equivalent: simulate this circuit – Schematic created using CircuitLab You can apply KVL and get: $$I_B=\frac{V_{TH}-V_{BE}}{R_{TH}+\left(\beta+1\right)R_E}$$ Now you can figure the following: $$V_B=V_{TH}-I_B\cdot R_{TH}$$ Given that the collector current (aka the load current) is \$\frac{\beta}{\beta+1}I_E\$, it must be the case that the load current is: $$\begin{align*}I_{LOAD}&=\frac{\beta}{\beta+1}\cdot\frac{V_E}{R_E}=\frac{\beta}{\beta+1}\cdot\frac{V_B-V_{BE}}{R_E}\\\\&=\frac{\beta}{\beta+1}\cdot\frac{V_{TH}-I_B\cdot R_{TH}-V_{BE}}{R_E}\end{align*}$$ Substituting in \$I_B\$ you get something like this: $$I_{LOAD}=\left[\frac{\beta}{\beta+1}\right]\cdot\left[\frac{V_{TH}-V_{BE}}{R_E}\right]\cdot\left[1-\frac{R_{TH}}{R_{TH}+\left(\beta+1\right)R_E}\right]$$ Which can also be written out as (to make the ratio stand out and to emphasize that it is the ratio of two certain resistance values that is important in the following discussion): $$I_{LOAD}=\left[\frac{\beta}{\beta+1}\right]\cdot\left[\frac{V_{TH}-V_{BE}}{R_E}\right]\cdot\left[1-\frac{1}{1+\frac{\left(\beta+1\right)R_E}{R_{TH}}}\right]$$ Note that the first factor is almost always very close to 1. So it can be ignored. The second factor is the current we'd expect when we designed the resistor divider at the base, in the first place. As you would expect that the emitter would be \$V_{BE}\$ less than the Thevenin voltage and of course this voltage across \$R_E\$ would produce the expected current there. That is, if you use the unloaded divider voltage! Now, the third factor is the issue here. You want this to be 1, since that means your unloaded divider voltage is the right one to use in predicting your current sink value. But if it isn't 1, then the actual value will be different than the expected one (given no load on the divider.) If you look at the third factor, I think you can see that if \$R_{TH}\$ is small compared to the value of \$\left(\beta+1\right) R_E\$, then the second term of that factor is close to zero and so the third factor will be close to 1. But if \$R_{TH}\$ isn't small in comparison, then that fraction (the second term of the third factor) will significantly reduce the third factor's value from 1 to something smaller. And so the predicted value won't be nearly as close to the actual value as hoped. You can also see this as: "If the base current is small compared to the available current flowing through the base pair of divider resistors, then the predicted voltage at the divider will be close to the actual voltage present there and therefore the base will obtain that nearby value and reality will be closer to prediction." That's the qualitative hand-waving that also gets you to the same place. But it all becomes quantitatively clear in the math, itself. The math not only tells you the same thing as the hand-waving does, it also tells you by exactly how much you might be off if you don't follow the rule by some amount. So it provides both the insight as well as quantities you can use if you choose not to follow the rules. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/337274', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/129751/']} | jdg_382516 |
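To see how much the divider stiffness matters numerically, here is a small Python sketch that evaluates the final I_LOAD expression above for a hypothetical design (V_TH = 6 V, V_BE = 0.65 V, R_E = 1 kΩ, β = 100 — illustrative values, not taken from the schematic) while the divider's Thevenin resistance is varied:

```python
# Numerically evaluate the I_LOAD expression derived above for a hypothetical design.
# The resistor values are illustrative, not taken from the original schematic.

def i_load(v_th, r_th, v_be, r_e, beta):
    return (beta / (beta + 1)) * ((v_th - v_be) / r_e) * (
        1 - 1 / (1 + (beta + 1) * r_e / r_th)
    )

v_be, r_e, beta = 0.65, 1000.0, 100.0
v_th = 6.0                      # unloaded divider voltage
expected = (v_th - v_be) / r_e  # what the designer predicts, ignoring base loading

for r1 in (1e3, 10e3, 100e3):   # divider made of two equal resistors R1 = R2
    r_th = r1 / 2               # Thevenin resistance of the pair
    actual = i_load(v_th, r_th, v_be, r_e, beta)
    print(f"R_TH = {r_th:8.0f} ohm: I_LOAD = {actual*1e3:.3f} mA "
          f"(predicted {expected*1e3:.3f} mA)")
```

In this sketch, while R_TH stays small compared to (β+1)R_E the computed current is essentially the predicted 5.35 mA, but with a weak (high-impedance) divider it drops noticeably below the prediction — the third factor at work.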
stackexchange | llm_judgeable_groundtruth_similarity | 108340 |
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm predicting that certain genes (n=20) are more GC rich than expected in all protein-coding genes. How can I test this hypothesis efficiently? I generated the same number of random genes (also n=20) and calculated GC content for them. After using the t-test (2-tailed, two-sample unequal variance) it shows a significant P < 0.005 result. But I'm not sure that I can interpret it with high confidence.
Now provide the response and nothing else.
| You could try a permutation test . These are a kind of non-parametric statistical test which involve creating your own null distribution from your data. Your hypothesis $H_{1}$, in this case, is that your gene set has a higher GC content than expected by chance. Similarly, your null hypothesis $H_{0}$ is that there is no difference between your set and a random set of genes from the population. You are interested in a p-value, which simply asks what the probability is of obtaining a test result at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. So, to obtain a null distribution, take say 1,000 random samples of protein-coding genes and calculate their GC content. Then, to obtain a p-value for your gene set having a higher GC content than expected under the null, take the proportion of the null distribution that is greater than your gene set's value. | {} | {'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/108340', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/67697/']} | jdg_382517
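A short Python sketch of such a permutation test; the GC values are simulated stand-ins (in practice you would compute them from the actual sequences, as the gc_content helper suggests):

```python
import numpy as np

rng = np.random.default_rng(0)

def gc_content(seq):
    """Fraction of G/C bases in a sequence; apply this to real sequences to get GC values."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Simulated stand-ins: GC fractions of all protein-coding genes, and of the 20 genes of interest.
all_gc = rng.beta(5, 5, size=20_000)
focal_gc = rng.beta(6, 4, size=20)

observed = focal_gc.mean()

# Null distribution: mean GC of 20 genes drawn at random from all protein-coding genes.
n_perm = 10_000
null = np.array([rng.choice(all_gc, size=20, replace=False).mean() for _ in range(n_perm)])

# One-sided p-value: how often a random gene set is at least as GC-rich as the observed one.
p = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"observed mean GC = {observed:.3f}, permutation p = {p:.4f}")
```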
stackexchange | llm_judgeable_groundtruth_similarity | 336455 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The following quote comes from the famous research paper Statistical significance for genome wide studies by Storey & Tibshirani (2003): For example, a false positive rate of 5% means that on average 5% of the truly null features in the study will be called significant. A FDR (False Discovery rate) of 5% means that among all features called significant, 5% of these are truly null on average. Can somebody explain what that means using a simple numerical or visual example? I am having hard time understanding what it means. I've found various posts on FDR or FPR alone, but haven't found any where a specific comparison was made. It would be especially good if someone expert in this area could illustrate situations where one is better than the other, or both are good or bad.
Now provide the response and nothing else.
| I'm going to explain these in a few different ways because it helped me understand it. Let's take a specific example. You are doing a test for a disease on a group of people. Now let's define some terms. For each of the following, I am referring to an individual who has been tested: True positive (TP) : Has the disease, identified as having the disease False positive (FP) : Does not have the disease, identified as having the disease True negative (TN) : Does not have the disease, identified as not having the disease False negative (FN) : Has the disease, identified as not having the disease Visually, this is typically shown using the confusion matrix : The false positive rate (FPR) is the number of people who do not have the disease but are identified as having the disease (all FPs), divided by the total number of people who do not have the disease (includes all FPs and TNs). $$FPR = \frac{FP}{FP + TN}$$ The false discovery rate (FDR) is the number of people who do not have the disease but are identified as having the disease (all FPs), divided by the total number of people who are identified as having the disease (includes all FPs and TPs). $$FDR = \frac{FP}{FP + TP}$$ So, the difference is in the denominator i.e. what are you comparing the number of false positives to? The FPR is telling you the proportion of all the people who do not have the disease who will be identified as having the disease. The FDR is telling you the proportion of all the people identified as having the disease who do not have the disease. Both are therefore useful, distinct measures of failure. Depending on the situation and the proportions of TPs, FPs, TNs and FNs, you may care more about one that the other. Let's now put some numbers to this. You have measured 100 people for the disease and you get the following: True positives (TPs) : 12 False positives (FPs) : 4 True negatives (TNs) : 76 False negatives (FNs) : 8 To show this using the confusion matrix: Then, $$FPR = \frac{FP}{FP + TN} = \frac{4}{4 + 76} = \frac{4}{80} = 0.05 = 5\%$$ $$FDR = \frac{FP}{FP + TP} = \frac{4}{4 + 12} = \frac{4}{16} = 0.25 = 25\%$$ In other words, The FPR tells you that 5% of people of people who did not have the disease were identified as having the disease. The FDR tells you that 25% of people who were identified as having the disease actually did not have the disease. EDIT based on @amoeba's comment (also the numbers in the example above): Why is the distinction so important? In the paper you link to, Storey & Tibhshirani are pointing out that there was a strong focus on the FPR (or type I error rate) in genomewide studies, and that this was leading people to make flawed inferences. This is because once you find $n$ significant results by fixing the FPR, you really, really need to consider how many of your significant results are incorrect. In the above example, 25% of the 'significant results' would have been wrong! [Side note: Wikipedia points out that though the FPR is mathematically equivalent to the type I error rate, it is considered conceptually distinct because one is typically set a priori while the other is typically used to measure the performance of a test afterwards. This is important but I will not discuss that here]. And for a bit more completeness: Obviously, FPR and FDR are not the only relevant metrics you can calculate with the four quantities in the confusion matrix. 
Of the many possible metrics that may be useful in different contexts , two relatively common ones that you are likely to encounter are: True Positive Rate (TPR) , also known as sensitivity , is the proportion of people who have the disease who are identified as having the disease. $$TPR = \frac{TP}{TP + FN}$$ True Negative Rate (TNR) , also known as specificity , is the proportion of people who do not have the disease who are identified as not having the disease. $$TNR = \frac{TN}{TN + FP}$$ | {} | {'log_upvote_score': 7, 'links': ['https://stats.stackexchange.com/questions/336455', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/108743/']} | jdg_382518 |
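The arithmetic of the worked example above, collected into a few lines of Python for convenience:

```python
# Recompute the example's rates from the confusion-matrix counts used above.
TP, FP, TN, FN = 12, 4, 76, 8

FPR = FP / (FP + TN)   # of the people WITHOUT the disease, how many were flagged?
FDR = FP / (FP + TP)   # of the people FLAGGED, how many do not have the disease?
TPR = TP / (TP + FN)   # sensitivity
TNR = TN / (TN + FP)   # specificity

print(f"FPR = {FPR:.2%}")   # 5.00%
print(f"FDR = {FDR:.2%}")   # 25.00%
print(f"TPR = {TPR:.2%}")   # 60.00%
print(f"TNR = {TNR:.2%}")   # 95.00%
```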
stackexchange | llm_judgeable_groundtruth_similarity | 1724455 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using CakePHP 1.2 and I'm just wondering if there is any side affect on passing the $this->data to the View from the Controller. Ex: // inside PostsController, I have this code:$this->data['Posts'] = $this->Post->find('all'); instead of : $posts = $this->Post->find('all');$this->set(compact('posts'));// inside the /posts/view, I access it like this:<?php foreach ($this->data['Posts'] as $post) {....};?> By doing this, I skipped the $this->set() from the controller all together. Does this violate any MVC pattern or any security issue that I might have overlook? I saw that using the Auth Component, $this->data contains the [_Token] array. Thanks
Now provide the response and nothing else.
| You need to be aware of the different places that Cake Helpers automagically look for data, since that is were it makes a real difference. The Form Helper will fill in fields automatically based on the contents of $this->data . That's how form data persists when validation fails. OTOH, a <select> elements options array is automatically taken from the pluralized field name, e.g. $form->select('Model.foo_id') will take its options from $foos if set. As such, $this->data has its special place and shouldn't be used lightly, just as named variables have their use and shouldn't be ignored. Use both as appropriate. If you don't want to auto-set Form Helper content, set() your variables. IMHO it's also more readable to assign a variable name that hints at the data it contains. All your views operating on $this->data is less clear than one view operating on $foo and another on $bar . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1724455', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/161164/']} | jdg_382519 |
stackexchange | llm_judgeable_groundtruth_similarity | 46702003 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have gone through all the formal definitions of these terms. But still, i have tough luck understanding them. I am a beginner in hybris. So Can somebody explain these concepts as simple as possible?
Now provide the response and nothing else.
| You must first understand the catalog concept in hybris. The catalog in hybris, or anywhere in the world contains items for business. Therefore, Catalog in general, is the container of the commodities which are sellable entities (product) and which can be referred by the end users for transactions. The catalog is not limited to just list of items, but how they are arranged also. Just an example, if you go to food restaurants, you can refer to their menu (that is actually a generalized catalog). You would notice that different restaurants, though serve the same food, but have different arrangement of items (products) in their menu - sometimes with different names, sometimes under different sections (categories). This is very similar to the hybris product catalog . Now, taking the restaurant example further, Again two restaurants might serve the same food, but will have different ambience. This means the theme of the restaurant, the uniform of the waiters, the logos etc. This is similar to the hybris content catalog Therefore, summarizing the above explanations, hybris has two catalogs - Product Catalog responsible for arrangement of the product hierarchy and the other as Content Catalog responsible for the layout (or the e-ambience) of the website. Hybris Catalog has two versions - Offline (Staged) and Online. The business users work on the Staged version and push it (to) Online once the item under work (product or content) is suitable to go live. They do this via a process called Synchronization . This process picks the items from Staged version, checks for some rules (which indicate that the item is ready to go live), and creates (or updates if already created) a copy of the item with Online as the version. Hybris system understands that only Online needs to go live, picks the item, and displays it on the website (storefront). Now, there are some entities which should have two copies in the system, so that Business users (like content managers or product managers) can make appropriate changes to make it look suitable on the website before it actually appears there. E.g. - Product, Images etc. But there are certain items which need not to have to copies of them, E.g. - Price, Stock etc. because they would be same for both copies of the item (Product in this case) and creating another copy would be an overhead on the system. Therefore, the items that (should) take part in the synchronization process like Product, CMS Pages, Components etc. are called Catalog Aware. The items which do (or should) not take part in synchronization process are called Catalog Unaware Hope this helps! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46702003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8283737/']} | jdg_382520 |
stackexchange | llm_judgeable_groundtruth_similarity | 17198 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here are two families of hash functions on strings $\vec{x} = \langle x_0 x_1 x_2 \dots x_m \rangle$: For $p$ prime and $x_i \in \mathbb{Z_p}$, $h^1_{a}(\vec{x}) = \sum a^i x_i \bmod p$ for $a \in \mathbb{Z}_p$. Dietzfelbinger et al. showed in "Polynomial Hash Functions Are Reliable" that $\forall x \neq y, P_a(h^1_a(x) = h^1_a(y)) \leq m/p$. For $x_i \in \mathbb{Z}_{2^b}$, $h^2_{\vec{a} = \langle a_0 a_1 a_2 \dots a_{m+1}\rangle}(\vec{x}) = (a_0 + \sum a_{i+1} x_i \bmod 2^{2b}) \div 2^b$ for $a_i \in \mathbb{Z}_{2^{2b}}$. Lemire and Kaser showed in "Strongly universal string hashing is fast" that this family is 2-independent. This implies that $\forall x \neq y, P_\vec{a}(h^2_\vec{a}(x) = h^2_\vec{a}(y)) = 2^{-b}$ $h^1$ uses only $\lg p$ bits of space and bits of randomness, while $h^2$ uses $2 b m + 2 b$ bits of space and bits of randomness. On the other hand, $h^2$ operates over $\mathbb{Z}_{2^{2b}}$, which is fast on actual computers. I'd like to know what other hash families are almost-universal (like $h^1$), but operate over $\mathbb{Z}_{2^b}$ (like $h^2$), and use $o(m)$ space and randomness. Does such a hash family exist? Can its members be evaluated in $O(m)$ time?
Now provide the response and nothing else.
| Yes. Wegman and Carter's "New hash functions and their use in authentication and set equality" ( mirror ) shows a scheme meeting the requirements stated (almost universal, over $\mathbb{Z}_{2^b}$, sublinear space and randomness, linear evaluation time) based on a small number of hash functions drawn from a strongly universal family. This is sometimes called "tree hashing", and it is used in "Badger - A Fast and Provably Secure MAC" by Boesgaard et al . | {} | {'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/17198', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/43/']} | jdg_382521 |
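For intuition, here is a small Python sketch of the $h^1$ family from the question, using a deliberately tiny prime so that the collision probability over every key $a$ can be computed exactly and compared with the $m/p$ bound; the parameters are illustrative:

```python
import random

def h1(x, a, p):
    """Polynomial hash from the question: h^1_a(x) = sum_i a**i * x_i  (mod p)."""
    acc, power = 0, 1
    for xi in x:
        acc = (acc + power * xi) % p
        power = (power * a) % p
    return acc

random.seed(0)
p = 101                             # small prime so we can enumerate every key a
m = 6                               # message length; the almost-universal bound is m/p
x = [random.randrange(p) for _ in range(m)]
y = [random.randrange(p) for _ in range(m)]
while y == x:                       # make sure the two messages differ
    y = [random.randrange(p) for _ in range(m)]

# Exact collision probability over a uniformly random key a in Z_p:
collisions = sum(h1(x, a, p) == h1(y, a, p) for a in range(p))
print(f"Pr_a[collision] = {collisions}/{p} = {collisions/p:.3f}  (bound m/p = {m/p:.3f})")
```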
stackexchange | llm_judgeable_groundtruth_similarity | 3578456 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Well I have been through many sites teaching on how to solve it, but was wondering how to create it. I am not interested much in the coding aspects of it, but wanted to know more on the algorithms behind it. For example, when the grid is generated with 10 mines or so, I would use any random function to distribute itself across the grid, but then again how do I set the numbers associated to it and decide which box to be opened? I couldn't frame any generic algorithm on how would I go about doing that.
Now provide the response and nothing else.
| Perhaps something along the lines of: grid = [n,m] // initialize all cells to 0 for k = 1 to number_of_mines get random mine_x and mine_y where grid(mine_x, mine_y) is not a mine for x = -1 to 1 for y = -1 to 1 if x = 0 and y = 0 then grid[mine_x, mine_y] = -number_of_mines // negative value = mine else increment grid[mine_x + x, mine_y + y] by 1 That's pretty much it... ** EDIT ** Because this algorithm could lead to creating a board with some mines grouped too closely together or, worse, very dispersed (thus boring to solve), you can then add extra validation when generating the mine_x and mine_y numbers. For example, to ensure that at least 3 neighboring cells are not mines, or even perhaps favor limiting the number of mines that are too far from each other, etc. ** UPDATE ** I've taken the liberty of playing a little with JS Bin here and came up with a functional Minesweeper game demo . This is simply to demonstrate the algorithm described in this answer. I did not optimize the randomness of the generated mine positions, therefore some games could be impossible or too easy. Also, there is no validation as to how many mines there are in the grid, so you can actually create a 2 by 2 grid with 1000 mines.... but that will only lead to an infinite loop :) Enjoy! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3578456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/405442/']} | jdg_382522
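A direct Python rendering of the pseudocode above, using -1 as the mine marker instead of -number_of_mines; the board size and mine count are just example values:

```python
import random

def make_board(n, m, n_mines):
    """Generate an n x m Minesweeper board: -1 marks a mine, other cells hold neighbour counts."""
    grid = [[0] * m for _ in range(n)]
    placed = 0
    while placed < n_mines:
        r, c = random.randrange(n), random.randrange(m)
        if grid[r][c] == -1:
            continue                      # already a mine, pick another cell
        grid[r][c] = -1
        placed += 1
        # Increment every non-mine neighbour of the new mine.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < n and 0 <= cc < m and grid[rr][cc] != -1:
                    grid[rr][cc] += 1
    return grid

for row in make_board(8, 8, 10):
    print(" ".join(f"{cell:2d}" for cell in row))
```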
stackexchange | llm_judgeable_groundtruth_similarity | 6569 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two partitions of $[1 \ldots n]$ and am looking for the edit distance between them. By this, I want to find the minimal number of single transitions of a node into a different group that are necessary to go from partition A to partition B. For example the distance from {0 1} {2 3} {4} into {0} {1} {2 3 4} would be two After searching I came across this paper, but a) I am not sure if they are taking into account the ordering of the groups (something I don't care about) in their distance b) I am not sure how it works and c) There are no references. Any help appreciated
Now provide the response and nothing else.
| This problem can be transformed into the assignment problem , also known as maximum weighted bipartite matching problem. Note first that the edit distance equals the number of elements which need to change from one set to another. This equals the total number of elements minus the number of elements which do not need to change. So finding the minimum number of elements which do not change is equivalent to finding the maximum number of vertices that do not change. Let $A = \{ A_1, A_2, ..., A_k \}$ and $B = \{ B_1, B_2, ..., B_l \}$ be partitions of $[1, 2, ..., n]$. Also, without loss of generality, let $k \ge l$ (allowed because $edit(A, B) = edit(B, A)$). Then let $B_{l+1}$, $B_{l+2}$, ..., $B_k$ all be the empty set. Then the maximum number of vertices that do not change is: $\max_f \sum_{i=1}^k |A_i \cap B_{f(i)} |$ where $f$ is a permutation of $[1, 2, ..., k]$. This is exactly the assignment problem where the vertices are $A_1$, ..., $A_k$, $B_1$, ..., $B_k$ and the edges are pairs $(A_i, B_j)$ with weight $|A_i \cap B_j|$. This can be solved in $O(|V|^2 \log |V| + |V||E|)$ time. | {} | {'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/6569', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/4501/']} | jdg_382523 |
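A sketch of this reduction in Python, assuming a reasonably recent SciPy (linear_sum_assignment with maximize=True); it reproduces the distance of two for the example in the question:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def partition_edit_distance(A, B, n):
    """Minimum number of single-element moves turning partition A of [0..n-1] into partition B."""
    # Pad the smaller partition with empty blocks so both have k blocks.
    k = max(len(A), len(B))
    A = [set(block) for block in A] + [set() for _ in range(k - len(A))]
    B = [set(block) for block in B] + [set() for _ in range(k - len(B))]

    # Weight matrix: |A_i ∩ B_j|; a maximum-weight matching keeps the most elements unmoved.
    overlap = np.array([[len(a & b) for b in B] for a in A])
    rows, cols = linear_sum_assignment(overlap, maximize=True)
    kept = overlap[rows, cols].sum()
    return n - kept

A = [{0, 1}, {2, 3}, {4}]
B = [{0}, {1}, {2, 3, 4}]
print(partition_edit_distance(A, B, 5))   # 2, matching the example in the question
```

Padding with empty blocks simply lets the matching "dissolve" a block whose elements all move elsewhere.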
stackexchange | llm_judgeable_groundtruth_similarity | 20404476 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I create a simple project in AndroidStudio with a few modules. Each module's gradle script contains the following code: android { compileSdkVersion 18 buildToolsVersion "18.1.1" defaultConfig { minSdkVersion 7 targetSdkVersion 18 }} How can I move this code to the main build.gradle script (project's script)? This code is common for all the submodules.
Now provide the response and nothing else.
| You could create a build.gradle at the root of your project (i.e. the folder that contains all your modules), and use it to configure your rootProject. For instance, if you have: MyApp - Module1/ - build.gradle - Module2/ - build.gradle - settings.gradle You can add a build.gradle next to settings.gradle . In the example above you actually have 3 Gradle projects: Module1 , Module2 and the rootProject . So inside this build.gradle , you could do: // use the ext object to add any properties to the projectproject.ext { compileSdkVersion = 18} Then in your modules, you can do: android { // here we reference the root project with the "rootProject" object. compileSdkVersion rootProject.ext.compileSdkVersion} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20404476', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1173794/']} | jdg_382524 |
stackexchange | llm_judgeable_groundtruth_similarity | 366984 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to find general solution of the PDEs $$\frac{∂^2u}{∂x^2}-\frac{∂^2u}{∂y^2}=x^2y^y$$ the problem is the term $y^y$ in the equation. May i solve it by transforming into the canonical form? I have tried but it lead to a complicated equation. Please experts help! many thanks so i think we can just write the general solution in form: $$u(x,y) = \xi(y+x) + \eta(y-x) + \int x^2y^yd(y+x)d(y-x)$$ ??
Now provide the response and nothing else.
| There are several approaches that can solve this inhomogeneous linear ODE. Approach $1$: classical variables transformations Let $\begin{cases}p=x+y\\q=x-y\end{cases}$ , Then $\dfrac{\partial u}{\partial x}=\dfrac{\partial u}{\partial p}\dfrac{\partial p}{\partial x}+\dfrac{\partial u}{\partial q}\dfrac{\partial q}{\partial x}=\dfrac{\partial u}{\partial p}+\dfrac{\partial u}{\partial q}$ $\dfrac{\partial^2u}{\partial x^2}=\dfrac{\partial}{\partial x}\left(\dfrac{\partial u}{\partial p}+\dfrac{\partial u}{\partial q}\right)=\dfrac{\partial}{\partial p}\left(\dfrac{\partial u}{\partial p}+\dfrac{\partial u}{\partial q}\right)\dfrac{\partial p}{\partial x}+\dfrac{\partial}{\partial q}\left(\dfrac{\partial u}{\partial p}+\dfrac{\partial u}{\partial q}\right)\dfrac{\partial q}{\partial x}=\dfrac{\partial^2u}{\partial p^2}+\dfrac{\partial^2u}{\partial pq}+\dfrac{\partial^2u}{\partial pq}+\dfrac{\partial^2u}{\partial q^2}=\dfrac{\partial^2u}{\partial p^2}+2\dfrac{\partial^2u}{\partial pq}+\dfrac{\partial^2u}{\partial q^2}$ $\dfrac{\partial u}{\partial y}=\dfrac{\partial u}{\partial p}\dfrac{\partial p}{\partial y}+\dfrac{\partial u}{\partial q}\dfrac{\partial q}{\partial y}=\dfrac{\partial u}{\partial p}-\dfrac{\partial u}{\partial q}$ $\dfrac{\partial^2u}{\partial y^2}=\dfrac{\partial}{\partial y}\left(\dfrac{\partial u}{\partial p}-\dfrac{\partial u}{\partial q}\right)=\dfrac{\partial}{\partial p}\left(\dfrac{\partial u}{\partial p}-\dfrac{\partial u}{\partial q}\right)\dfrac{\partial p}{\partial y}+\dfrac{\partial}{\partial q}\left(\dfrac{\partial u}{\partial p}-\dfrac{\partial u}{\partial q}\right)\dfrac{\partial q}{\partial y}=\dfrac{\partial^2u}{\partial p^2}-\dfrac{\partial^2u}{\partial pq}-\dfrac{\partial^2u}{\partial pq}+\dfrac{\partial^2u}{\partial q^2}=\dfrac{\partial^2u}{\partial p^2}-2\dfrac{\partial^2u}{\partial pq}+\dfrac{\partial^2u}{\partial q^2}$ $\therefore\dfrac{\partial^2u}{\partial p^2}+2\dfrac{\partial^2u}{\partial pq}+\dfrac{\partial^2u}{\partial q^2}-\left(\dfrac{\partial^2u}{\partial p^2}-2\dfrac{\partial^2u}{\partial pq}+\dfrac{\partial^2u}{\partial q^2}\right)=\left(\dfrac{p+q}{2}\right)^2\left(\dfrac{p-q}{2}\right)^{\frac{p-q}{2}}$ $4\dfrac{\partial^2u}{\partial pq}=\dfrac{(p+q)^2(p-q)^{\frac{p-q}{2}}}{4\times2^{\frac{p-q}{2}}}$ $\dfrac{\partial^2u}{\partial pq}=\dfrac{(p+q)^2(p-q)^{\frac{p-q}{2}}}{16\times2^{\frac{p-q}{2}}}$ $u(p,q)=f(p)+g(q)+\dfrac{1}{16}\int_b^q\int_a^p\dfrac{(s+t)^2(s-t)^{\frac{s-t}{2}}}{2^{\frac{s-t}{2}}}ds~dt$ $u(x,y)=f(x+y)+g(x-y)+\dfrac{1}{16}\int_b^{x-y}\int_a^{x+y}\dfrac{(s+t)^2(s-t)^{\frac{s-t}{2}}}{2^{\frac{s-t}{2}}}ds~dt$ Approach $2$: Duhamel's principle With reference to http://en.wikipedia.org/wiki/Duhamel%27s_principle#Wave_equation and http://en.wikipedia.org/wiki/Wave_equation#Inhomogeneous_wave_equation_in_one_dimension , we have $u(x,y)=f(x+y)+g(x-y)+\dfrac{1}{2}\int_0^x\int_{y-x-s}^{y+x-s}s^2t^t~dt~ds$ or $u(x,y)=f(x+y)+g(x-y)-\dfrac{1}{2}\int_0^y\int_{x-y-t}^{x+y-t}s^2t^t~ds~dt$ Approach $3$: See achille hui's answer | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/366984', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/73454/']} | jdg_382525 |
stackexchange | llm_judgeable_groundtruth_similarity | 23393761 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently I'm retrieving the data from a URL using the following code: Document doc = Jsoup.connect(url).get(); Before I fetch the data, I've decided I want to get the content type, so I do that using the following: Connection.Response res = Jsoup.connect(url).timeout(10*1000).execute();String contentType = res.contentType(); Now I'm wondering: is this making 2 separate connections? Isn't that inefficient? Is there a way for me to get the content type and the document data in a single connection? Thanks
Now provide the response and nothing else.
| Yes, Jsoup.connect(url).get() and Jsoup.connect(url).timeout(10*1000).execute(); are two separate connections. Maybe you are looking for something like Response resp = Jsoup.connect(url).timeout(10*1000).execute();String contentType = resp.contentType(); and later parsing the body of the response as a Document: Document doc = resp.parse(); Anyway, Jsoup by default parses only text/* , application/xml , or application/xhtml+xml , and if the content type is something else, like application/pdf , it will throw an UnsupportedMimeTypeException , so you shouldn't be worried about it. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23393761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/290957/']} | jdg_382526
stackexchange | llm_judgeable_groundtruth_similarity | 3687074 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wrote my first ever C++ template code on expandable array and I am getting a segmentation fault! After an hour of debugging I have realized that I need help. Something is wrong with the constructor or the destructor I think but not sure. The code is on pastie ready to be compiled. http://pastie.org/1150617 /* Expandable array in C++ */#include <iostream>using namespace std;template <class T>class EArray{private: T* arr; int size;public: EArray(int l); ~EArray(); void setElement(int i, const T& newval); void eraseElement(int i); void addElement(int i, const T& newval); void push(const T& newval); void display();};template <class T>EArray<T>::EArray(int l){ size = l;}template <class T>EArray<T>::~EArray(){ delete [] arr; arr = NULL;}template <class T>void EArray<T>::setElement(int i, const T& newval){ if(i < size && i >= 0){ arr[i] = newval; }}template <class T>void EArray<T>::eraseElement(int index){ size -= 1; T* newarr = new T[size]; for (int i = 0; i < size+1; i++){ if (i < index){ newarr[i] = arr[i]; } else if(i > index){ newarr[i-1] = arr[i]; } } delete [] arr; arr = newarr;}template <class T>void EArray<T>::addElement(int index, const T& newval){ size += 1; T* newarr = new T[size]; for(int i = 0; i < size; i++){ if(i<index){ newarr[i] = arr[i]; } else if (i == index){ newarr[i] = newval; } else{ newarr[i] = arr[i-1]; } } delete [] arr; arr = newarr;}template <class T>void EArray<T>::push(const T& newval){ size += 1; T * newarr = new T[size]; for (int i = 0; i < size-1; i++){ newarr[i] = arr[i]; } newarr[size-1]=newval; delete [] arr; arr = newarr;}template <class T>void EArray<T>::display(){ for(int i = 0; i < size; i++){ cout << arr[i] << endl; }}int main(){ EArray<int> A(6); A.setElement(0,34); A.setElement(1,544); A.setElement(2,32); A.setElement(3,324); A.setElement(4,24); A.display(); A.addElement(3,12); A.display(); A.eraseElement(4); A.display(); A.push(32456); A.display();}
Now provide the response and nothing else.
| It has nothing to do with templates. It's just a problem of memory management. In the constructor of EArray , you have never initialized arr , so by default it contains some invalid pointer. But then in setElement , you used this invalid pointer arr[i] = newval; , which should cause a SegFault. It should be fixable by adding arr = new T[size]; in the constructor (result: before, with segfault — after, running fine ). (BTW, in practice, please use a std::vector .) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3687074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/294714/']} | jdg_382527 |
stackexchange | llm_judgeable_groundtruth_similarity | 22419063 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Yesterday, I updated Xcode to the newest version (5.1 (5B130a)) to compatible with iOS 7.1 . Then I build my project, I get the error "Cast from pointer to smaller type 'int' loses information" in EAGLView.mm file ( line 408 ) when 64-bit simulators (e.g.: iPhone Retina 4-inch 64-bit) is selected. I'm using cocos2d-x-2.2.2 . Before I update Xcode, my project still can build and run normally with all devices. Thanks for all recommendation. Update: Today, i download the latest version of cocos2d-x (cocos2d-x 2.2.3). But the problem has still happened. Here is some piece of code where that error occur: /cocos2d-x-2.2.2/cocos2dx/platform/ios/EAGLView.mm:408:18: Cast from pointer to smaller type 'int' loses information // Pass the touches to the superview#pragma mark EAGLView - Touch Delegate- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{ if (isKeyboardShown_) { [self handleTouchesAfterKeyboardShow]; return; } int ids[IOS_MAX_TOUCHES_COUNT] = {0}; float xs[IOS_MAX_TOUCHES_COUNT] = {0.0f}; float ys[IOS_MAX_TOUCHES_COUNT] = {0.0f}; int i = 0; for (UITouch *touch in touches) { ids[i] = (int)touch; // error occur here xs[i] = [touch locationInView: [touch view]].x * view.contentScaleFactor;; ys[i] = [touch locationInView: [touch view]].y * view.contentScaleFactor;; ++i; } cocos2d::CCEGLView::sharedOpenGLView()->handleTouchesBegin(i, ids, xs, ys);}
Now provide the response and nothing else.
| Apparently the clang version in Xcode 5.1 and above is more strict about potential 32bit vs. 64 bit incompatibilities in source code than older clang versions have been.To be honest, I think, clang is too restrictive here. A sane compiler may throw a warning on lines like this but by no way it should throw an error, because this code is NOT wrong, it is just potentially error-prone, but can be perfectly valid. The original code is ids[i] = (int)touch; with ids being an array of ints and touch being a pointer. In a 64bit build a pointer is 64bit (contrary to a 32bit build, where it is 32bit), while an int is 32bit, so this assignment stores a 64bit value in a 32bit storage, which may result in a loss of information. Therefore it is perfectly valid for the compiler to throw an error for a line like ids[i] = touch; However the actual code in question contains an explicit c-style cast to int. This explicit cast clearly tells the compiler "Shut up, I know that this code does not look correct, but I do know what I am doing". So the compiler is very picky here and the correct solution to make the code compile again and still let it show the exact same behavior like in Xcode 5.0 is to first cast to an integer type with a size that matches the one of a pointer and to then do a second cast to the int that we actually want: ids[i] = (int)(size_t)touch; I am using size_t here, because it is always having the same size as a pointer, no matter the platform. A long long would not work for 32bit systems and a long would not work for 64 bit Windows (while 64bit Unix and Unix-like systems like OS X use the LP64 data model, in which a long is 64bit, 64bit Windows uses the LLP64 data model, in which a long has a size of 32bit ( http://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models )). | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22419063', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3090526/']} | jdg_382528 |
stackexchange | llm_judgeable_groundtruth_similarity | 71657 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A typical account creation process seems to be: Provide email address and set a password Receive confirmation email with a link and/or hashed token Click the link to verify and/or enter the token on the site However, I once read somewhere (and I can't find this now, which is why I'm asking) that a better process would modify Step 3 to also require the user to login using the password provided in Step 1. I think the rationale was that this extra precaution ensures that the person verifying the email address is the same person who created the account. Question: does the above explanation make sense, and should I implement email verification by requiring password-based login? It makes some sense to me, and at least it doesn't seem harmful -- other than making the user experience slightly more cumbersome. But I see many online services that do not require this, and I wonder why. For example, here's the scenario I worry about. What if person #1 initially created the account but specified the wrong email address (maliciously or accidentally), and it got sent to person #2. If person #2 is naive, he/she might verify that email address by just clicking the link... and then forget about it. Then person #1 could still login using the password. Suppose person #1 does all sorts of bad stuff on that account. Would person #2 be responsible? I think an alternative solution might be to ask new users to first specify just an email address, then confirm that with a hashed token, and then ask them to set a password. But I don't see very many online services that do it this way, either.
Now provide the response and nothing else.
| You certainly should authenticate the user who clicks the link. Otherwise, as you say, someone could inadvertently confirm an email address. However, it is usually possible to authenticate the user transparently, so they don't need to enter their password. The way we achieve this is using a session cookie. During the signup process, the new user is issued a session cookie. When they click the confirmation link in the email, they will usually use the same browser, so the session cookie will be attached. The web site can verify the user's identity using that cookie. | {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/71657', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/59540/']} | jdg_382529 |
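To complement the answer above, here is a minimal, framework-agnostic Python sketch of the confirmation step it describes. All of the names (`USERS`, `PENDING_TOKENS`, `confirm_email`, `current_user_id`) are hypothetical stand-ins for a real database and session layer, not any particular product's API; the point is only that the token owner and the already-authenticated session user must match before the address is marked verified.

```python
import secrets

# Hypothetical in-memory stores standing in for a real database/session layer.
USERS = {42: {"email": "person2@example.com", "email_verified": False}}
PENDING_TOKENS = {}  # token -> user_id


def issue_confirmation_token(user_id):
    """Create an unguessable token tied to the account that requested verification."""
    token = secrets.token_urlsafe(32)
    PENDING_TOKENS[token] = user_id
    return token


def confirm_email(token, current_user_id):
    """Mark the address verified only if the clicker is the same authenticated user.

    current_user_id is whoever the session cookie (or a fresh login) identifies;
    if it does not match the account that owns the token, refuse rather than
    letting a stranger inadvertently confirm someone else's address.
    """
    owner_id = PENDING_TOKENS.get(token)
    if owner_id is None or owner_id != current_user_id:
        return False
    USERS[owner_id]["email_verified"] = True
    del PENDING_TOKENS[token]
    return True


if __name__ == "__main__":
    t = issue_confirmation_token(42)
    print(confirm_email(t, current_user_id=7))   # False: different (or missing) session user
    print(confirm_email(t, current_user_id=42))  # True: token owner and session user agree
```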
stackexchange | llm_judgeable_groundtruth_similarity | 135167 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This question is derived from a question asked in my school test. What happens when a magnesium ribbon is heated in air? My first response was the formation of magnesium oxide $(\ce{MgO})$ when oxygen in air reacts with magnesium at a high temperature which can be expressed in the form of a chemical equation like this: $$\ce{2 Mg(s) + O2(g) ->[\Delta] 2 MgO(s)},$$ but I was wondering if magnesium could react with any other gas in the air to form a compound with that gas and I found out that magnesium does react with nitrogen in the air to form magnesium nitride too: $$\ce{3 Mg(s) + N2(g) ->[\Delta] Mg3N2(s)}.$$ What determines whether the heated magnesium ribbon will react with the oxygen in the atmosphere or the nitrogen in the atmosphere? Two possibilities that I can think of are: composition of the air; temperature. I don't think that composition is the answer because on average the atmosphere of Earth has more nitrogen than oxygen, so I think that the answer may be temperature. I'd also like to know how the factor affects the chemical reaction on an atomic level.
Now provide the response and nothing else.
| A large pile of grey magnesium powder, when lit in air, produces a smouldering pile which cools down to reveal a crusty white solid of magnesium oxide. However, if you break apart the mound, you can find something quite strange in the middle - a clearly brownish powder that wasn't there before. Seeing is believing! The author of the video also has a clever idea to identify the brown solid. By adding water and placing some moist pH paper above the puddle, it clearly shows the transfer of some alkaline substance across the gap. This is ammonia gas, $\ce{NH3}$ , whose presence is explained by the hydrolysis of magnesium nitride: $$\ce{Mg3N2(s) + 6H2O(l) -> 3 Mg(OH)2(aq) + 2 NH3(g)}$$ It is important that the pH paper not come in direct contact with the water used to hydrolyze the magnesium oxide, as $\ce{Mg(OH)2}$ is itself also basic, and could also be formed by reaction with either $\ce{MgO}$ or $\ce{Mg}$ directly. Only $\ce{Mg3N2}$ produces a basic gas which forms an alkaline solution in water. As you can see, magnesium metal does react directly with molecular nitrogen ( $\ce{N2}$ ) when burned in air. However, the reaction is thermodynamically and kinetically less favourable than the reaction with molecular oxygen ( $\ce{O2}$ ). This is almost certainly due to the extreme strength of the bond between nitrogen atoms in molecular $\ce{N2}$ , whose bond dissociation energy of $\mathrm{945\ kJ\ mol^{-1}}$ is one of the strongest in all of chemistry, second only to the bond in carbon monoxide. For comparison, the bond dissociation energy of molecular $\ce{O2}$ is drastically lower, at $\mathrm{498\ kJ\ mol^{-1}}$ . So why did the Chem13 magazine article referenced in Aniruddha Deb's answer not find any magnesium nitride? It is likely that 1 g of magnesium metal is far too little for the experiment run under their conditions. It takes a significant amount of "sacrificial" magnesium to completely consume the oxygen in its surroundings. Only once practically all the oxygen is consumed (and while the pile of magnesium is still hot enough from the reaction between magnesium and oxygen) will the remaining magnesium metal react with the nitrogen in air. Alternatively, the reaction would have to be performed in an oxygen-free environment. Magnesium metal is such a strong reductant that many substances can act as an oxidant for it, including pure $\ce{CO2}$ (also shown in the video above) and water ( never put out a magnesium fire with water! ). | {} | {'log_upvote_score': 6, 'links': ['https://chemistry.stackexchange.com/questions/135167', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/92430/']} | jdg_382530 |
stackexchange | llm_judgeable_groundtruth_similarity | 83962 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In a few spots I got this to work: ls -rt /path/to/log/file/ | tail -1 | xargs tail -f But I can't figure out why it won't always work. (This is a quick fix until I get the log rolling fixed here.)
Now provide the response and nothing else.
| Sort command and THE most recent log file Why using -r (reverse sort order) than reaching end of output with tail ? Using normal sort order and take first entry would be quicker! tail -f `/bin/ls -1td /path/to/log/file/*| /usr/bin/head -n1` or tail -f $(/bin/ls -1t /path/to/log/file/* | /bin/sed q) work fine. Nota: I like to use sed because this command is present in /bin , maybe before /usr are mounted. tail -f `/bin/ls -1tr /path/to/log/file/* | /bin/sed -ne '$p'` would work but, as already said: inversing sort order, than dropping whole ouput for using only last entry is not a real good idea ;-) Warning , in last directory, * have to not match a directory, or else command tail won't know how to open it. Same but using find for searching for most recent file: read -a file < <( find /tmp 2>/dev/null -type f -mmin +-1 -mmin -10 -printf "%Ts %p\n" | sort -rn)tail -f ${file[1]} Note: the -mmin +-1 ensure to not list bad timed files: in the futur . read is builtin, create an array and prevent the use of head -n1| cut -d \ -f2 -mmin -10 could be changed or suppressed, but this prevent long sort process. But tail support to watch about more than one file: Try to open two shell console and try this: In 1st console: user@host[pts/1]:~$ touch /tmp/file_{1,2,3}user@host[pts/1]:~$ tail -f /tmp/file_{1,2,3}==> /tmp/file_1 <====> /tmp/file_2 <====> /tmp/file_3 <== in second one, while keeping 1st console visible, hit many time: user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM25285user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM16381user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM19766user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM3053 1st console could look like: ==> /tmp/file_2 <==25285==> /tmp/file_1 <==1638119766==> /tmp/file_3 <==3053 ... In the idea of SO question, but time based, multi files By using find command, we could watch on last minutes modified files -mmin or last days -mtime : find /path/to/logdir -type f -mmin -10 -exec tail -f {} + for watching for logfiles modified last 10 minutes. Note: Have a look at man tail , about -F option for long time watch -q option for not printing file names Fancy formatting find /path/to/logdir -type f -mmin -10 -exec tail -f {} + | sed -une 's/^==> .path.to.logdir.\(.*\) <==$/\1 /;ta;bb; :a;s/^\(.\{12\}\) *$/\1: /;h;bc; :b;G;s/^\(..*\)\n\(.*\)/\2 \1/p;:c;' Where you could modify .path.to.logdir. and change 12 for more suitable length. For sample, keeping our two console, stop 1st and try user@host[pts/1]:~$ find /tmp/ -type f -mtime -1 -name 'file_?' -exec tail -f {} + | sed -une 's/^==> .tmp.\(.*\) <==$/\1 /;ta;bb; :a;s/^\(.\{12\}\) *$/\1: /;h;bc; :b;G;s/^\(..*\)\n\(.*\)/\2 \1/p;:c;'file_2 : 25285file_1 : 16381file_1 : 19766file_3 : 3053 than in second console, hit again some user@host[pts/2]:~$ tee -a /tmp/file_$((RANDOM%3+1)) <<<$RANDOM | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/83962', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/43558/']} | jdg_382531 |
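If the shell quoting in the answer above gets fragile, the same idea (pick the most recently modified file, then follow it) can also be expressed in a few lines of Python. This is only a sketch: the log directory path is an assumed placeholder, and the follow behaviour is delegated to the system tail.

```python
import glob
import os
import subprocess

LOG_DIR = "/path/to/log/file"  # assumed location, adjust as needed

# Most recently modified regular file in the directory.
candidates = [p for p in glob.glob(os.path.join(LOG_DIR, "*")) if os.path.isfile(p)]
latest = max(candidates, key=os.path.getmtime)

# Hand off to the system tail so we keep its -f behaviour.
subprocess.run(["tail", "-f", latest])
```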
stackexchange | llm_judgeable_groundtruth_similarity | 24359975 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using jPages pagination library for pagination in my rails application. I have set to show 10 records per page.How can I show the second page records on page load.Here the configuration for jPages: $(function(){ $("div.holder").jPages({ containerID : "ReviewContainer", previous : "", next : "", perPage : 4, delay : 20, callback: function (pages, items) { if (items.count > 4) { $("div.holder").show(); } else { $("div.holder").hide(); } } });}); I have some records on my home page where I used the pagination. And if click on any record it will redirect to its details page where I used the same pagination configuration. But if I clicked on the 12th record from home page then on the detail page it will show 10 records only. How can I configure it to show the selected record on detail page as well.
Now provide the response and nothing else.
| -(NSMutableAttributedString*)decorateTags:(NSString *)stringWithTags{ NSError *error = nil; //For "Vijay #Apple Dev" NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"#(\\w+)" options:0 error:&error]; //For "Vijay @Apple Dev" //NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"@(\\w+)" options:0 error:&error]; NSArray *matches = [regex matchesInString:stringWithTags options:0 range:NSMakeRange(0, stringWithTags.length)]; NSMutableAttributedString *attString=[[NSMutableAttributedString alloc] initWithString:stringWithTags]; NSInteger stringLength=[stringWithTags length]; for (NSTextCheckingResult *match in matches) { NSRange wordRange = [match rangeAtIndex:1]; NSString* word = [stringWithTags substringWithRange:wordRange]; //Set Font UIFont *font=[UIFont fontWithName:@"Helvetica-Bold" size:15.0f]; [attString addAttribute:NSFontAttributeName value:font range:NSMakeRange(0, stringLength)]; //Set Background Color UIColor *backgroundColor=[UIColor orangeColor]; [attString addAttribute:NSBackgroundColorAttributeName value:backgroundColor range:wordRange]; //Set Foreground Color UIColor *foregroundColor=[UIColor blueColor]; [attString addAttribute:NSForegroundColorAttributeName value:foregroundColor range:wordRange]; NSLog(@"Found tag %@", word); } // Set up your text field or label to show up the result // yourTextField.attributedText = attString; // // yourLabel.attributedText = attString; return attString;} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24359975', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3467205/']} | jdg_382532 |
stackexchange | llm_judgeable_groundtruth_similarity | 1570131 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How many ways are there to roll a die seven times and obtain a sequence of outcomes with three 1s, two 5s, and two 6s? Ans: When I was thinking of a way to decompose the problem, I first thought that if a die is rolled seven times then each roll is independent of the others, thus $6^7$ for all possible outcomes... then I had trouble with how to take care of the constraint. The solution given is: $$\frac{7!}{3!2!2!}$$ The section I am covering right now is about repetition, but I do not see at all how this could be the result. I thought I was supposed to find the outcomes WITH three 1s, two 5s, and two 6s? Isn't dividing those sequences out doing the exact opposite, so that we are now finding all the arrangements WITHOUT three 1s, two 5s, and two 6s?
Now provide the response and nothing else.
| Another way to think about it goes like this: So you roll seven times. First, choose 3 spots for the 1s, $$\binom{7}{3}.$$ Next, choose 2 spots for the 5s. We already used up 3, so this is $$\binom{4}{2}.$$ Finally, choose the last two spots for the 6s, $$\binom{2}{2}.$$ This gives $$\binom{7}{3}\binom{4}{2}\binom{2}{2} = \frac{7!\,4!\,2!}{3!\,4!\,2!\,2!\,2!\,0!} = \binom{7}{3,2,2}.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1570131', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/183917/']} | jdg_382533
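A quick numeric check (assuming Python 3.8+ for math.comb) confirms that the multinomial coefficient in the question and the step-by-step count in the answer above agree:

```python
from math import comb, factorial

multinomial = factorial(7) // (factorial(3) * factorial(2) * factorial(2))
step_by_step = comb(7, 3) * comb(4, 2) * comb(2, 2)

print(multinomial, step_by_step)  # 210 210
assert multinomial == step_by_step == 210
```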
stackexchange | llm_judgeable_groundtruth_similarity | 13863523 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to create numpy.ndarray objects that hold complex integer values in them. NumPy does have complex support built-in, but for floating-point formats ( float and double ) only; I can create an ndarray with dtype='cfloat' , for example, but there is no analogous dtype='cint16' . I would like to be able to create arrays that hold complex values represented using either 8- or 16-bit integers. I found this mailing list post from 2007 where someone inquired about such support. The only workaround they recommended involved defining a new dtype that holds pairs of integers. This seems to represent each array element as a tuple of two values, but it's not clear what other work would need to be done in order to make the resulting data type work seamlessly with arithmetic functions. I also considered another approach based on registration of user-defined types with NumPy. I don't have a problem with going to the C API to set this up if it will work well. However, the documentation for the type descriptor strucure seems to suggest that the type's kind field only supports signed/unsigned integer, floating-point, and complex floating-point numeric types. It's not clear that I would be able to get anywhere trying to define a complex integer type. What are some recommendations for an approach that may work? Whatever scheme I select, it must be amenable to wrapping of existing complex integer buffers without performing a copy. That is, I would like to be able to use PyArray_SimpleNewFromData() to expose the buffer to Python without having to make a copy of the buffer first. The buffer would be in interleaved real/imaginary format already, and would either be an array of int8_t or int16_t .
Now provide the response and nothing else.
| I also deal with lots of complex integer data, generally basebanded data. I use dtype = np.dtype([('re', np.int16), ('im', np.int16)]) It's not perfect, but it adequately describes the data. I use it for loading into memory without doubling the size of the data. It also has the advantage of being able to load and store transparently with HDF5. DATATYPE H5T_COMPOUND { H5T_STD_I16LE "re"; H5T_STD_I16LE "im";} Using it is straightforward and is just different. x = np.zeros((3,3),dtype)x[0,0]['re'] = 1x[0,0]['im'] = 2x>> array([[(1, 2), (0, 0), (0, 0)],>> [(0, 0), (0, 0), (0, 0)],>> [(0, 0), (0, 0), (0, 0)]],>> dtype=[('re', '<i2'), ('im', '<i2')]) To do math with it, I convert to a native complex float type. The obvious approach doesn't work, but it's also not that hard. y = x.astype(np.complex64) # doesn't work, only gets the real party = x['re'] + 1.j*x['im'] # works, but slow and bigy = x.view(np.int16).astype(np.float32).view(np.complex64)y>> array([[ 1.+2.j, 0.+0.j, 0.+0.j],>> [ 0.+0.j, 0.+0.j, 0.+0.j],>> [ 0.+0.j, 0.+0.j, 0.+0.j]], dtype=complex64) This last conversion approach was inspired by an answer to What's the fastest way to convert an interleaved NumPy integer array to complex64? | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13863523', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/919747/']} | jdg_382534 |
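Since the question also asks about wrapping an existing interleaved int8/int16 buffer without copying, here is a small sketch building on the compound dtype from the answer above. The sample buffer is made up for illustration, but the .view() calls reinterpret the existing memory rather than copying it.

```python
import numpy as np

cplx_int16 = np.dtype([('re', np.int16), ('im', np.int16)])

# Interleaved real/imag samples as delivered by a driver: re0, im0, re1, im1, ...
raw = np.array([1, 2, 3, -4, 5, 6], dtype=np.int16)

# Zero-copy reinterpretation of the interleaved buffer as (re, im) records.
z = raw.view(cplx_int16)
print(z)                 # [(1, 2) (3, -4) (5, 6)]
print(z['re'], z['im'])  # [1 3 5] [ 2 -4  6]

# For arithmetic, widen to a native complex type (this step does copy).
zc = raw.astype(np.float32).view(np.complex64)
print(zc)                # [1.+2.j 3.-4.j 5.+6.j]
```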
stackexchange | llm_judgeable_groundtruth_similarity | 69672 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
First, a little motivation: I have read the section on Group Actions in Dummit & Foote, the Wikipedia page, and (countably many) other references. And seemingly without exception, they only offer rote and/or abstract examples, such as: Let $ga = a$ for all $ g \in G, a \in A$ The symmetric group $S_N$ acting on $A$ by $\sigma \cdot a = \sigma(a)$ Something about regular n-gons and $D_{2n}$ $g \cdot a = ga$... I don't mean to undermine the importance of these examples, but I'm left with no hands-on experience with these things. Exercise $\S$ 1.7.8(b) in D&F says: "Describe explicitly how the elements $(1 \ 2)$ and $(1 \ 2 \ 3)$ act on the six 2-element subsets of $\left\{1, 2, 3, 4\right\}$." How does a 3-cycle permute two-element subsets? Furthermore, what are some concrete examples of (computational exercises involving) group actions? Thanks.
Now provide the response and nothing else.
| You ask, in a comment, for a non-obvious action on $X=\{1,2,3,4\}$. Let me give you, instead, a non-trivial action of $S_5$ on $X=\{1,2,3,4,5,6\}$: it is given by an homomorphism $\phi:S_5\to S_6$ such that \begin{align}(1,2)&\longmapsto(1,2)(3,4)(5,6) \\(1,2,3,4,5)&\longmapsto(1,2,3,4,5)\end{align}You should check that this homomorphism is injective. In fact, you should find all ways in which $S_5$ can act on this $X$. If you change the numbers $5$ and $6$ and look for examples, you'll have lots of fun. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/69672', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11020/']} | jdg_382535 |
stackexchange | llm_judgeable_groundtruth_similarity | 25622975 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using OOTB Alfresco 5 Community edition running on Ubuntu14.04 Steps:Create site through the share UI. Copy request as curl from Chromium developer tools.Reconstructed request in python requests library as: s=requests.post('http://<IP>:8080/share/service/modules/create-site',data=site_data,cookies=THE_cookie Where THE_cookie was obtained via a POST to http://:8080/share/page/dologin, which gave a 200, and site_data has different names to the site created through the share UI. That request gave a 500 error stating that u'freemarker.core.InvalidReferenceException: The following has evaluated to null or missing:\n==> success [in template "org/alfresco/modules/create-site.post.json.ftl" at line 2, column 17]\n\nTip: If the failing expression is known to be legally null/missing, either specify a default value with myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthessis: (myOptionVar.foo)!myDefault, (myOptionVar.foo)??\n\nThe failing instruction:\n==> ${success?string} [in template "org/alfresco/modules/create-site.post.json.ftl" at line 2, column 15]', ... When in Chromium, there is no response, but a site is created successfully. I've also not got the curl request from the command line to work -- it needs the CSRF token removed, then gives a 200 and does nothing; no logs. My understanding is that Alfresco always gives a 200 on a successful request regardless of whether it's a GET or POST. If anyone has any ideas that would be amazing. There doesn't seem to be anything that we can do to get create-site to work outside of the share UI, but we absolutely need it to do so.
Now provide the response and nothing else.
| Actually, git rebase has had this all along (well, since 1.6.something). Remember that git pull is just git fetch followed by either git merge or git rebase (as directed by various options and settings). That is, git pull does a pair of underlying git operations. The reason it does not work after you have finished your git pull is that git pull is in fact that pair of operations, and if you just did one pair, there are probably no new changes brought in by your second git fetch , so there is nothing new to show. If you were to use your reflog history to reset items to the state they were in before the first pair of operations, the second pair of operations would have something to show. You can (as I see VonC has already noted ) get an after-the-fact report on what changed in either the upstream or your local branch using your own reflogs. But presumably you want to see this on the next git pull even if that does a rebase , and that's where a sneaky set of git defaults come in. If you manually add --stat to your git pull line, the pull script passes this option on to either git merge or git rebase (whichever one it runs). If you leave this option out , git relies on a default option. The default --stat option for git merge is the setting in your configuration's merge.stat , which defaults to True . The default --stat option for git rebase is the setting in your configuration's rebase.stat , which defaults to False . Let me repeat that, because it's just peculiar and non-obvious (I only discovered this because of your question—I generally avoid git pull myself). The default --stat option for merge is True but the default --stat option for rebase is False . If you set the defaults yourself, you can make the stat output show up. Alternatively, you can pass an explicit --stat to git pull . Note that you can also set pull.rebase (to either True , meaning default to rebasing, or preserve , meaning default to rebase with --preserve as well), or branch. name .rebase (to True or preserve ), to make git pull use --rebase . This is independent of whether you supply --stat as a pull argument. Edit : VonC's answer is gone (at least right now) but there are bits of it in various comments. The trick to use after the fact is that when git fetch updates origin/develop , your own reflog now contains an origin/develop@{1} : your own previous value for origin/develop , before git fetch updated it. Thus, you can select the revision-set that came in with origin/develop@{1}..origin/develop . Diffing those two with --stat will get you the desired output. Note that you may, or may not depending on your particular shell, have to quote the curly braces in the @{1} part. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25622975', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3782911/']} | jdg_382536 |
stackexchange | llm_judgeable_groundtruth_similarity | 35823835 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Excel spreadsheet that I need to import into SQL Server on a daily basis. The spreadsheet will contain around 250,000 rows across around 50 columns. I have tested both using openpyxl and xlrd using nearly identical code. Here's the code I'm using (minus debugging statements): import xlrdimport openpyxldef UseXlrd(file_name): workbook = xlrd.open_workbook(file_name, on_demand=True) worksheet = workbook.sheet_by_index(0) first_row = [] for col in range(worksheet.ncols): first_row.append(worksheet.cell_value(0,col)) data = [] for row in range(1, worksheet.nrows): record = {} for col in range(worksheet.ncols): if isinstance(worksheet.cell_value(row,col), str): record[first_row[col]] = worksheet.cell_value(row,col).strip() else: record[first_row[col]] = worksheet.cell_value(row,col) data.append(record) return datadef UseOpenpyxl(file_name): wb = openpyxl.load_workbook(file_name, read_only=True) sheet = wb.active first_row = [] for col in range(1,sheet.max_column+1): first_row.append(sheet.cell(row=1,column=col).value) data = [] for r in range(2,sheet.max_row+1): record = {} for col in range(sheet.max_column): if isinstance(sheet.cell(row=r,column=col+1).value, str): record[first_row[col]] = sheet.cell(row=r,column=col+1).value.strip() else: record[first_row[col]] = sheet.cell(row=r,column=col+1).value data.append(record) return dataxlrd_results = UseXlrd('foo.xls')openpyxl_resuts = UseOpenpyxl('foo.xls') Passing the same Excel file containing 3500 rows gives drastically different run times. Using xlrd I can read the entire file into a list of dictionaries in under 2 second. Using openpyxl I get the following results: Reading Excel File...Read 100 lines in 114.14509415626526 secondsRead 200 lines in 471.43183994293213 secondsRead 300 lines in 982.5288782119751 secondsRead 400 lines in 1729.3348784446716 secondsRead 500 lines in 2774.886833190918 secondsRead 600 lines in 4384.074863195419 secondsRead 700 lines in 6396.7723388671875 secondsRead 800 lines in 7998.775000572205 secondsRead 900 lines in 11018.460735321045 seconds While I can use xlrd in the final script, I will have to hard code a lot of formatting because of various issues (i.e. int reads as float, date reads as int, datetime reads as float). Being that I need to reuse this code for a few more imports, it doesn't make sense to try and hard code specific columns to format them properly and have to maintain similar code across 4 different scripts. Any advice on how to proceed?
Now provide the response and nothing else.
| You can just iterate over the sheet: def UseOpenpyxl(file_name): wb = openpyxl.load_workbook(file_name, read_only=True) sheet = wb.active rows = sheet.rows first_row = [cell.value for cell in next(rows)] data = [] for row in rows: record = {} for key, cell in zip(first_row, row): if cell.data_type == 's': record[key] = cell.value.strip() else: record[key] = cell.value data.append(record) return data This should scale to large files. You may want to chunk your result if the list data gets too large. Now the openpyxl version takes about twice as long as the xlrd one: %timeit xlrd_results = UseXlrd('foo.xlsx')1 loops, best of 3: 3.38 s per loop%timeit openpyxl_results = UseOpenpyxl('foo.xlsx')1 loops, best of 3: 6.87 s per loop Note that xlrd and openpyxl might interpret what is an integer and what is a float slightly differently. For my test data, I needed to add float() to make the outputs comparable: def UseOpenpyxl(file_name): wb = openpyxl.load_workbook(file_name, read_only=True) sheet = wb.active rows = sheet.rows first_row = [float(cell.value) for cell in next(rows)] data = [] for row in rows: record = {} for key, cell in zip(first_row, row): if cell.data_type == 's': record[key] = cell.value.strip() else: record[key] = float(cell.value) data.append(record) return data Now, both versions give the same results for my test data: >>> xlrd_results == openpyxl_resultsTrue | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35823835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2348241/']} | jdg_382537 |
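If the result list itself becomes a memory problem for the 250,000-row files mentioned in the question, the loop in the answer above can be turned into a generator so rows are handed to the database insert one at a time. This is only a sketch along the same lines, not a drop-in replacement, and insert_into_db is a hypothetical sink.

```python
import openpyxl

def iter_records(file_name):
    """Yield one dict per data row instead of materialising the whole list."""
    wb = openpyxl.load_workbook(file_name, read_only=True)
    sheet = wb.active
    rows = sheet.rows
    header = [cell.value for cell in next(rows)]
    for row in rows:
        record = {}
        for key, cell in zip(header, row):
            value = cell.value
            record[key] = value.strip() if isinstance(value, str) else value
        yield record

# Usage sketch:
# for record in iter_records('foo.xlsx'):
#     insert_into_db(record)  # hypothetical sink
```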
stackexchange | llm_judgeable_groundtruth_similarity | 13189365 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a function (see below) that is emitting the following warning: second parameter of ‘va_start’ not last named argument What does it mean, and how do I remove it? The function is as follows: static int ui_show_warning(GtkWindow *parent, const gchar *fmt, size_t size, ...) { GtkWidget *dialog = NULL; va_list args = NULL; int count = -1; char *msg = NULL; if((msg = malloc(size + 1)) == NULL) return -12; va_start(args, fmt); if((count = snprintf(msg, size, fmt, args)) < 0) goto outer; dialog = gtk_message_dialog_new(parent, GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_WARNING, GTK_BUTTONS_OK, "%s", msg); (void) gtk_dialog_run(GTK_DIALOG(dialog)); gtk_widget_destroy(dialog); outer: { if(args != NULL) va_end(args); if(msg != NULL) free(msg); return count; } }
Now provide the response and nothing else.
| You need to use size instead of fmt : va_start(args, size); It is size , not fmt , that is the last parameter that has an explicit name (as opposed to vararg parameters, which have no names). You need to pass the last named parameter to va_start in order for it to figure out the address in memory at which the vararg parameters start. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13189365', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/800123/']} | jdg_382538 |
stackexchange | llm_judgeable_groundtruth_similarity | 10784530 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I think my jquery ajax page loading is a bit slow. When I click my href with an onClick handler sometimes it doesn't response and I have to doublelick? How can I avoid this? <a target="_blank" class="arrow" href="javascript:void();" onclick="f(\"value",\"value\");return false;"> <div class="teaser"> <h3></h3> <p class="subheadline"></p> <hr class="divider"> <p></p> </div> <img height="353" class="img" src="example.jpg" width="374"> </a> Maybe it's because of the empy h3 and p tags? It feels a bit sluggish?? My function f is: function f (url, id){ var btn = $(this); if (btn.data('running')) return; $j('#tx #singleview').empty(); btn.data('running', true); url="http://"+url+"index.php?id="+id+"&eID=tx"; url_stack.push(url); $j.getJSON(url, function(json) { $('#container').css({ "z-index" : "-100", "margin-left" : "148", "top" : "-800" }) $('#container').animate({ "opacity": "0.0", "top": '+=800' }, 900, function() { // Animation complete. }); var singleview = $j('#tx #singleview'); singleview.css({ "position": "relative", "top": "-800px", "height": "800px" }) singleview.append($j("#singleviewTemplate").tmpl(json).css({ "visibility": "visible", "position": "relative" })); singleview.animate({ opacity: 100.0, top: 0 }, 900, function() { // Unset it here, this lets the button be clickable again btn.data('running', false); }); });}
Now provide the response and nothing else.
| I came across the same problem. To be precise, I have two branches A and B with the same files but a different programming interface in some files. Now the methods of file f , which is independent of the interface differences in the two branches, were changed in branch B , but the change is important for both branches. Thus, I need to merge just file f of branch B into file f of branch A . A simple command already solved the problem for me if I assume that all changes are committed in both branches A and B : git checkout Agit checkout --patch B f The first command switches into branch A , into where I want to merge B 's version of the file f . The second command patches the file f with f of HEAD of B . You may even accept/discard single parts of the patch. Instead of B you can specify any commit here, it does not have to be HEAD . Community edit : If the file f on B does not exist on A yet, then omit the --patch option. Otherwise, you'll get a "No Change." message. | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/10784530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/340457/']} | jdg_382539 |
stackexchange | llm_judgeable_groundtruth_similarity | 42092567 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just switched from Sublime Text to Atom in order to go completely open source. I'm having trouble with something very, very simple: I want Atom to always (!) and under any circumstances use a tab width of 2 and replace tabs with spaces. This setting is so simple in gedit or Sublime Text, but no matter what I try: when I start a new file, the tab size is 2 (good!); when I open an existing file, the tab size is sometimes 4. I find that a bit annoying. My current settings in the Editor section are shown in the screenshot:
Now provide the response and nothing else.
| There is more than one tab setting Each package (such as python-language ) has its own tab setting(s). Whether the language uses the global default or its own default is up to whoever created the package, but you can generally override it. In your screenshot, you have set the "Tab Type" to "soft". That will take care of using spaces rather than tabs. You have left the default tab width of 2. That is your global setting. Now, if you look under "Packages" and search for "python" you will find a package named "language-python". Click on its settings button and you will find a number of syntax-specific settings. Python Grammar Python Console Grammar Python Traceback Grammar Regular Expressions (Python) Grammar Each of those grammars has its own Tab Length setting. You can set them explicitly to 2 here to override the package's default. (You probably mostly care about the first one, Python Grammar.) Python is different In the case of Python, the package is explicitly configured to default to 4 spaces, probably because Python is very opinionated about whitespace, and PEP 8 recommends 4-space indents. You can see the default package setting here in the package's source: https://github.com/atom/language-python/blob/master/settings/language-python.cson 'autoIndentOnPaste': false'softTabs': true'tabLength': 4 This overrides the global default. That's why Python Grammar does not honor the global tab width, the way that most packages do. Sometimes there are package overrides Additionally, certain packages will override your settings for syntax reasons. For example, language-make will override and use real tabs instead of spaces, because that is required by make. In the case of Python, there is an override to use spaces. The language-python settings page offers a spot for you to change the indentation level, but it does not offer a way to switch to using tab characters. (That's probably justifiable, as tab characters and mixed indentation in Python are a very common cause of difficult-to-debug syntax errors.) You might need to reload Lastly, sometimes settings don't take effect completely until you reload the Atom window. You can use the Window: Reload command to do so. Or using the keyboard: Mac: Ctrl Opt Cmd L Windows/Linux: Ctrl Alt R | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42092567', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1318134/']} | jdg_382540 |
Subsets and Splits