[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 65518787]
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list of dictionaries. I wish to have only one result for each unique api, and the result needs to be chosen according to the priority 0, 1, 2. May I know how I should work on it?

Data:

```python
[{'api': 'test1', 'result': 0},
 {'api': 'test2', 'result': 1},
 {'api': 'test3', 'result': 2},
 {'api': 'test3', 'result': 0},
 {'api': 'test3', 'result': 1}]
```

Expected output:

```python
[{'api': 'test1', 'result': 0},
 {'api': 'test2', 'result': 1},
 {'api': 'test3', 'result': 0}]
```
Now provide the response and nothing else.
Assuming that input data, you can do a classic SQL-ish group-by:

```python
from itertools import groupby

# in case your data is already sorted by api, skip the line below
data = sorted(data, key=lambda x: x['api'])

res = [
    {'api': g, 'result': min(v, key=lambda x: x['result'])['result']}
    for g, v in groupby(data, lambda x: x['api'])
]
```

Outputs:

```python
[{'api': 'test1', 'result': 0}, {'api': 'test2', 'result': 1}, {'api': 'test3', 'result': 0}]
```

metadata: {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65518787', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2324659/']} | problem_id: jdg_381341
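For completeness, the accepted approach can be run end-to-end against the question's sample data (a self-contained sketch using only the standard library):

```python
from itertools import groupby

data = [
    {'api': 'test1', 'result': 0},
    {'api': 'test2', 'result': 1},
    {'api': 'test3', 'result': 2},
    {'api': 'test3', 'result': 0},
    {'api': 'test3', 'result': 1},
]

# groupby only groups *consecutive* items, so sort by the grouping key first
data = sorted(data, key=lambda x: x['api'])

res = [
    {'api': g, 'result': min(v, key=lambda x: x['result'])['result']}
    for g, v in groupby(data, key=lambda x: x['api'])
]
print(res)
```

The sort is essential: without it, repeated `api` keys that are not adjacent would produce multiple groups for the same key.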
[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 55515128]
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I suspect it's not possible to do what I'm looking for, but it's worth a shot! I have a pipeline for provisioning Azure log query alert rules. The individual alert rules are defined as ARM parameter files, and I use a shared ARM template file to deploy them. Here's a stripped-down version of my template file with most of the parameters omitted.

```json
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logQuery": {
      "type": "string",
      "minLength": 1,
      "metadata": {
        "description": "Query to execute against the AI resource"
      }
    }
  },
  "variables": {
    "appInsightsResourceId": "[concat(resourceGroup().id,'/providers/','microsoft.insights/components/', parameters('appInsightsResourceName'))]",
    "actionGroupId": "[concat(resourceGroup().id,'/providers/','microsoft.insights/actionGroups/', parameters('actionGroupName'))]",
    "linkToAiResource": "[concat('hidden-link:', variables('appInsightsResourceId'))]"
  },
  "resources": [{
    "name": "[parameters('alertName')]",
    "type": "Microsoft.Insights/scheduledQueryRules",
    "location": "northeurope",
    "apiVersion": "2018-04-16",
    "tags": {
      "[variables('linkToAiResource')]": "Resource"
    },
    "properties": {
      "description": "[parameters('alertDescription')]",
      "enabled": "[parameters('isEnabled')]",
      "source": {
        "query": "[parameters('logQuery')]",
        "dataSourceId": "[variables('appInsightsResourceId')]",
        "queryType": "[parameters('logQueryType')]"
      },
      "schedule": {
        "frequencyInMinutes": "[parameters('alertSchedule').Frequency]",
        "timeWindowInMinutes": "[parameters('alertSchedule').Time]"
      },
      "action": {
        "odata.type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction",
        "severity": "[parameters('alertSeverity')]",
        "aznsAction": {
          "actionGroup": "[array(variables('actionGroupId'))]"
        },
        "trigger": {
          "thresholdOperator": "[parameters('alertTrigger').Operator]",
          "threshold": "[parameters('alertTrigger').Threshold]"
        }
      }
    }
  }]
}
```

You can see how I'm providing the App Insights query as a parameter, so my parameters file could look something like:

```json
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logQuery": {
      "value": "requests | where resultCode >= 500"
    }
  }
}
```

However, these queries can be very long and hard to understand when viewed as an unbreakable JSON string. So I want to parametize this parameter (if you know what I mean) so that the key variables are defined and supplied separately. I was thinking about changing the parameters to something like this, introducing a new parameter holding an array of placeholder replacements for the parametized query...

```json
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "logQueryVariables": [
      { "{minCode}": "500" }
    ],
    "logQuery": {
      "value": "requests | where resultCode >= {minCode}"
    }
  }
}
```

...then finding a way to iterate over the variables array and replace the placeholders in the logQuery parameter. I thought maybe I could use an ARM function or something, but I'm afraid to admit I'm stuck with this part. Is it possible to use the copy syntax to do something like this?
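To make the desired transformation concrete: the placeholder substitution could equally be done in the pipeline before the ARM deployment. A minimal Python sketch of that pre-processing step (the file layout is the proposed parameters shape above; the function name is illustrative only):

```python
def render_query(template, variables):
    """Replace {placeholder} tokens in a query template with their values."""
    for placeholder, value in variables.items():
        template = template.replace(placeholder, value)
    return template

# Shape taken from the proposed parameters file above
params = {
    "logQueryVariables": [{"{minCode}": "500"}],
    "logQuery": {"value": "requests | where resultCode >= {minCode}"},
}

# Flatten the array of single-entry mappings into one dict
merged = {}
for mapping in params["logQueryVariables"]:
    merged.update(mapping)

rendered = render_query(params["logQuery"]["value"], merged)
print(rendered)  # requests | where resultCode >= 500
```

The pipeline would then write `rendered` back into `logQuery.value` before invoking the deployment.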
Now provide the response and nothing else.
Over the years, many people have fallen into doubt about this, as we can see in the related questions in the sidebar. The accepted answers range from "for sure do it" to "it's clearly mistreating the protocol". Since many questions were asked years ago, let's dig into the HTTP 1.1 specification from June 2014 (RFC 7231) for a better understanding of what's clearly discouraged or not.

**The first proposed workaround:**

First, about resources and the URI itself, in Section 2:

> The target of an HTTP request is called a "resource". HTTP does not limit the nature of a resource; it merely defines an interface that might be used to interact with resources. Each resource is identified by a Uniform Resource Identifier (URI).

Based on this, some may argue that since HTTP does not limit the nature of a resource, a URI containing more than one id would be possible. I personally believe it's a matter of interpretation here.

About your first proposed workaround (`DELETE '/path/abc?itemId=1&itemId=2&itemId=3'`), we can conclude that it's discouraged if you think of a resource as a single document in your entity collection, while being good to go if you think of a resource as the entity collection itself.

**The second proposed workaround:**

About your second proposed workaround (`POST '/path/abc'` with body `{ ids: [1, 2, 3] }`), using the POST method for deletion could be misleading. Section 4.3.3 says about POST:

> The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics. For example, POST is used for the following functions (among others): Providing a block of data, such as the fields entered into an HTML form, to a data-handling process; Posting a message to a bulletin board, newsgroup, mailing list, blog, or similar group of articles; Creating a new resource that has yet to be identified by the origin server; and Appending data to a resource's existing representation(s).

While there's some room for interpretation about "among others" functions for POST, it clearly conflicts with the fact that we have the DELETE method for resource removal, as we can see in Section 4.1:

> The DELETE method removes all current representations of the target resource.

So I personally strongly discourage the use of POST to delete resources.

**An alternative workaround:**

Inspired by your second workaround, I'd suggest one more:

`DELETE '/path/abc'` with body `{ ids: [1, 2, 3] }`

It's almost the same as proposed in workaround two, but instead uses the correct HTTP method for deletion. Here we arrive at the confusion about using an entity body in a DELETE request. There are many people out there stating that it isn't valid, but let's stick with Section 4.3.5 of the specification:

> A payload within a DELETE request message has no defined semantics; sending a payload body on a DELETE request might cause some existing implementations to reject the request.

So we can conclude that the specification doesn't prevent DELETE from having a body payload. Unfortunately, some existing implementations could reject the request... But how does this affect us today? It's hard to be 100% sure, but a modern request made with `fetch` just doesn't allow a body for GET and HEAD. It's what the Fetch Standard states at Section 5.3, Item 34:

> If either body exists and is non-null or inputBody is non-null, and request's method is GET or HEAD, then throw a TypeError.

And we can confirm it's implemented the same way in the fetch polyfill at line 342.

**Final thoughts:**

Since the alternative workaround with DELETE and a body payload is left viable by the HTTP specification, and is supported by all modern browsers with `fetch` and since IE10 with the polyfill, I recommend this way to do batch deletes in a valid and fully working way.

metadata: {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55515128', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/846117/']} | problem_id: jdg_381342
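As an illustration of the recommended DELETE-with-body approach, here is a minimal sketch that builds such a request with Python's standard library (the URL and payload mirror the hypothetical examples above; nothing is actually sent):

```python
import json
import urllib.request

payload = json.dumps({"ids": [1, 2, 3]}).encode("utf-8")

req = urllib.request.Request(
    "https://example.com/path/abc",
    data=payload,  # an entity body on DELETE is not forbidden by RFC 7231
    method="DELETE",
    headers={"Content-Type": "application/json"},
)

# urllib.request.urlopen(req) would send it; as the spec warns,
# some existing server implementations may still reject a DELETE body.
print(req.get_method(), req.data)
```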
[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 1472281]
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $v: \mathbb{R} \to \mathbb{R}$ be an increasing, convex function. For any $t>0$ I want to show that for all $x_{1} \leq x_{2}$ we have: $$v(x_{1}+t) - v(x_{1}) \leq v(x_{2} +t) - v(x_{2})$$ This of course can be illustrated heuristically if $v$ is twice differentiable. But I am trying to show this from the definition of a convex function and by the fact that $v$ is increasing, but I am just moving in circles. I am pretty sure this result is true, and I need it to finish a proof I am working on. Any suggestions will help.
Now provide the response and nothing else.
Since $t > 0$ and $x_1 \le x_2$, both $x_1+t$ and $x_2$ lie in the interval $[x_1,\, x_2+t]$, so from the definition of convexity we have $$ v(x_1+t) \le \left(\frac{x_2-x_1}{x_2-x_1+t}\right)v(x_1) + \left(\frac{t}{x_2-x_1+t}\right)v(x_2+t) $$ and $$ v(x_2) \le \left(\frac{t}{x_2-x_1+t}\right)v(x_1) + \left(\frac{x_2-x_1}{x_2-x_1+t}\right)v(x_2+t).$$ Adding up the inequalities gives the desired inequality.

metadata: {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1472281', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/144128/']} | jdg_381343
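To spell out the final step of the answer above: the coefficients on each right-hand side sum to $1$, so adding the two inequalities gives

$$ v(x_1+t) + v(x_2) \;\le\; \frac{(x_2-x_1)+t}{x_2-x_1+t}\,v(x_1) + \frac{t+(x_2-x_1)}{x_2-x_1+t}\,v(x_2+t) \;=\; v(x_1) + v(x_2+t), $$

which rearranges to $v(x_1+t) - v(x_1) \le v(x_2+t) - v(x_2)$. Notably, the monotonicity of $v$ is never used; convexity alone suffices.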
[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 401491]
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
One question that popped up during my studies of special and general relativity (which I am forced to take, unfortunately) is the following: how do we know that this is due to the bending of space-time and not just plain old 3rd-semester diffraction? If you find this a silly question, downvote; otherwise consider the following picture. The Sun is the straightedge, the screen is the Earth, and the star is the point source. This is simplified to a great extent, but the idea still holds (I think).
Now provide the response and nothing else.
We know this because the position of the apparent star perfectly matches the GR calculations about bent spacetime, which depend on a few things including the mass of the star (the one in between that bends spacetime; in your case, the Sun). What you are describing, interference, would not depend in the same way on the mass, the density, the stress-energy, and the other quantities through which GR describes bent spacetime. There have been numerous calculations and experiments, like the Shapiro test, and they all gave numbers perfectly matching GR. Interference would not depend on the same things; for example, interference would react differently to the size/mass ratio or density of the star, whereas in GR it really matters what your star's energy density is compared to its size. What interference pattern would a black hole produce, for example? I believe that interference would not even work with a black hole.

metadata: {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/401491', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/142097/']} | jdg_381344
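For scale, the weak-field GR prediction referred to above is the classic grazing-ray deflection $\theta = 4GM/(c^2 R)$. A quick numeric check with standard values for the Sun (the constants below are textbook values, not taken from the answer):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s
R_sun = 6.963e8    # solar radius, m (impact parameter of a grazing ray)

theta_rad = 4 * G * M_sun / (c**2 * R_sun)
theta_arcsec = theta_rad * (180 / 3.141592653589793) * 3600
print(round(theta_arcsec, 2))  # ~1.75 arcseconds, the famous Eddington-expedition value
```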
[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 3381801]
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that it's strongly recommended to run unit tests in isolation from the file system, because if you touch the file system in your test, you also test the file system itself. OK, that's reasonable. My question is: if I want to test saving a file to disk, what do I do? As with a database, do I separate out an interface that is responsible for the access, and then create another implementation of it for my tests? Or maybe there's some other way?
Now provide the response and nothing else.
My approach towards this is heavily biased by the Growing Object-Oriented Software Guided by Tests (GOOS) book that I just read, but it's the best that I know of today. Specifically:

- Create an interface to abstract away the file system from your code. Mock it where this class is needed as a collaborator/dependency. This keeps your unit tests quick and the feedback fast.
- Create integration tests that test the actual implementation of the interface, i.e. verify that calling `Save()` actually persists a file to disk and has the right contents (use a reference file, or parse it for a few things that it should contain).
- Create an acceptance test that tests the whole system, end to end. Here you may just verify that a file is created; the intent of this test is to confirm that the real implementation is wired/plugged in correctly.

Update for commenter:

If you're reading structured data, e.g. `Book` objects (if not, substitute `string` for `IEnumerable`):

```csharp
interface BookRepository
{
    IEnumerable<Books> LoadFrom(string filePath);
    void SaveTo(string filePath, IEnumerable<Books> books);
}
```

Now you can use constructor injection to inject a mock into the client class. The client class unit tests are therefore fast; they do not hit the filesystem. They just verify that the right methods are called on the dependencies (e.g. Load/Save):

```csharp
var testSubject = new Client(new Mock<BookRepository>().Object);
```

Next you need to create the real implementation of `BookRepository` that works off a file (or a SQL DB tomorrow, if you want it). No one else has to know. Write integration tests for `FileBasedBookRepository` (which implements the above role) and test that calling `Load` with a reference file gives the right objects, and that calling `Save` with a known list persists them to the disk, i.e. uses real files. These tests would be slow, so mark them with a tag or move them to a separate suite:

```csharp
[TestFixture]
[Category("Integration - Slow")]
public class FileBasedBookRepository
{
    [Test]
    public void CanLoadBooksFromFileOnDisk() { ... }

    [Test]
    public void CanWriteBooksToFileOnDisk() { ... }
}
```

Finally, there should be one or more acceptance tests that exercise `Load` and `Save`.

metadata: {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3381801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28298/']} | jdg_381345
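The same mock-the-boundary idea carries over directly to other stacks; a minimal Python sketch using the standard library's unittest.mock (the `Library` class and its method names are invented purely for illustration):

```python
from unittest.mock import Mock

class Library:
    """Client code that depends on a repository abstraction, not on the filesystem."""
    def __init__(self, repo):
        self.repo = repo

    def archive(self, path, books):
        # Business logic under test: sort before delegating persistence
        self.repo.save_to(path, sorted(books))

# Unit test: no disk involved, the repository is a mock
repo = Mock()
Library(repo).archive("books.txt", ["b", "a"])
repo.save_to.assert_called_once_with("books.txt", ["a", "b"])
print("unit test passed without touching the filesystem")
```

A real `FileBasedRepository` implementing `save_to` would then be covered separately by slower integration tests, exactly as in the C# layout above.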
[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 14077413]
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two "lein" projects with exactly the same piece of code in two test files:

```clojure
... clojure.set/union ...
```

The first project compiles and runs successfully. The second one errors out with:

```
Exception in thread "main" java.lang.ClassNotFoundException: clojure.set, compiling:(foo/bar.clj:14)
...
Caused by: java.lang.ClassNotFoundException: clojure.set
...
```

I can only make the second project run if I add an explicit `:use` in `bar.clj`:

```clojure
(:use clojure.set)
```

The first project doesn't have this statement, though. Why the different behaviours?

UPDATE: The first project, where the exception is not happening, has the following declaration in `project.clj`:

```clojure
:eval-in-leiningen true
```
Now provide the response and nothing else.
(The answer from Blackhex raised some interesting points, but I can't cram all this into a comment.)

Having two threads operating asynchronously is bound to lead to issues like this. Look at it this way: the event that drives animation is the hardware "vsync" signal, i.e. the point at which the Android surface compositor provides a new screen full of data to the display hardware. You want to have a new frame of data whenever vsync arrives. If you don't have new data, the game looks choppy. If you generated 3 frames of data in that period, two will be ignored, and you're just wasting battery life. (Running a CPU full out may also cause the device to heat up, which can lead to thermal throttling, which slows everything in the system down... and can make your animation choppy.)

The easiest way to stay in sync with the display is to perform all of your state updates in `onDrawFrame()`. If it sometimes takes longer than one frame to perform your state updates and render the frame, then you're going to look bad and need to modify your approach.

Simply shifting all game state updates to a second core isn't going to help as much as you might like: if core #1 is the renderer thread and core #2 is the game state update thread, then core #1 is going to sit idle while core #2 updates the state, after which core #1 will resume to do the actual rendering while core #2 sits idle, and it's going to take just as long. To actually increase the amount of computation you can do per frame, you'd need to have two (or more) cores working simultaneously, which raises some interesting synchronization issues depending on how you define your division of labor (see http://developer.android.com/training/articles/smp.html if you want to go down that road).

Attempting to use `Thread.sleep()` to manage the frame rate generally ends badly. You can't know how long the period between vsync is, or how long until the next one arrives. It's different for every device, and on some devices it may be variable. You essentially end up with two clocks, vsync and sleep, beating against each other, and the result is choppy animation. On top of that, `Thread.sleep()` doesn't make any specific guarantees about accuracy or minimum sleep duration.

I haven't really gone through the Replica Island sources, but in `GameRenderer.onDrawFrame()` you can see the interaction between their game state thread (which creates a list of objects to draw) and the GL renderer thread (which just draws the list). In their model, the game state only updates as needed, and if nothing has changed it just re-draws the previous draw list.

This model works well for an event-driven game, i.e. one where the contents on screen update when something happens (you hit a key, a timer fires, etc.). When an event occurs, they can do a minimal state update and adjust the draw list as appropriate. Viewed another way, the render thread and the game state work in parallel because they're not rigidly tied together. The game state just runs around updating things as needed, and the render thread locks it down every vsync and draws whatever it finds. So long as neither side keeps anything locked up for too long, they don't visibly interfere. The only interesting shared state is the draw list, guarded with a mutex, so their multi-core issues are minimized.

For Android Breakout (http://code.google.com/p/android-breakout/), the game has a ball bouncing around, in continuous motion. There we want to update our state as frequently as the display allows us to, so we drive the state change off of vsync, using a time delta from the previous frame to determine how far things have advanced. The per-frame computation is small, and the rendering is pretty trivial for a modern GL device, so it all fits easily in 1/60th of a second. If the display updated much faster (240Hz) we might occasionally drop frames (again, unlikely to be noticed) and we'd be burning 4x as much CPU on frame updates (which is unfortunate).

If for some reason one of these games missed a vsync, the player may or may not notice. The state advances by elapsed time, not a pre-set notion of a fixed-duration "frame", so e.g. the ball will either move 1 unit on each of two consecutive frames, or 2 units on one frame. Depending on the frame rate and the responsiveness of the display, this may not be visible. (This is a key design issue, and one that can mess with your head if you envisioned your game state in terms of "ticks".)

Both of these are valid approaches. The key is to draw the current state whenever `onDrawFrame` is called, and to update state as infrequently as possible.

Note for anyone else who happens to read this: don't use `System.currentTimeMillis()`. The example in the question used `SystemClock.uptimeMillis()`, which is based on the monotonic clock rather than wall-clock time. That, or `System.nanoTime()`, are better choices. (I'm on a minor crusade against `currentTimeMillis`, which on a mobile device could suddenly jump forward or backward.)

Update: I wrote an even longer answer to a similar question.

Update 2: I wrote an even longer answer about the general problem (see Appendix A).

metadata: {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14077413', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/586240/']} | jdg_381346
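The "advance by elapsed time, not by ticks" point can be sketched in a few lines (Python purely for illustration; the variable names are mine):

```python
def advance(pos, velocity, dt):
    """Move by elapsed time, so a dropped frame simply means a larger dt."""
    return pos + velocity * dt

# One 1/30 s frame covers (within floating-point error) the same distance
# as two 1/60 s frames, so a missed vsync does not change the trajectory:
one_long_frame = advance(0.0, 60.0, 1 / 30)
two_short_frames = advance(advance(0.0, 60.0, 1 / 60), 60.0, 1 / 60)
print(round(one_long_frame, 6), round(two_short_frames, 6))
```

With a fixed-duration "tick" model, by contrast, a missed frame either stalls the simulation or forces you to run catch-up ticks.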
[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 98130]
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently I've come across a few papers from China (e.g. Xiang-Yao Wu et al., arXiv:1212.4028v1, 14 Dec 2012) that make the following statement:

> ...any quantity which transforms linearly under Lorentz transformations is a spinor.

It's my understanding that e.g. a 4-momentum vector also transforms linearly under a Lorentz transformation. Is the first statement simply false, or should one take it to be true in the sense that a 4-vector is capable of being written in spinor notation? Perhaps the first statement might be a confusion between Lorentz transformations and spin matrices? In the chapter on spinors in Misner, Thorne and Wheeler's Gravitation (p. 1148) they show that while a vector transforms under a spin matrix (aka rotation operator / quaternion / spinor transformation) as: $$X \to X' = RXR^*,$$ a quantity that transforms as $$\xi \to \xi' = R\xi$$ is known as a spinor. Advice please?

UPDATE: After further web-searching, I've come across references making statements that seem to throw some light on the issue:

1. Andrew Steane's recent and very readable (to such tyros as myself) "An introduction to spinors" (http://arxiv.org/abs/1312.3824, 13 Dec 2013), in which he writes (p. 1, 2nd para): "... One could say that a spinor is the most basic sort of mathematical object that can be Lorentz-transformed." (But see (3), here below.)

2. I've also now traced back the original quotation, repeated word-for-word, through a number of earlier papers (Chinese and Russian) to V. V. Varlamov, arXiv:math-ph/0310051v1 (2003), in which he cites, as do all the later papers, one of the earliest writers on spinors, B. L. van der Waerden, Nachr. d. Ges. d. Wiss. Göttingen, 100 (1929). Varlamov also wrote a densely mathematical and well-referenced paper "Clifford Algebras and Lorentz Group" (math-ph/0108022, 2001), which inclines me to give more credence to the original statement, even though it was parroted by a number of later authors.

3. However, it appears that Dirac himself suggested an even more general entity than the spinor: "A new kind of quantity with components which transform linearly under Lorentz transformations must be introduced, and I call it an expansor. It is rather more general than a tensor or a spinor in that the number of its components is infinite, but enumerable." P. 1205, section 1946:1 Developments in Quantum Electrodynamics (p. 21 of the section), The Collected Works of P. A. M. Dirac: 1924-1948, by P. A. M. Dirac, volume editor Richard Henry Dalitz, Cambridge University Press, 26 Oct 1995.

4. Further, on p. 1163 of the above, Dirac notes that "the present theory of expansors applies, of course, only to integral spins, but probably it will be possible to set up a corresponding theory of two-valued representations of the Lorentz group, which will apply to half odd integral spins." These two-valued entities were subsequently supplied by Harish-Chandra, who called them "expinors" ("Infinite Irreducible Representations of the Lorentz Group", published 1 May 1947, doi:10.1098/rspa.1947.0047, Proc. R. Soc. Lond. A, vol. 189, no. 1018, pp. 372-401).

I haven't access to van der Waerden's classic, so cannot check whether he made the claim referenced by Varlamov (but I have little reason to doubt him).

So what can I conclude? The subsequent work by Dirac and Harish-Chandra seems to invalidate the van der Waerden claim, as later cited by Varlamov, at least for infinite-dimensional IRs of the Lorentz group. So perhaps the queried definition does apply fairly generally, but not universally? If so, it would be good to have an expert clarify the distinction.

PS Gratitude to Qmechanics for tidying up the original posting.
Now provide the response and nothing else.
| In many contexts, we would like to determine how Lorentz transformations act on the mathematical objects that characterize a particular theory. In the case of classical, Lorentz-invariant field theories on Minkowski space for example, we need to specify how Lorentz transformations act on the fields of the theory. This leads naturally to determining how Lorentz transformations can act on both Minkowski space, and the target spaces of the fields. This, in turn, leads naturally to the notion of a finite-dimensional representation of the Lorentz group. On the other hand, in quantum mechanics, and by extension in quantum field theory, we often want to specify how Lorentz transformations act on the Hilbert space of the theory. In this context, Wigner's theorem in symmetries in quantum mechanics demands that up to phase, Lorentz transformations as unitary or anti-unitary operators on the Hilbert space. In turn, the fact that these transformations are only defined up to phase implies that one, in general, needs to consider the projective representations of the Lorentz group in addition to its "ordinary" representations. Now, it turns out that determining the projective representations of the Lorentz group $\mathrm{SO}(3,1)^+$ is equivalent to determining the ordinary representations of its universal cover, which is called $\mathrm{Spin}(3,1)$! It's a spin group ! In fact, for any $p,q$, the group $\mathrm{SO}(p,q)^+$ of isometries of the space $\mathbb R^{p,q}$ has universal cover $\mathrm{Spin}(p,q)$. Therefore, determining all projective representations of the Lorentz group is equivalent to determining the ordinary representations of the corresponding spin group. 
It is overwhelmingly likely, in my opinion, that this is what the authors are referring to in their quote, because the "objects that transform linearly under Lorentz transformations" that we consider in physics are precisely those objects that transform under projective representations of the Lorentz group (ordinary representations are included as a subclass), these are precisely those objects that transform under ordinary representations of spin groups, and such objects are called spinors.

By the way, you'll probably find the following related post illuminating: https://physics.stackexchange.com/a/96060/19976

**Old, incomplete answer.**

It's hard to know the intentions of the authors for certain, but here's some information that might help interpret what they're saying. Recall that every representation of the Lorentz algebra $\mathfrak{so}(3,1)$ can be constructed from representations of $\mathfrak{sl}(2,\mathbb C)$, the complexified angular momentum algebra (which is of course the algebra whose representations describe spin). The standard way of doing this is to note that if one complexifies the Lorentz algebra, then one finds that the complexification yields a direct sum of $\mathfrak{sl}(2,\mathbb C)$ with itself:
\begin{align} \mathfrak{so}(3,1)_\mathbb C \approx \mathfrak{sl}(2,\mathbb C)\oplus \mathfrak{sl}(2,\mathbb C).\end{align}
It follows that the representation theory of the Lorentz algebra reduces to the representation theory of the angular momentum algebra. In fact, every irreducible representation of the Lorentz algebra is essentially a tensor product of two irreducible representations of the angular momentum algebra, and these representations are often labeled by a pair $(s_1, s_2)$ of "spins" $s_1, s_2\in\{0,\frac{1}{2}, 1, \dots\}$. For example, the $(\frac{1}{2},0)$ representation is called the left-handed Weyl spinor representation, and the $(0,\frac{1}{2})$ representation is called the right-handed Weyl spinor representation.
The vector representation that you refer to, namely the standard representation that transforms a four-vector by a Lorentz transformation $\Lambda\in \mathrm{SO}(3,1)^+$ by mapping $V^\mu$ to $\Lambda^\mu_{\phantom\mu\nu} V^\nu$, corresponds to the $(\frac{1}{2}, \frac{1}{2})$ representation. So it is in a sense true, as you say, that "a 4-vector is capable of being written in spinor notation." It's also the case that any finite-dimensional representation of the Lorentz algebra can be written as a direct sum of the irreducible representations $(s_1, s_2)$, so in a very real sense, all of the finite-dimensional objects that Lorentz-transform can be "built" out of "spin" representations, namely representations of the angular momentum algebra.

metadata: {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/98130', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/40110/']} | jdg_381347
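The claim that a 4-vector "can be written in spinor notation" can be made concrete with the standard map $V^\mu \mapsto X = V^0 I + V^1\sigma_1 + V^2\sigma_2 + V^3\sigma_3$, whose determinant equals the Minkowski norm $V\cdot V$, so that $X \mapsto AXA^\dagger$ with $\det A = 1$ implements a Lorentz transformation. A quick numeric check of the determinant identity (the sample 4-vector below is arbitrary):

```python
# Pauli matrices (plus the identity) as nested lists of complex numbers
I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def build_X(V):
    """Map a 4-vector V = (t, x, y, z) to the 2x2 Hermitian matrix V^mu sigma_mu."""
    mats = [I2, s1, s2, s3]
    return [[sum(V[k] * mats[k][i][j] for k in range(4)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

V = [2.0, 0.3, -1.1, 0.5]  # arbitrary sample (t, x, y, z)
minkowski = V[0]**2 - V[1]**2 - V[2]**2 - V[3]**2
print(abs(det2(build_X(V)) - minkowski) < 1e-12)  # True: det X = V . V
```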
[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 502213]
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $p$ is an odd prime. Show that $x^4 \equiv-1$ (mod $p$) has a solution if and only if $p \equiv1$ (mod $8$). I have proven one similar result: '$x^2 \equiv-1$ (mod $p$) has a solution if and only if $p \equiv1$ (mod $4$)'. I tried to mimic the proof but I failed. The following is my attempt: Suppose $x^4 \equiv-1$ (mod $p$) has a solution, say $a$. Then $a^4 \equiv -1$ (mod $p$). The congruence implies that $\gcd(a^4,p)=1 \Rightarrow \gcd(a,p)=1$. By Fermat's little theorem, $a^{p-1} \equiv 1$ (mod $p$). Note that $1 \equiv a^{p-1} \equiv a^{4{\frac{p-1}{4}}} \equiv (-1)^{\frac{p-1}{4}}$ (mod $p$). But I don't know whether $\frac{p-1}{4}$ is an integer or not. If it is not an integer, then the congruence does not hold. Can anyone guide me?

EDIT: Since $a^4 \equiv -1$ (mod $p$), we have $a^8 \equiv 1$ (mod $p$) $\Rightarrow$ $\mathrm{ord}_p(a)=8 \Rightarrow 8 \mid \phi(p) \Rightarrow p \equiv 1$ (mod $8$). Conversely, suppose $p \equiv 1$ (mod $8$). Then there exists a primitive root for $p$, say $r$. Then we have $r^{p-1} \equiv (r^{4})^{\frac{p-1}{4}} \equiv 1$ (mod $p$). Hence there is a solution to $x^4 \equiv-1$ (mod $p$). Is my proof in the edit correct?
Now provide the response and nothing else.
Hint: Note that $x^4\equiv -1\mod p$ has a solution if and only if there is an element $x$ of order $8$ in $(\mathbb{Z}/p\mathbb{Z})^\times$.

metadata: {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/502213', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/54398/']} | jdg_381348
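The claimed equivalence can be brute-force checked over small odd primes (standard-library Python only):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def has_fourth_root_of_minus_one(p):
    """True iff x^4 = -1 (mod p) has a solution."""
    return any(pow(x, 4, p) == p - 1 for x in range(1, p))

for p in (n for n in range(3, 200, 2) if is_prime(n)):
    assert has_fourth_root_of_minus_one(p) == (p % 8 == 1), p

print("verified for all odd primes below 200")
```

For instance, $p = 17 \equiv 1 \pmod 8$ works via $2^4 = 16 \equiv -1 \pmod{17}$, while no solution exists for $p = 7$.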
[source: stackexchange | task_type: llm_judgeable_groundtruth_similarity | in_source_id: 2639960]
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
System.Diagnostics.Contracts.ContractException is not accessible in my test project. Note this code is purely myself messing around with my shiny new copy of Visual Studio, but I'd like to know what I'm doing wrong. I'm using the professional edition of VS, therefore I do not have static checking. In order to still use code contracts (which I like) I figured the only way my method can work is to catch the exception that is thrown at runtime, but I'm not finding this possible. TestMethod [TestMethod, ExpectedException(typeof(System.Diagnostics.Contracts.ContractException))]public void returning_a_value_less_than_one_throws_exception(){ var person = new Person(); person.Number();} Method public int Number(){ Contract.Ensures(Contract.Result<int>() >= 0); return -1;} Error Error 1 'System.Diagnostics.Contracts.ContractException' is inaccessible due to its protection level. Edit After some more thought I've come to the conclusion discussed in the comments, as well as the following. Given a method, if this had a requirement which could be expressed in Code Contract form, I'd write tests as such. [TestMethod][ExpectedException(typeof(ArgumentException))]public void value_input_must_be_greater_than_zero(){ // Arrange var person = new Person(); // Act person.Number(-1);} This would ensure the contract is part of the code, and will not be removed. This would require the Code Contract to actually throw the specified exception however. In some cases this would not be required however.
Now provide the response and nothing else.
| This is deliberate - although a slight pain for testing. The point is that in production code you should never want to catch a contract exception; it indicates a bug in your code, so you shouldn't be expecting that any more than arbitrary unexpected exceptions which you may want to catch right at the top of your call stack so you can move onto the next request. Basically you shouldn't view contract exceptions as ones which can be "handled" as such. Now, for testing that's a pain... but do you really want to test your contracts anyway? Isn't that a bit like testing that the compiler stops you from passing in a string to a method which has an int parameter? You've declared the contract, it can be documented appropriately, and enforced appropriately (based on settings, anyway). If you do want to test contract exceptions, you can either catch a bare Exception in the test and check its full name, or you can mess around with the Contract.ContractFailed event. I would expect unit testing frameworks to have built-in support for this over time - but it'll take a little while to get there. In the meantime you probably want to have a utility method to expect a contract violation. One possible implementation: const string ContractExceptionName = "System.Diagnostics.Contracts.__ContractsRuntime.ContractException";public static void ExpectContractFailure(Action action){ try { action(); Assert.Fail("Expected contract failure"); } catch (Exception e) { if (e.GetType().FullName != ContractExceptionName) { throw; } // Correct exception was thrown. Fine. }} | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2639960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102482/']} | jdg_381349 |
stackexchange | llm_judgeable_groundtruth_similarity | 243240 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Any number that has a finite representation in the binary system has a finite representation in the decimal system. Why?
Now provide the response and nothing else.
| To elaborate: Again, in this question, you can use what you established in your earlier post: A real number has a finite representation in the binary system if and only if it is of the form $$ \pm \frac{m}{2^n} \text{ where } n \text{ and } m \text{ are positive integers.}$$ Likewise, a real number has a finite representation in the decimal system if and only if it is of the form $$ \pm \frac{k}{10^n} \text{ where } k, n \text{ are positive integers.}$$ Noting that $$\pm \frac{m}{2^n} = \pm \frac{5^n m}{5^n 2^n} = \pm \frac{5^n m}{10^n} = \pm \frac{k}{10^n}, \quad\text{with}\quad k = 5^n m \text{ and } m, n \in \mathbb{Z},\; m>0,\; n>0,$$ we conclude that any number that has a finite representation in the binary system also has a finite representation in the decimal system. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/243240', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33372/']} | jdg_381350 |
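The displayed identity can be checked mechanically; here is a small illustrative Python routine (my sketch, not from the answer) that prints the terminating decimal expansion of $m/2^n$ via $m/2^n = 5^n m/10^n$:

```python
def finite_decimal(m, n):
    """Decimal expansion of m / 2**n, using m/2**n == (5**n * m) / 10**n."""
    k = 5 ** n * m          # numerator after multiplying by 5**n / 5**n
    if n == 0:
        return str(k)
    return f"{k // 10 ** n}.{k % 10 ** n:0{n}d}"

print(finite_decimal(3, 4))   # 3/16  -> 0.1875
print(finite_decimal(7, 3))   # 7/8   -> 0.875
```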
stackexchange | llm_judgeable_groundtruth_similarity | 722627 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
On Cloudflare, I have set up the following for example.com: an A record pointing to the Azure IP, and a www CNAME record to example.azurewebsites.net. Cloudflare routing is working for example.com and www.example.com. However, any request to example.azurewebsites.net is routed directly to the Azure network. How can I route requests to example.azurewebsites.net through Cloudflare? Essentially, Cloudflare DDoS protection is nullified if an attacker hits the origin example.azurewebsites.net. I am also aware Azure has a DDoS prevention capability.
Now provide the response and nothing else.
| The DNS settings for your website in Cloudflare allow you to proxy traffic so that requests to example.com and www.example.com will be protected. You do not control azurewebsites.net, though, so you cannot proxy traffic going directly to that address with Cloudflare, which means that someone can get around the Cloudflare protection by just going to example.azurewebsites.net. What you can do to protect against this is use the IP and Domain Restrictions feature of Azure Websites to restrict traffic so your website will only respond to requests coming through Cloudflare. Instructions for configuring your site are here: https://azure.microsoft.com/en-us/blog/ip-and-domain-restrictions-for-windows-azure-web-sites/ You can get the list of Cloudflare IPs here: https://www.cloudflare.com/ips/ | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/722627', 'https://serverfault.com', 'https://serverfault.com/users/262200/']} | jdg_381351 |
stackexchange | llm_judgeable_groundtruth_similarity | 16630789 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Python 3. I'm using Qt's file dialog widget to save PDFs downloaded from the internet. I've been reading the file using 'open', and attempting to write it using the file dialog widget. However, I've been running into a "TypeError: '_io.BufferedReader' does not support the buffer interface" error. Example code: with open('file_to_read.pdf', 'rb') as f1: with open('file_to_save.pdf', 'wb') as f2: f2.write(f1) This logic works properly with text files when not using the 'b' designator, or when reading a file from the web, like with urllib or requests. These are of the 'bytes' type, which I think I need to be opening the file as. Instead, it's opening as a BufferedReader. I tried bytes(f1), but get "TypeError: 'bytes' object cannot be interpreted as an integer." Any ideas?
Now provide the response and nothing else.
| If your intent is to simply make a copy of the file, you could use shutil >>> import shutil>>> shutil.copyfile('file_to_read.pdf','file_to_save.pdf') Or if you need to access byte by byte, similar to your structure, this works: >>> with open('/tmp/fin.pdf','rb') as f1:... with open('/tmp/test.pdf','wb') as f2:... while True:... b=f1.read(1)... if b: ... # process b if this is your intent ... n=f2.write(b)... else: break But byte by byte is potentially really slow . Or, if you want a buffer that will speed this up (without taking the risk of reading an unknown file size completely into memory): >>> with open('/tmp/fin.pdf','rb') as f1:... with open('/tmp/test.pdf','wb') as f2:... while True:... buf=f1.read(1024)... if buf: ... for byte in buf:... pass # process the bytes if this is what you want... # make sure your changes are in buf... n=f2.write(buf)... else:... break With Python 2.7+ or 3.1+ you can also use this shortcut (rather than using two with blocks): with open('/tmp/fin.pdf','rb') as f1,open('/tmp/test.pdf','wb') as f2: ... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16630789', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2225444/']} | jdg_381352 |
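The standard library can also do the buffered loop for you: shutil.copyfileobj copies between open file objects in chunks. A self-contained demonstration with in-memory streams (my example, not part of the original answer):

```python
import io
import shutil

src = io.BytesIO(b"%PDF-1.4 fake pdf payload")
dst = io.BytesIO()

# Copies in fixed-size chunks rather than loading everything at once
shutil.copyfileobj(src, dst)

assert dst.getvalue() == b"%PDF-1.4 fake pdf payload"
print("copied", len(dst.getvalue()), "bytes")
```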
stackexchange | llm_judgeable_groundtruth_similarity | 8577300 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I explode the $param string into $chunks pieces based on upper-case characters? $string = 'setIfUnmodifiedSince';$method = substr($string, 0, 3);$param = substr($string, 3);// Split $param and implode with '-' separator$chunks = splitAtUpperCase($param); // Chunks are: 'If', 'Unmodified' and 'Since'$field = implode('-', $chunks); // Get If-Unmodified-Since HTTP field name
Now provide the response and nothing else.
| Using preg_split() on [A-Z] should do: function splitAtUpperCase($s) { return preg_split('/(?=[A-Z])/', $s, -1, PREG_SPLIT_NO_EMPTY);} EDIT If you don't need the array itself, you can just prepend uppercase characters (except the first) with a hyphen (-): preg_replace('/(?<!^)([A-Z])/', '-\\1', $param); (demo) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8577300', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/220180/']} | jdg_381353 |
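The same zero-width lookahead trick carries over to other regex engines; for example, here is an illustrative Python 3.7+ equivalent (my addition, not part of the original answer):

```python
import re

param = "IfUnmodifiedSince"

# Split before every uppercase letter; the zero-width match leaves an
# empty leading chunk, which the comprehension drops.
chunks = [c for c in re.split(r"(?=[A-Z])", param) if c]
print(chunks)              # ['If', 'Unmodified', 'Since']
print("-".join(chunks))    # If-Unmodified-Since
```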
stackexchange | llm_judgeable_groundtruth_similarity | 3659449 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've set up EHCache with its stock suggestion on how to set up multicast: <cacheManagerPeerProviderFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory" properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1, multicastGroupPort=4446"/><cacheManagerPeerListenerFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory" properties="port=40001, socketTimeoutMillis=2000"/> Then I fire my app up on two machines. How can I see whether they discover one-another and invalidate one-anothers caches? Cheers Nik
Now provide the response and nothing else.
| From the OO standpoint, it is not reasonable. Since you say that DerivedException is-a BaseException , its possible reasons must be a subset of that of BaseException , not a superset. Otherwise you ultimately break the Liskov Substitution Principle . Moreover, since C++ enums are not classes, you can't extend or inherit them. You can define additional reasons in a separate enum within DerivedException , but then ultimately you bump into the same problem described above: class DerivedException : public BaseException{ enum { SOME_OTHER_REASON = THAT_REASON + 256, // allow extensions in the base enum AND_ANOTHER_REASON }; ...};...try { ...} catch (BaseException& ex) { if (ex.getReason() == BaseException::THIS_REASON) ... else if (ex.getReason() == BaseException::THAT_REASON) ... else if (ex.getReason() == ??? what to test for here ???) ...} What you can do instead is define a separate exception subclass for each distinct reason. Then you can handle them polymorphically (if needed). This is the approach of the standard C++ library as well as other class libraries. Thus you adhere to the conventions, which makes your code easier to understand. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3659449', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/80246/']} | jdg_381354 |
stackexchange | llm_judgeable_groundtruth_similarity | 11476664 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems like it should be easy to run "explain" directly off of a queryset in Django, but I don't see anything obvious for how to do it, and "explain" is a difficult thing to search for in the docs.
Now provide the response and nothing else.
| Well, there seems to be nothing out there except a toolbar so I wrote my own mixin to give me an explain() method on my querysets: from django.db import connectionsfrom django.db.models.query import QuerySetclass QuerySetExplainMixin: def explain(self): cursor = connections[self.db].cursor() cursor.execute('explain %s' % str(self.query)) return cursor.fetchall()QuerySet.__bases__ += (QuerySetExplainMixin,) Hopefully this is useful to others. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11476664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/279104/']} | jdg_381355 |
stackexchange | llm_judgeable_groundtruth_similarity | 52616172 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple model class export interface Category { name: string; description: string;} I need to declare and initialize a variable in an Angular component. Tried: category: Category = {}; Error: {} is not assignable to a Category category: Category = new Category(); Error: Category refers to a type, but is being referred to as a value. Any suggestions?
Now provide the response and nothing else.
| There are a number of ways to solve this problem, depending on your desired result. Way 1: Convert your interface to a class export class Category { name: string; description: string;}const category: Category = new Category(); Way 2: Extend your interface as a class export class CategoryObject implements Category {}const category: Category = new CategoryObject(); Way 3: Fully specify your object, matching the interface const category: Category = { name: 'My Category', description: 'My Description',}; Way 4: Make the properties optional export interface Category { name?: string; description?: string;}const category: Category = {}; Way 5: Change your variable's type to use Partial<T> export interface Category { name: string; description: string;}const category: Partial<Category> = {}; | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/52616172', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1205647/']} | jdg_381356 |
stackexchange | llm_judgeable_groundtruth_similarity | 23351918 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given: a stream (end of the stream is EOF); a function next() to get the next element in the stream and advance the pointer in the stream; a random generator generating floats between 0 and 1 (inclusive) uniformly. Output: an element that is provably chosen at random (uniformly distributed). You can use one or two variables. You are not allowed to use an array/list, and you cannot simply read all the elements out, store them all, and then pick. This is an interview question. My thinking is: I use a var cur to store the most recently kept element. So, if I get a new element, I generate a random 0 or 1 using the generator; if it is 0 then cur = new element; otherwise, continue. If I get EOF, then return cur. Is my thinking correct? How do I prove it? Here is a similar question: How would you pick a uniform random element in a linked list with unknown length?
Now provide the response and nothing else.
| Let the current element's index be $i$. Choose to 'remember' the current element with probability $1/i$. When EOF is reached, produce the element you remember. At the end, each element with index $i$ has probability $$\frac{1}{i}\prod_{j=i+1}^{n}\left(1-\frac{1}{j}\right)=\frac{1}{i}\cdot\frac{i}{i+1}\cdot\frac{i+1}{i+2}\cdots\frac{n-1}{n}=\frac{1}{n}$$ of being the one chosen. A formal proof can be done using induction, following these guidelines. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23351918', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/759076/']} | jdg_381357 |
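A direct translation of this scheme into Python (my sketch, not the answerer's code); note it uses the 1/i coin from the answer in place of the asker's fixed 1/2 coin:

```python
import random
from collections import Counter

def pick_uniform(stream):
    """Return a uniformly random element of an arbitrary-length stream,
    keeping only the current candidate and the element index."""
    chosen = None
    for i, item in enumerate(stream, start=1):
        if random.random() < 1.0 / i:   # remember item with probability 1/i
            chosen = item
    return chosen

# Each of the 5 elements should come back about 20% of the time.
random.seed(7)
counts = Counter(pick_uniform(iter(range(5))) for _ in range(20000))
print(dict(counts))
```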
stackexchange | llm_judgeable_groundtruth_similarity | 4479597 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I have: struct whatever {int data;}; volatile whatever test; will test.data be volatile too?
Now provide the response and nothing else.
| Another question can be asked (or simply another way to look at the original question): Does making a struct const make all its members const? If I have: struct whatever { int data; };const whatever test; Will test.data be const too? My answer is: Yes. If you declare an object of type whatever with const then all its members will be const too. Similarly, if you declare an object of type whatever with volatile then all its members will be volatile too, just like if you declare the object with const, all its members will be const too. const and volatile are two faces of the same coin; so much so that the Standard often refers to them as cv-qualifiers. Quoting from the Standard ($7.1.5.1/8) [Note: volatile is a hint to the implementation to avoid aggressive optimization involving the object because the value of the object might be changed by means undetectable by an implementation. See 1.9 for detailed semantics. In general, the semantics of volatile are intended to be the same in C++ as they are in C. ] That means, if your object is an instance of a struct, then the compiler cannot avoid aggressive optimization involving the object, unless it avoids aggressive optimization of each of its members. (Otherwise, how else can it avoid optimization involving the object?) Related topic: Why do we use volatile keyword in C++? | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4479597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/382843/']} | jdg_381358 |
stackexchange | llm_judgeable_groundtruth_similarity | 21496 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say that I have a project that depends on 10 libraries, and within my project's trunk I'm free to use any versions of those libraries. So I start with the most recent versions. Then, each of those libraries gets an update once a month (on average). Now, keeping my trunk completely up to date would require updating a library reference every three days. This is obviously too much. Even though usually version 1.2.3 is a drop-in replacement for version 1.2.2, you never know without testing. Unit tests aren't enough; if it's a DB / file engine, you have to ensure that it works properly with files that were created with older versions, and maybe vice versa. If it has something to do with GUI, you have to visually inspect everything. And so on. How do you handle this? Some possible approaches: If it ain't broke, don't fix it . Stay with your current version of the library as long as you don't notice anything wrong with it when used in your application, no matter how often the library vendor publishes updates. Small incremental changes are just waste. Update frequently in order to keep change small. Since you'll have to update some day in any case, it's better to update often so that you notice any problems early when they're easy to fix, instead of jumping over several versions and letting potential problems to accumulate. Something in between. Is there a sweet spot?
Now provide the response and nothing else.
| I'm shocked - and indeed appalled - at the number of answers here saying "don't update unless you have to". I've done that, and whilst it's easier in the short term, it burns like hell in the long run. More frequent, smaller updates are much, much easier to manage than occasional big ones, and you get the benefit of new features, bug fixes, and so on sooner. I don't buy this idea that library changes are somehow more difficult to test than code changes. It's just the same - you're making a change to the codebase, and you need to validate it before you commit, and more deeply before you release. But you must already have processes to do this, since you're making code changes! If you're working in iterations, of two to four weeks length, i would suggest making updating libraries a once per iteration task, to be done as soon as possible after the start, when things are a little more relaxed than just before an iteration deadline, and the project has more capacity to absorb change. Get someone (or a pair if you do pair programming) to sit down, look at which libraries have been updated, and try bringing each one in and running a rebuild and test. Budget half a day to a day for it each iteration, perhaps. If things work, check in the changes (i'm assuming you keep libraries in source control, as we do; i'm not sure how you'd propagate the change in a controlled way if not). This will obviously be a lot easier if you have automated tests than if testing is entirely manual. Now, the question is what you do if an update breaks things - do you spend time fixing it, or leave it out? I'd suggest leaning towards the latter; if it can be fixed in an hour, do it, but if an update is going to take significant work to integrate, then raise it as its own development task, to be estimated, prioritised, and scheduled just like any other. The chances are that unless it brings in some very crucial fix or improvement, the priority will be low, and you'll never get round to it. 
But you never know, by the time the next iterationly update day rolls round, the problem might have fixed itself; even if not, at least now you know that there's a roadblock on the update path, and it won't catch you by surprise. If you're not doing iterations of that length, i would set up some kind of standalone schedule for updates - no longer than monthly. Is there some other project rhythm you could tie it to, like a monthly status review, or an architecture board meeting? Payday? Pizza night? Full moon? Whatever, you need to find something a lot shorter than a traditional release cycle, because trying to update everything in one go every 6-18 months is going to be painful and demoralising. Needless to say, if you do stabilisation branches before releases, you wouldn't apply this policy to them. There, you'd only update libraries to get critical fixes. | {} | {'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/21496', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/2550/']} | jdg_381359 |
stackexchange | llm_judgeable_groundtruth_similarity | 137077 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
calculate $\displaystyle\int_{0}^{i} e^z\, dz$ Could someone help me through this problem?
Now provide the response and nothing else.
| The function $e^z$ is entire. Hence, contour integrals are independent of path. The antiderivative of $e^z$ is just $e^z$. Stick in the endpoints, and do the subtraction. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/137077', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/29950/']} | jdg_381360 |
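Carrying out that subtraction explicitly (a worked line added for completeness, not part of the original answer):

```latex
\int_{0}^{i} e^{z}\,dz \;=\; \Big[\, e^{z} \,\Big]_{0}^{i}
\;=\; e^{i} - e^{0}
\;=\; (\cos 1 + i\sin 1) - 1
\;\approx\; -0.4597 + 0.8415\,i .
```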
stackexchange | llm_judgeable_groundtruth_similarity | 23089895 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have a variable like this c<-c("9/21/2011 0:00:00", "9/25/2011 0:00:00", "10/2/2011 0:00:00", "9/28/2011 0:00:00", "9/27/2011 0:00:00") what's a quick way to remove all 0:00:00 s so that c[1] "9/21/2011" "9/25/2011" "10/2/2011" "9/28/2011" "9/27/2011"
Now provide the response and nothing else.
| You can turn them into dates and then format as desired, e.g.: v <- c("9/21/2011 0:00:00", "9/25/2011 0:00:00", "10/2/2011 0:00:00", "9/28/2011 0:00:00", "9/27/2011 0:00:00")v <- format(as.POSIXct(v,format='%m/%d/%Y %H:%M:%S'),format='%m/%d/%Y')> v[1] "09/21/2011" "09/25/2011" "10/02/2011" "09/28/2011" "09/27/2011" Or, you can simply remove the " 0:00:00" substring using gsub: v <- gsub(x=v,pattern=" 0:00:00",replacement="",fixed=T)> v[1] "9/21/2011" "9/25/2011" "10/2/2011" "9/28/2011" "9/27/2011" | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23089895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3015453/']} | jdg_381361 |
stackexchange | llm_judgeable_groundtruth_similarity | 2373 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The "Obamcare" bill was passed by a narrow 220-215 vote in House with no Republican support. And this was accomplished using legislative tactics such as "deemed passage." Parts of the bill, seem to not be fulfilling its stated purpose. For instance, people were assured that they could keep the insurance they wanted; now it turns out that they can't. Why was the bill passed so quickly? Did these reasons trump "efficiency" arguments?
Now provide the response and nothing else.
| Obamacare was forced through so quickly because it never would have passed otherwise, or would have been significantly changed before passing. The Democrats had a super-majority in Congress until Ted Kennedy died in August of 2009. Then Scott Brown, a Republican, would be elected in the special election, which would end the super-majority in the Senate and allow the Republicans to filibuster. This forced the Senate to act fast, so they passed their version on December 24; Scott Brown was elected in January and sworn in on February 4th. The House then had to pass the Senate bill as is in order to avoid sending a bill back to the now deadlocked Senate. This lacked any real support from most Democrats as well, so they did some creative parliamentary procedures to turn the bill into a budget bill, which is subject to reconciliation, which blocks a filibuster in the Senate. This also limited any House changes to budgetary concerns, which required the executive order about abortions to be created to satisfy some holdout Democrats. The reasons for all this were mostly political; this was supposed to be Obama's signature reform and his legacy. The Democrats weren't interested in negotiating with Republicans because they had won big in the 2008 elections, and this was also partly their victory legislation. The bill was also getting more and more unpopular as the "debate" went on and got uglier, and it started becoming apparent that anyone who voted for the bill who wasn't in a totally safe district would face major challenges in reelection bids for supporting it, especially Republicans; by passing it fast there was hope that the public's short memory would forget the worst transgressions. The bill was also passed quickly because it had huge welfare spending in it in the form of Medicaid expansions and premium subsidies, which once implemented would be nearly impossible to repeal. 
The spending in the bill was huge, and it needed years of extra taxes being collected to build up a cash reserve so that the bill could be rated as budget neutral. | {} | {'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/2373', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/1654/']} | jdg_381362 |
stackexchange | llm_judgeable_groundtruth_similarity | 88872 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
DBSCAN is the most cited clustering algorithm according to some literature, and it can find arbitrary-shape clusters based on density. It has two parameters, eps (the neighborhood radius) and minPts (the minimum number of neighbors needed to consider a point a core point), on which I believe it highly depends. Is there any routine or commonly used method to choose these parameters?
Now provide the response and nothing else.
| There are plenty of publications that propose methods to choose these parameters. The most notable is OPTICS, a DBSCAN variation that does away with the epsilon parameter; it produces a hierarchical result that can roughly be seen as "running DBSCAN with every possible epsilon". For minPts, I suggest relying not on an automatic method, but on your domain knowledge. A good clustering algorithm has parameters that allow you to customize it to your needs. A parameter that you overlooked is the distance function. The first thing to do for DBSCAN is to find a good distance function for your application. Do not rely on Euclidean distance being the best for every application! | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/88872', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/38012/']} | jdg_381363 |
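To make the roles of the two parameters concrete, here is a minimal pure-Python DBSCAN sketch (illustrative only, not a reference implementation): eps is the radius used by the region query, and minPts is the neighbor count that promotes a point to a core point.

```python
import math  # math.dist needs Python 3.8+

def region_query(points, i, eps):
    """Indices of all points within distance eps of points[i] (incl. itself)."""
    return [j for j, q in enumerate(points) if math.dist(points[i], q) <= eps]

def dbscan(points, eps, min_pts):
    UNVISITED, NOISE = None, -1
    labels = [UNVISITED] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not UNVISITED:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:         # not a core point (for now)
            labels[i] = NOISE
            continue
        labels[i] = cluster                  # start a new cluster at core point i
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels[j] == NOISE:           # noise reachable from a core point
                labels[j] = cluster          # ...becomes a border point
            if labels[j] is not UNVISITED:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:  # j is itself a core point
                seeds.extend(j_neighbors)
        cluster += 1
    return labels

points = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (20, 20)]
print(dbscan(points, eps=0.5, min_pts=3))   # [0, 0, 0, 1, 1, 1, -1]
```

Shrinking eps or raising minPts makes the clustering stricter; with minPts larger than any neighborhood, every point degenerates to noise.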
stackexchange | llm_judgeable_groundtruth_similarity | 46476882 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's assume we have this code: <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" /><div class="container"> <div class="row"> <div class="col-sm-8 col-md-9" style="background-color: red;"> <div style="background-color: purple;"> Div inside the main row </div> <div class="row"> <div style="background-color: blue;"> Div in a nested row </div> </div> </div> </div></div> How to make the nested row take the same width as the first div using Bootstrap ? Thank you for your time
Now provide the response and nothing else.
| That's because Popper.js needs the popper element to be rendered in the DOM (aka, have a position in the document) to be able to properly compute its position. You are initializing Popper.js when the element is hidden, and then you toggle its visibility to show it, but Popper.js doesn't know that something changed. When you scroll the page, or resize it, Popper.js updates the position of your popper because it listens by default to these events. You should manually run catsTooltip.scheduleUpdate() after .toggle() to have it properly positioned. https://codepen.io/FezVrasta/pen/PJjWWZ | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46476882', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6115256/']} | jdg_381364 |
stackexchange | llm_judgeable_groundtruth_similarity | 26082467 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an application using a navigation drawer that provides a list of locations. In the drawer, there are several options (like choosing country, city, etc.) that the user can set up before showing the corresponding list in the main activity. Is there any possibility to refresh the list when the user closes the drawer, or maybe there is another way to solve this? I've tried to search for tutorials but found nothing about this drawer-closed listener. Any suggestions would be helpful, thanks!
Now provide the response and nothing else.
| When you setup the ActionBarDrawerToggle you can "implement" the onDrawerClosed and onDrawerOpened callbacks. See the following example from the Docs : mDrawerToggle = new ActionBarDrawerToggle(this, mDrawerLayout, R.drawable.ic_drawer, R.string.drawer_open, R.string.drawer_close) { /** Called when a drawer has settled in a completely closed state. */ public void onDrawerClosed(View view) { super.onDrawerClosed(view); // Do whatever you want here } /** Called when a drawer has settled in a completely open state. */ public void onDrawerOpened(View drawerView) { super.onDrawerOpened(drawerView); // Do whatever you want here } };// Set the drawer toggle as the DrawerListenermDrawerLayout.addDrawerListener(mDrawerToggle); Edit: Now the setDrawerListener is deprecated, use addDrawerListener instead. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26082467', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3957761/']} | jdg_381365 |
stackexchange | llm_judgeable_groundtruth_similarity | 18319 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to be able to state that the following problem is NP-hard. I am wondering whether anybody has any pointers to related/recent work? The problem: Given a finite set of transition matrices $A$ and two non-negative vectors $\vec{x}$ and $\vec{y}$, do there exist $A_1, A_2, \ldots, A_n \in A$ such that $$\vec{x} \, A_1 \, A_2 \cdots A_n \, \vec{y} \geq P$$
Now provide the response and nothing else.
| If you allow the repetition of matrices, i.e. there exists $ 1 \leq i < j \leq n $ s.t. $ A_i =A_j $, then your problem is actually undecidable. Let $ EMPTY_{PFA} $ be the emptiness problem for probabilistic finite automata (PFAs). A PFA is a 4-tuple: $ P=(\Sigma,\{A_{\sigma \in \Sigma}\},x,y) $, where $\Sigma = \{\sigma_1,\ldots,\sigma_k\}$ is the input alphabet, each $ A_{\sigma} $ is a stochastic matrix, $x$ is a stochastic row vector (the initial distribution), and $ y $ is a zero-one column vector. Each word, say $w \in \Sigma^*$, corresponds to a sequence of the matrices from $ \{A_{\sigma \in \Sigma}\} $ by allowing repetition, and vice versa. The accepting probability of $w$ by $P$ is as follows: $$f_P(w) = x \cdot A_{w_1} \cdot A_{w_2} \cdots A_{w_{|w|}} \cdot y, $$ where $w_i$ is the $i^{th}$ symbol of $w$ and $|w|$ is the length of $w$. $ EMPTY_{PFA} $ is the problem of deciding, for a given PFA $ P $ and a threshold $ \lambda \in (0,1) $, whether there exists a word accepted with probability at least $ \lambda $. $ EMPTY_{PFA} $ was shown to be undecidable. It is an old result, and you can start digging from this article: http://arxiv.org/abs/quant-ph/0304082 $ EMPTY_{PFA} $ can be reduced to your problem. So, if your problem were decidable, then $ EMPTY_{PFA} $ would also be decidable. But this is a contradiction. So, your problem is undecidable, too. | {} | {'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/18319', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/16898/']} | jdg_381366
stackexchange | llm_judgeable_groundtruth_similarity | 28222703 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Reading Git's documentation, one of the things they stress a lot is that Git stores snapshots and not deltas. Since I saw a course on Git saying that Git stores differences between versions of files, I tried the following: I initialized a git repository in an empty folder, created a file lorem.txt containing some lorem ipsum text, staged the file, and committed. Then, using find .git/objects -type f on the command line, I listed what Git saved in the objects folder and, as expected, found a commit object pointing to a tree object pointing to a blob object containing the lorem ipsum text I saved. Then I modified the lorem ipsum text, adding more content to it, staged this change, and committed. Listing the files again, I could now see the new commit object, pointing to a new tree object and to a new blob object. Using git cat-file -p 331cf0780688c73be429fa602f9dd99f18b36793 I could see the contents of the new blob. They were exactly the contents of the full lorem.txt file: the old contents plus the change. This works as the documentation says: Git stores snapshots, not deltas. However, searching on the internet I found this SO question. In the accepted answer we see the following: While that's true and important on the conceptual level, it is NOT true at the storage level. Git does use deltas for storage. Not only that, but it's more efficient in it than any other system. Because it does not keep per-file history, when it wants to do delta-compression, it takes each blob, selects some blobs that are likely to be similar (using heuristics that includes the closest approximation of previous version and some others), tries to generate the deltas and picks the smallest one. This way it can (often, depends on the heuristics) take advantage of other similar files or older versions that are more similar than the previous. The "pack window" parameter allows trading performance for delta compression quality. 
The default (10) generally gives decent results, but when space is limited or to speed up network transfers, git gc --aggressive uses value 250, which makes it run very slow, but provide extra compression for history data. This says that Git does use deltas for storage. As I understand it, Git doesn't use deltas all the time, but only when it detects that it is necessary. Is this true? I placed a lot of lorem text in the file, so that it's 2 MB in size. I thought that when making a small change to a big text file, Git would automatically use deltas, but as I said, it didn't. When does Git use deltas, and how does this work?
Now provide the response and nothing else.
| Git only uses deltas in "packfiles". Initially, each git object is written as a separate file (as you found). Later, git can pack many objects into one file, called a "pack file". The pack file is then compressed, which automatically exploits any repetitions between the files in the packfile (or repetitions inside files). This packing is performed by git repack . You can see it in action by invoking it manually. If you run git repack -ad on a git repo, you should see used disk space and number of files under .git/objects drop, as files are combined into packs and compressed. In practice, you don't usually need to run git repack . Git by default regularly runs git gc , which in turn runs git repack when necessary. So relax, git has your back :-). The excellent "git book" also has a chapter on packfiles with more explanations: http://git-scm.com/book/en/v2/Git-Internals-Packfiles . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28222703', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1620696/']} | jdg_381367 |
stackexchange | llm_judgeable_groundtruth_similarity | 21324 |
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Are there any languages where there are different plural forms depending on the count? For example: 1 cook 2 cooks 10 cooks (this would be a different word)
Now provide the response and nothing else.
| Indeed! The most common form of this involves having a dual number, used for exactly two things, and a plural number, used for any more than that. You'll find this in older Indo-European languages and modern Inuit and Semitic languages. Arabic: kitaab "book", kitaabayn "[two] books", kutub "[more than two] books" (Biblical) Hebrew: yōm "day", yomayim "[two] days", yāmīm "[more than two] days" (Epic) Greek: anthrōpos "human", anthrōpō "[two] humans", anthrōpoi "[more than two] humans" Inuktitut: matu "door", matuuk "[two] doors", matuit "[more than two] doors" Some languages also have a trial number, used for three things; the trial only exists in languages that also have a dual, and has never been documented to exist in nouns, only in pronouns. Others, instead of dual/plural, have a paucal/plural distinction. The paucal is used for "a few" of something, that is, any small number. I'm less familiar with this one, but something similar does exist in Russian, only in the genitive case: 1, 2, 3, 4 kamnja "of stones [few]" 5, 6, 7, etc kamnej "of stones [many]" | {} | {'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/21324', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/15430/']} | jdg_381368 |
stackexchange | llm_judgeable_groundtruth_similarity | 6200270 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to write a decorator that prints the details of each call to the decorated function: parameter names and their effective values. My current implementation is this. def describeFuncCall(func): """ Decorator to print function call details. parameters names and effective values. """ def wrapper(*func_args, **func_kwargs): print "func_code.co_varnames =", func.func_code.co_varnames print "func_code.co_argcount =", func.func_code.co_argcount print "func_args =", func_args print "func_kwargs =", func_kwargs params = [] for argNo in range(func.func_code.co_argcount): argName = func.func_code.co_varnames[argNo] argValue = ( func_args[argNo] if argNo < len(func_args) else func.func_defaults[argNo - func.func_code.co_argcount] ) params.append((argName, argValue)) for argName, argValue in func_kwargs.items(): params.append((argName, argValue)) params = [argName + " = " + repr(argValue) for argName, argValue in params] print (func.__name__ + " ( " + ", ".join(params) + " )") return func(*func_args, **func_kwargs) return wrapper@describeFuncCalldef test(a, b=4, c="blah-blah", *args, **kwargs): passtest(1)# test(1, 3)# test(1, d = 5)test(1, 2, 3, 4, 5, d=6, g=12.9) It kinda works, but with some bugs: for the call test(1, 2, 3, 4, 5, d = 6, g = 12.9) it prints test ( a = 1, b = 2, c = 3, d = 6, g = 12.9 ). The expected result is test ( a = 1, b = 2, c = 3, args = [4, 5], kwargs = {'d': 6, 'g': 12.9} ). I got stuck here. Can you help me find the right solution?
Now provide the response and nothing else.
| Sorry, it's a bit messy. I modified some code from Easy Dump of Function Arguments in the PythonDecoratorLibrary. def dump_args(func): "This decorator dumps out the arguments passed to a function before calling it" argnames = func.func_code.co_varnames[:func.func_code.co_argcount] fname = func.func_name def echo_func(*args,**kwargs): print fname, "(", ', '.join( '%s=%r' % entry for entry in zip(argnames,args[:len(argnames)])+[("args",list(args[len(argnames):]))]+[("kwargs",kwargs)]) +")" return echo_func@dump_argsdef test(a, b = 4, c = 'blah-blah', *args, **kwargs): passtest(1, 2, 3, 4, 5, d = 6, g = 12.9) Output: test ( a=1, b=2, c=3, args=[4, 5], kwargs={'d': 6, 'g': 12.9}) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6200270', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/248296/']} | jdg_381369
stackexchange | llm_judgeable_groundtruth_similarity | 29206612 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have created a TimeSeries in pandas: In [346]: from datetime import datetimeIn [347]: dates = [datetime(2011, 1, 2), datetime(2011, 1, 5), datetime(2011, 1, 7), .....: datetime(2011, 1, 8), datetime(2011, 1, 10), datetime(2011, 1, 12)]In [348]: ts = Series(np.random.randn(6), index=dates)In [349]: tsOut[349]: 2011-01-02 0.6900022011-01-05 1.0015432011-01-07 -0.5030872011-01-08 -0.6222742011-01-10 -0.9211692011-01-12 -0.726213 I'm following the example from the 'Python for Data Analysis' book. In the following paragraph, the author checks the index type: In [353]: ts.index.dtypeOut[353]: dtype('datetime64[ns]') When I do exactly the same operation in the console I get: ts.index.dtypedtype('<M8[ns]') What is the difference between the two types 'datetime64[ns]' and '<M8[ns]'? And why do I get a different type?
Now provide the response and nothing else.
| datetime64[ns] is a general dtype, while <M8[ns] is a specific dtype. General dtypes map to specific dtypes, but may be different from one installation of NumPy to the next. On a machine whose byte order is little endian, there is no difference between np.dtype('datetime64[ns]') and np.dtype('<M8[ns]') : In [6]: np.dtype('datetime64[ns]') == np.dtype('<M8[ns]')Out[6]: True However, on a big endian machine, np.dtype('datetime64[ns]') would equal np.dtype('>M8[ns]') . So datetime64[ns] maps to either <M8[ns] or >M8[ns] depending on the endian-ness of the machine. There are many other similar examples of general dtypes mapping to specific dtypes: int64 maps to <i8 or >i8 , and int maps to either int32 or int64 depending on the bit architecture of the OS and how NumPy was compiled. Apparently, the repr of the datetime64 dtype has change since the time the book was written to show the endian-ness of the dtype. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/29206612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2758414/']} | jdg_381370 |
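Since the general-to-specific mapping just follows the machine's byte order, you can predict which specific dtype your machine will report without NumPy at all — a small standard-library sketch (my own illustration, not part of the original answer):

```python
import sys

# In NumPy dtype strings, '<' marks little-endian and '>' big-endian.
# The general dtype 'datetime64[ns]' resolves to whichever prefix
# matches the machine Python is running on.
native_prefix = '<' if sys.byteorder == 'little' else '>'

print(sys.byteorder)             # 'little' on x86 and most ARM machines
print(native_prefix + 'M8[ns]')  # the specific dtype 'datetime64[ns]' maps to here
```

On the same machine, `np.dtype('datetime64[ns]').str` would return exactly that `native_prefix + 'M8[ns]'` string, and similarly `np.dtype('int64').str` would be `native_prefix + 'i8'`.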
stackexchange | llm_judgeable_groundtruth_similarity | 273227 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Three bags of apples and two bags of oranges weigh $32$ pounds. Four bags of apples and three bags of oranges weigh $44$ pounds. All bags of apples weigh the same. All bags of oranges weigh the same. What is the weight of two bags of apples and one bag of oranges?
Now provide the response and nothing else.
| $x$: weight of a bag of apples (in pounds) $y$: weight of a bag of oranges (in pounds) First we "translate" the givens into algebraic equations: $(1)$ "Three bags of apples and two bags of oranges weigh $32$ pounds." $\implies 3x + 2y = 32$. $(2)$ "Four bags of apples and three bags of oranges weigh $44$ pounds." $\implies 4x + 3y = 44$ This gives us the system of two equations in two unknowns:$$3x + 2y = 32\tag{1}$$$$4x + 3y = 44\tag{2}$$ Ask your daughter to solve the system of two equations in two unknowns to determine the values of $x$ and $y$. Hints for your daughter: multiply equation $(1)$ by $3$, and multiply equation $(2)$ by $2$: $$9x + 6y = 96\tag{1.1}$$$$8x + 6y = 88\tag{2.1}$$ subtract equation $(2.1)$ from equation $(1.1)$, which will give the value of $x$. Solve for $y$ using either equation $(1)$ or $(2)$ and your value for $x$. Then determine what $2x + y$ equals. That will be your (her) solution. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/273227', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/55630/']} | jdg_381371 |
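To let her check the answer mechanically (this solver is my own addition, not part of the original hint — skip it if you don't want the numbers spoiled), the elimination above is just Cramer's rule on a 2×2 system:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1          # must be nonzero for a unique solution
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Equations (1) and (2) from the answer:
x, y = solve_2x2(3, 2, 32, 4, 3, 44)
print(x, y, 2 * x + y)  # 8.0 4.0 20.0
```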
stackexchange | llm_judgeable_groundtruth_similarity | 39357 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem which is somehow related to detecting a peak. Let me summarize the problem a little bit, so it might be clear to you. I have two signals; both are non-periodic, and I know when they start and when they end. I also know each signal has exactly one peak. The signal might be a bit noisy, but I think this could be eliminated. Now I need to know when, between the start and end points, the peak occurs. The best option would be that I get an output signal as soon as the peak occurs. Yet I do not know how high the peak will be. This can vary a lot. The signal itself looks like a gaussian. Now I could use a high-speed ADC and sample the signal continuously, do some small calculations, and get the peak. Yet what I would like much more would be to simply get an "event" telling me: there was the peak. I don't mind if I get this signal a bit later, as long as the time difference between the peak and the event is constant. I absolutely do NOT care about the peak level; I only want to know the point in time when this happened. Does anyone know of a circuit which is able to satisfy these requirements?
Now provide the response and nothing else.
| I don't especially like the idea of doing this with a differentiator, because your gaussian pulse has a relatively flat top, so there will be a time when the derivative is nearly zero, so that the moment when you detect the peak will be fairly noise sensitive. If the shape and amplitude of the input pulse is the same each time, you may be able to get what you want with a standard op-amp peak-detector followed by a comparator: What this is doing is not trying to detect the peak event directly, but detecting when the input (IN) falls by one diode drop below the peak. When this happens, you'll get a rising edge at OUT. After each cycle, your micro should assert the CLR line to clear out the peak detector. You could reduce the delay from the peak to the output edge by using a schottky diode at D2. If your pulse is very slow, the value of C1 and R1 also need to be chosen large enough so that the capacitor doesn't bleed down too quickly after the peak. The diode and FET types are just the default ones in CircuitLab. You'll want to choose these more carefully. You'll want to choose an op-amp appropriate for the time scale of your signal, and you'll also need one that is okay being used as a comparator (e.g. one that doesn't go crazy when its two inputs are not at the same voltage and that can recover reasonably quickly from being driven to the rails). Edit: I updated the schematic, adding R2 in the FET drain, to avoid extreme currents through D1 in the event the pulse arrives while you are resetting. Ideally you'll give your reset pulse at a time when you know the pulse won't occur. Otherwise, you'll need to make R2 big enough that you don't blow out OA2 or D1 if it does arrive when CLR is asserted; and make the CLR pulse width long enough to fully discharge C1 through R2. 
| {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/39357', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/11970/']} | jdg_381372 |
stackexchange | llm_judgeable_groundtruth_similarity | 7886096 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read an SO question about another user encountering a similar error, but this error occurs in a different case. I received this message when I initially added a view controller: Unbalanced calls to begin/end appearance transitions for <UITabBarController: 0x197870> The structure of the app is as follows: I have a 5-tab TabBarController linked to 5 view controllers. In the initially shown tab, I present a new view controller as an overlay to serve as an introduction to the app. I use this code to call the introduction view controller: IntroVC *vc = [[IntroVC alloc] init];[self presentModalViewController:vc animated:YES];[vc release]; After this IntroVC view controller shows up, the above error appears. P.S. I am using Xcode 4.2 & the iOS 5.0 SDK, developing an iOS 4.3 app.
Now provide the response and nothing else.
| Without seeing more of the surrounding code I can't give a definite answer, but I have two theories. You're not using UIViewController 's designated initializer initWithNibName:bundle: . Try using it instead of just init . Also, self may be one of the tab bar controller's view controllers. Always present view controllers from the topmost view controller, which means in this case ask the tab bar controller to present the overlay view controller on behalf of the view controller. You can still keep any callback delegates to the real view controller, but you must have the tab bar controller present and dismiss. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7886096', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/188331/']} | jdg_381373 |
stackexchange | llm_judgeable_groundtruth_similarity | 2500988 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that we use the notation A∩B = ∅ to express that the sets A and B are disjoint. But could we express it like this? ∀x( x∈A ∧ x∈B ) Because a disjoint set is actually an empty intersection, and since we can express an intersection like this ∀x(x∈A ∧ x∈B), I thought that it would be the same. What do you think? Thank you.
Now provide the response and nothing else.
| We have that $x \in A \cap B \text { iff } x ∈ A \land x ∈ B$. Thus, to say that $A \cap B = \emptyset$ is to say that no $x$ belongs to $A \cap B$. Thus, to say that $A$ and $B$ are disjoint , is to say that there are no common elements: $∀x (x∉A \lor x∉B)$, or, which is the same: $∀x (x \in A \to x ∉ B)$. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2500988', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/490232/']} | jdg_381374 |
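The two characterizations — the empty intersection and the quantified formula — can be cross-checked mechanically. A small Python illustration (the particular sets here are arbitrary):

```python
A = {1, 2, 3}
B = {4, 5}   # disjoint from A
C = {3, 4}   # shares the element 3 with A

def disjoint_by_formula(s, t):
    """∀x (x ∉ s ∨ x ∉ t), quantifying over the elements in play."""
    return all(x not in s or x not in t for x in s | t)

# Both characterizations agree, and match Python's built-in predicate:
assert (A & B == set()) == disjoint_by_formula(A, B) == A.isdisjoint(B)
assert (A & C == set()) == disjoint_by_formula(A, C) == A.isdisjoint(C)
print(A.isdisjoint(B), A.isdisjoint(C))  # True False
```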
stackexchange | llm_judgeable_groundtruth_similarity | 5270782 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading the book 'C# in Depth, 2nd Edition' by Jon Skeet. He said that we can call extension methods with dynamic arguments using two workarounds, as in: dynamic size = 5;var numbers = Enumerable.Range(10, 10);var error = numbers.Take(size);var workaround1 = numbers.Take((int) size);var workaround2 = Enumerable.Take(numbers, size); Then he said, "Both approaches will work if you want to call the extension method with the dynamic value as the implicit this value". I don't know how to achieve that. Thanks a lot.
Now provide the response and nothing else.
| Like this: dynamic numbers = Enumerable.Range(10, 10);var firstFive = Enumerable.Take(numbers, 5); In other words, just call it as a static method instead of as an extension method. Or if you know an appropriate type argument you could just cast it, which I'd typically do with an extra variable: dynamic numbers = Enumerable.Range(10, 10);var sequence = (IEnumerable<int>) numbers;var firstFive = sequence.Take(5); ... but if you're dealing with dynamic types, you may well not know the sequence element type, in which case the first version lets the "execution time compiler" figure it out, basically. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5270782', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/622606/']} | jdg_381375 |
stackexchange | llm_judgeable_groundtruth_similarity | 448285 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Graph being drawn inside a UIScrollView. It's one large UIView using a custom subclass of CATiledLayer as its layer. When I zoom in and out of the UIScrollView, I want the graph to resize dynamically like it does when I return the graph from viewForZoomingInScrollView. However, the Graph redraws itself at the new zoom level, and I want to reset the transform scale to 1x1 so that the next time the user zooms, the transform starts from the current view. If I reset the transform to Identity in scrollViewDidEndZooming, it works in the simulator, but throws an EXC_BAD_ACCESS on the device. This doesn't even solve the issue entirely on the simulator either, because the next time the user zooms, the transform resets itself to whatever zoom level it was at, and so it looks like, if I was zoomed to 2x, for example, it's suddenly at 4x. When I finish the zoom, it ends up at the correct scale, but the actual act of zooming looks bad. So first: how do I allow the graph to redraw itself at the standard scale of 1x1 after zooming, and how do I have a smooth zoom throughout? Edit: New findings. The error seems to be "[CALayer retainCount]: message sent to deallocated instance". I'm never deallocating any layers myself. Before, I wasn't even deleting any views or anything. This error was being thrown on zoom and also on rotate. If I delete the object before rotation and re-add it afterward, it doesn't throw the exception. This is not an option for zooming.
Now provide the response and nothing else.
| I can't help you with the crashing, other than tell you to check and make sure you aren't unintentionally autoreleasing a view or layer somewhere within your code. I've seen the simulator handle the timing of autoreleases differently than on the device (most often when threads are involved). The view scaling is an issue with UIScrollView I've run into, though. During a pinch-zooming event, UIScrollView will take the view you specified in the viewForZoomingInScrollView: delegate method and apply a transform to it. This transform provides a smooth scaling of the view without having to redraw it each frame. At the end of the zoom operation, your delegate method scrollViewDidEndZooming:withView:atScale: will be called and give you a chance to do a more high-quality rendering of your view at the new scale factor. Generally, it's suggested that you reset the transform on your view to be CGAffineTransformIdentity and then have your view manually redraw itself at the new size scale. However, this causes a problem because UIScrollView doesn't appear to monitor the content view transform, so on the next zoom operation it sets the transform of the content view to whatever the overall scale factor is. Since you've manually redrawn your view at the last scale factor, it compounds the scaling, which is what you're seeing. As a workaround, I use a UIView subclass for my content view with the following methods defined: - (void)setTransformWithoutScaling:(CGAffineTransform)newTransform;{ [super setTransform:newTransform];}- (void)setTransform:(CGAffineTransform)newValue;{ [super setTransform:CGAffineTransformScale(newValue, 1.0f / previousScale, 1.0f / previousScale)];} where previousScale is a float instance variable of the view. 
I then implement the zooming delegate method as follows: - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale;{ [contentView setTransformWithoutScaling:CGAffineTransformIdentity];// Code to manually redraw view at new scale here contentView.previousScale = scale; scrollView.contentSize = contentView.frame.size;} By doing this, the transforms sent to the content view are adjusted based on the scale at which the view was last redrawn. When the pinch-zooming is done, the transform is reset to a scale of 1.0 by bypassing the adjustment in the normal setTransform: method. This seems to provide the correct scaling behavior while letting you draw a crisp view at the completion of a zoom. UPDATE (7/23/2010): iPhone OS 3.2 and above have changed the behavior of scroll views in regards to zooming. Now, a UIScrollView will respect the identity transform you apply to a content view and only provide the relative scale factor in -scrollViewDidEndZooming:withView:atScale: . Therefore, the above code for a UIView subclass is only necessary for devices running iPhone OS versions older than 3.2. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/448285', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/36007/']} | jdg_381376 |
stackexchange | llm_judgeable_groundtruth_similarity | 6477051 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to write a test for the value of the alt text and src of an image using Capybara and CSS selectors. Any ideas on how to test both in a single XPath?
Now provide the response and nothing else.
| A slightly simpler way will be to use an id for that image: page.find('#profile-avatar')['src'].should have_content 'default.png' Updated for rspec 3: expect(page.find('#profile-avatar')['src']).to have_content 'default.png' expect(page.find('#profile-avatar')['alt']).to match(/some-value/) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6477051', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/815192/']} | jdg_381377 |
stackexchange | llm_judgeable_groundtruth_similarity | 1293111 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have difficulty understanding the difference between passing by value and passing by reference. Can someone provide a C# example illustrating the difference?
Now provide the response and nothing else.
| In general, read my article about parameter passing . The basic idea is: If the argument is passed by reference, then changes to the parameter value within the method will affect the argument as well. The subtle part is that if the parameter is a reference type, then doing: someParameter.SomeProperty = "New Value"; isn't changing the value of the parameter. The parameter is just a reference, and the above doesn't change what the parameter refers to, just the data within the object. Here's an example of genuinely changing the parameter's value: someParameter = new ParameterType(); Now for examples: Simple example: passing an int by ref or by value class Test{ static void Main() { int i = 10; PassByRef(ref i); // Now i is 20 PassByValue(i); // i is *still* 20 } static void PassByRef(ref int x) { x = 20; } static void PassByValue(int x) { x = 50; }} More complicated example: using reference types class Test{ static void Main() { StringBuilder builder = new StringBuilder(); PassByRef(ref builder); // builder now refers to the StringBuilder // constructed in PassByRef PassByValueChangeContents(builder); // builder still refers to the same StringBuilder // but then contents has changed PassByValueChangeParameter(builder); // builder still refers to the same StringBuilder, // not the new one created in PassByValueChangeParameter } static void PassByRef(ref StringBuilder x) { x = new StringBuilder("Created in PassByRef"); } static void PassByValueChangeContents(StringBuilder x) { x.Append(" ... and changed in PassByValueChangeContents"); } static void PassByValueChangeParameter(StringBuilder x) { // This new object won't be "seen" by the caller x = new StringBuilder("Created in PassByValueChangeParameter"); }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1293111', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_381378 |
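The subtle point in the answer — mutating the object a parameter refers to versus reassigning the parameter itself — is not specific to C#. As an aside (my own sketch, not from the original answer), the two "reference type passed by value" cases translate directly to Python, which has no `ref` keyword at all:

```python
def change_contents(sb):
    # Mutates the object the caller's variable also refers to,
    # like PassByValueChangeContents in the C# example.
    sb.append("... changed inside the function")

def change_parameter(sb):
    # Rebinds only the local name; the caller never sees the new object,
    # like PassByValueChangeParameter.
    sb = ["created inside the function"]

builder = ["original"]
change_contents(builder)
change_parameter(builder)
print(builder)  # ['original', '... changed inside the function']
```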
stackexchange | llm_judgeable_groundtruth_similarity | 40244360 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am getting this stroke inside of the button for an email I'm coding. It only appears in Outlook 2007, 2010, and 2013, and I can't figure out how to remove it. Any ideas? Code: `<a href="http://buttons.cm" style="color: #ffffff; background: #a52023; border: 1px solid #a52023; display: inline-block; font-family: tahoma; font-weight: 900; letter-spacing: 2px; font-size: 16px; border-radius: 80px; line-height: 60px; text-align: center; text-decoration: none; width: 275px; -webkit-text-size-adjust: none; mso-hide: all;">GET FREE WORKWEAR</a>`
Now provide the response and nothing else.
| Yea, Windows Outlook 2007, 2010, 2013, and even 2016 all have spotty support for the box model. To get a nice-looking, clickable button working (unfortunately) requires a lot more code. You might need to play with the border thickness, padding, and number of `&nbsp;`s, but this should solve your issue and give you a decent-looking, working button across all email clients (including Outlook): <!-- Button : Begin --><table role="presentation" cellspacing="0" cellpadding="0" border="0" align="center" style="margin: auto;"> <tr> <td style="border-radius: 3px; background: #a52023; text-align: center;"> <a href="http://www.google.com" style="background: #a52023; border: 15px solid #a52023; font-family: tahoma; font-weight: 900; letter-spacing: 2px; font-size: 16px; mso-height-rule: exactly; line-height: 60px; text-align: center; text-decoration: none; display: block; border-radius: 3px; font-weight: 900; text-transform: uppercase;"> <span style="color:#ffffff">Get Free Workwear</span> </a> </td> </tr></table><!-- Button : END --> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40244360', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6498615/']} | jdg_381379
stackexchange | llm_judgeable_groundtruth_similarity | 23578059 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing an Android app where I want the activity to appear by animating in from the bottom of the screen to the top. I am able to do this with code from here: How to translate the activity from top to bottom and viceversa? However, I am not able to do the vice-versa animation wherein the Activity would disappear by sliding from the top to the bottom of the screen. I used the code in the above link; the activity appears by sliding up, but when disappearing, it fades out, instead of sliding to the bottom. I even tried putting the code in onCreate() :

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    this.requestWindowFeature(Window.FEATURE_NO_TITLE);
    overridePendingTransition(R.anim.appear_from_bottom, R.anim.disappear_to_bottom);
    setContentView(R.layout.activity_all_metadata_display);
    initializePage();
}
Now provide the response and nothing else.
| You need to define your "slide up" animations from the linked question, and some new "slide down" animations that reverse the process. The important parts of the animations to look at are the fromYDelta and toYDelta values. These define the Y-positions (of the top of your view) at the start & end of the animations. slide_in_up.xml <?xml version="1.0" encoding="utf-8"?><translate xmlns:android="http://schemas.android.com/apk/res/android" android:duration="@android:integer/config_longAnimTime" android:fromYDelta="100%p" android:toYDelta="0%p" /> slide_out_up.xml <?xml version="1.0" encoding="utf-8"?><translate xmlns:android="http://schemas.android.com/apk/res/android" android:duration="@android:integer/config_longAnimTime" android:fromYDelta="0%p" android:toYDelta="-100%p" /> slide_in_down.xml <?xml version="1.0" encoding="utf-8"?><translate xmlns:android="http://schemas.android.com/apk/res/android" android:duration="@android:integer/config_longAnimTime" android:fromYDelta="-100%p" android:toYDelta="0%p" /> slide_out_down.xml <?xml version="1.0" encoding="utf-8"?><translate xmlns:android="http://schemas.android.com/apk/res/android" android:duration="@android:integer/config_longAnimTime" android:fromYDelta="0%p" android:toYDelta="100%p" /> For the "slide up" animations, you should have overridden the pending transition in your onResume() method like this: protected void onResume(){ super.onResume(); overridePendingTransition(R.anim.slide_in_up, R.anim.slide_out_up);} For the "slide down" animations, do something similar in your onPause() method: protected void onPause(){ super.onPause(); overridePendingTransition(R.anim.slide_in_down, R.anim.slide_out_down);} Some tutorials suggest using the wrong life-cycle methods: onCreate() is not called every time the activity is shown onDestroy() is not called every time the activity is taken away Rather use methods that are called every time there is a screen transition: onResume() is called when the activity is shown to the 
user onPause() is called when the activity is going to be taken away For more info on these methods specifically, check the Android developer site: Pausing and Resuming an Activity When your screen is displayed, it will slide in from the bottom. When a new screen is displayed, your screen will slide back down. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23578059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2903200/']} | jdg_381380 |
stackexchange | llm_judgeable_groundtruth_similarity | 5200545 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need MathJax to recheck again all my page. I mean when the page is created it does all great. But I need to call it after window.onload to reparse the page, as its contents have changed in the meantime. How would I do such a thing?
Now provide the response and nothing else.
| See http://docs.mathjax.org/en/latest/advanced/typeset.html : If you are writing a dynamic web page where content containing mathematics may appear after MathJax has already typeset the rest of the page, then you will need to tell MathJax to look for mathematics in the page again when that new content is produced. To do that, you need to use the MathJax.Hub.Typeset() method. This will cause the preprocessors (if any were loaded) to run over the page again, and then MathJax will look for unprocessed mathematics on the page and typeset it, leaving unchanged any math that has already been typeset. You should not simply call this method directly, however. [You should instead] queue the typeset action, [using this] command: MathJax.Hub.Queue(["Typeset",MathJax.Hub]); Demo here: http://cdn.mathjax.org/mathjax/latest/test/sample-dynamic.html | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5200545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/434051/']} | jdg_381381 |
stackexchange | llm_judgeable_groundtruth_similarity | 4284 |
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a way to allow Arduino to control a motor using PWM. So far what I have gathered is that using Arduino's PWM sets the digital value of 255 to 5v and 0 to 0v. However what I am looking to do is to set the motors to stop at 2.5v, move forward as it approaches 5v and move backwards as it approaches 0v. Any suggestion on how I would go about accomplishing this will be highly appreciated.
Now provide the response and nothing else.
| You seem to have misunderstood PWM here. It's actually a digital signal which is constantly pulsing on and off very quickly (several hundred times per second). You don't actually change the output voltage at all -- it's only ever HIGH or LOW (+5v or +0v). The thing you change is how long the signal is HIGH for on each pulse. This page gives a deeper explanation: http://arduino.cc/en/Tutorial/PWM When you're controlling a DC motor, this approach is actually better than varying the voltage. The constant pulses still make the motor turn at an adjustable speed, but without losing torque. PWM is also useful for controlling the brightness of LEDs, which don't respond so well to changes in current/voltage. To address your original motor control question, there's no sensible way to change the motor's direction using the PWM value. You will need some additional circuitry which is able to reverse the polarity of the connections to the motor (meaning your PWM value just controls speed, and doesn't care about direction). An H-bridge is the usual approach for this. Here's a fairly good tutorial which covers the basic principles: http://www.instructables.com/id/H-Bridge-on-a-Breadboard Obviously you'd need to modify it for use with your Arduino project. The push switches could easily be replaced by digital signals from a couple of Arduino pins. The motor's power supply would be controlled by the PWM signal. You'll need an extra transistor for that, because you can't safely power the motor directly from an Arduino pin. | {} | {'log_upvote_score': 4, 'links': ['https://arduino.stackexchange.com/questions/4284', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/3210/']} | jdg_381382 |
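To make the division of labor in the answer concrete (PWM sets only the speed; the two H-bridge inputs set the direction), here is a small, hardware-free Python sketch of the mapping an Arduino sketch would perform. The function name and the `(in1, in2, duty)` pin encoding are illustrative assumptions, not Arduino API:

```python
def motor_command(speed):
    """Map a signed speed in [-255, 255] to H-bridge inputs.

    Returns (in1, in2, duty): the levels of the two H-bridge direction
    pins and the PWM duty value (0-255). Positive speed = forward,
    negative = reverse, zero = both direction pins low (motor stops).
    """
    duty = min(abs(speed), 255)  # clamp to the 8-bit analogWrite range
    if speed > 0:
        return (1, 0, duty)  # forward
    if speed < 0:
        return (0, 1, duty)  # reverse
    return (0, 0, 0)         # stop

print(motor_command(128))   # half speed forward -> (1, 0, 128)
print(motor_command(-300))  # clamped full reverse -> (0, 1, 255)
```

On real hardware the two returned levels would drive the H-bridge inputs via `digitalWrite` and the duty value would go to `analogWrite`, never powering the motor directly from an Arduino pin.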
stackexchange | llm_judgeable_groundtruth_similarity | 24633618 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I know what the gradient of a (mathematical) function is, so I feel like I should know what numpy.gradient does. But I don't. The documentation is not really helpful either: Return the gradient of an N-dimensional array. What is the gradient of an array? When is numpy.gradient useful?
Now provide the response and nothing else.
| Also in the documentation 1 :

>>> y = np.array([1, 2, 4, 7, 11, 16], dtype=np.float)
>>> j = np.gradient(y)
>>> j
array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ])

Gradient is defined as (change in y )/(change in x ). x , here, is the list index, so the difference between adjacent values is 1. At the boundaries, the first difference is calculated. This means that at each end of the array, the gradient given is simply the difference between the end two values (divided by 1). Away from the boundaries the gradient for a particular index is given by taking the difference between the values on either side and dividing by 2. So, the gradient of y , above, is calculated thus:

j[0] = (y[1]-y[0])/1 = (2-1)/1 = 1
j[1] = (y[2]-y[0])/2 = (4-1)/2 = 1.5
j[2] = (y[3]-y[1])/2 = (7-2)/2 = 2.5
j[3] = (y[4]-y[2])/2 = (11-4)/2 = 3.5
j[4] = (y[5]-y[3])/2 = (16-7)/2 = 4.5
j[5] = (y[5]-y[4])/1 = (16-11)/1 = 5

You could find the minima of all the absolute values in the resulting array to find the turning points of a curve, for example. 1 The array is actually called x in the example in the docs, I've changed it to y to avoid confusion. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24633618', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2564569/']} | jdg_381383 |
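The edge/central-difference rule worked through in the answer above can be reproduced without numpy at all; this pure-Python sketch (my own illustration, not part of the original answer) mirrors the hand calculations:

```python
def gradient(y):
    """One-sided differences at the ends, central differences inside
    (the same rule numpy.gradient uses with unit spacing)."""
    n = len(y)
    g = [0.0] * n
    g[0] = (y[1] - y[0]) / 1.0     # forward difference at the left edge
    g[-1] = (y[-1] - y[-2]) / 1.0  # backward difference at the right edge
    for i in range(1, n - 1):
        g[i] = (y[i + 1] - y[i - 1]) / 2.0  # central difference
    return g

print(gradient([1, 2, 4, 7, 11, 16]))  # [1.0, 1.5, 2.5, 3.5, 4.5, 5.0]
```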
stackexchange | llm_judgeable_groundtruth_similarity | 165723 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've noticed RAII has been getting lots of attention on Stackoverflow, but in my circles (mostly C++) RAII is so obvious its like asking what's a class or a destructor. So I'm really curious if that's because I'm surrounded daily, by hard-core C++ programmers, and RAII just isn't that well known in general (including C++), or if all this questioning on Stackoverflow is due to the fact that I'm now in contact with programmers that didn't grow up with C++, and in other languages people just don't use/know about RAII?
Now provide the response and nothing else.
| For people who are commenting in this thread about RAII (resource acquisition is initialisation), here's a motivational example.

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <stdexcept>
#include <string>
#include <vector>
#include <unistd.h>

class StdioFile {
    FILE* file_;
    std::string mode_;

    static FILE* fcheck(FILE* stream) {
        if (!stream)
            throw std::runtime_error("Cannot open file");
        return stream;
    }

    FILE* fdup() const {
        int dupfd(dup(fileno(file_)));
        if (dupfd == -1)
            throw std::runtime_error("Cannot dup file descriptor");
        return fdopen(dupfd, mode_.c_str());
    }

public:
    StdioFile(char const* name, char const* mode)
        : file_(fcheck(fopen(name, mode))), mode_(mode) {
    }

    StdioFile(StdioFile const& rhs)
        : file_(fcheck(rhs.fdup())), mode_(rhs.mode_) {
    }

    ~StdioFile() {
        fclose(file_);
    }

    StdioFile& operator=(StdioFile const& rhs) {
        FILE* dupstr = fcheck(rhs.fdup());
        if (fclose(file_) == EOF) {
            fclose(dupstr); // XXX ignore failed close
            throw std::runtime_error("Cannot close stream");
        }
        file_ = dupstr;
        return *this;
    }

    int read(std::vector<char>& buffer) {
        int result(fread(&buffer[0], 1, buffer.size(), file_));
        if (ferror(file_))
            throw std::runtime_error(strerror(errno));
        return result;
    }

    int write(std::vector<char> const& buffer) {
        int result(fwrite(&buffer[0], 1, buffer.size(), file_));
        if (ferror(file_))
            throw std::runtime_error(strerror(errno));
        return result;
    }
};

int main(int argc, char** argv) {
    StdioFile file(argv[1], "r");
    std::vector<char> buffer(1024);
    while (int hasRead = file.read(buffer)) {
        // process hasRead bytes, then shift them off the buffer
    }
}

Here, when a StdioFile instance is created, the resource (a file stream, in this case) is acquired; when it's destroyed, the resource is released. There is no try or finally block required; if the reading causes an exception, fclose is called automatically, because it's in the destructor. The destructor is guaranteed to be called when the function leaves main, whether normally or by exception. In this case, the file stream is cleaned up. The world is safe once again.
:-D | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/165723', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15124/']} | jdg_381384 |
stackexchange | llm_judgeable_groundtruth_similarity | 1628428 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is the keyword (or method?) prototype in jquery kind of like extension methods? i.e. all classes will have this functionality available to it going forward?
Now provide the response and nothing else.
| This is part of JavaScript and not specific to jQuery. The prototype property defines methods and properties shared by all objects of that type, e.g.

function MyClass(){}
MyClass.prototype.myMethod = function(){
    alert("hello world");
}
var myObject = new MyClass();
myObject.myMethod();

All instances of MyClass will have (share) the method myMethod() . Note that methods on the prototype do not have the same visibility as methods declared within the constructor. For example:

function Dog(name, color){
    this.name = name;
    this.getColor = function() {
        return color;
    }
}
Dog.prototype.alertName = function() {
    alert(this.name);
}
Dog.prototype.alertColor = function() {
    //alert(color);      // fails: can't see color
    //alert(this.color); // fails: this.color was never defined
    alert(this.getColor()); // succeeds
}
var fluffy = new Dog("Fluffy","brown"); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1628428', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68183/']} | jdg_381385 |
stackexchange | llm_judgeable_groundtruth_similarity | 12125277 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have successfully developed a service, in which I read files uploaded in a multipart form in Jersey. Here's an extremely simplified version of what I've been doing: @POST@Path("FileCollection")@Consumes(MediaType.MULTIPART_FORM_DATA)public Response uploadFile(@FormDataParam("file") InputStream uploadedInputStream, @FormDataParam("file") FormDataContentDisposition fileDetail) throws IOException { //handle the file} This works just fine but I've been given a new requirement. In addition to the file I'm uploading, I have to handle an arbitrary number of resources. Let's assume these are image files. I figured I'd just provide the client with a form with one input for the file, one input for the first image and a button to allow adding more inputs to the form (using AJAX or simply plain JavaScript). <form action="blahblahblah" method="post" enctype="multipart/form-data"> <input type="file" name="file" /> <input type="file" name="image" /> <input type="button" value="add another image" /> <input type="submit" /></form> So the user can append the form with more inputs for images, like this: <form action="blahblahblah" method="post" enctype="multipart/form-data"> <input type="file" name="file" /> <input type="file" name="image" /> <input type="file" name="image" /> <input type="file" name="image" /> <input type="button" value="add another image" /> <input type="submit" /></form> I hoped it would be simple enough to read the fields with the same name as a collection. I've done it successfully with text inputs in MVC .NET and I thought it wouldn't be harder in Jersey. It turns out I was wrong. Having found no tutorials on the subject, I started experimenting. In order to see how to do it, I dumbed the problem down to simple text inputs. 
<form action="blahblabhblah" method="post" enctype="multipart/form-data"> <fieldset> <legend>Multiple inputs with the same name</legend> <input type="text" name="test" /> <input type="text" name="test" /> <input type="text" name="test" /> <input type="text" name="test" /> <input type="submit" value="Upload It" /> </fieldset></form> Obviously, I needed to have some sort of collection as a parameter to my method. Here's what I tried, grouped by collection type. Array At first, I checked whether Jersey was smart enough to handle a simple array: @POST@Path("FileCollection")@Consumes(MediaType.MULTIPART_FORM_DATA)public Response uploadFile(@FormDataParam("test") String[] inputs) { //handle the request} but the array wasn't injected as expected. MultiValuedMap Having failed miserably, I remembered that MultiValuedMap objects could be handled out of the box. @POST@Path("FileCollection")@Consumes(MediaType.MULTIPART_FORM_DATA)public Response uploadFile(MultiValuedMap<String, String> formData) { //handle the request} but it doesn't work either. This time, I got an exception SEVERE: A message body reader for Java class javax.ws.rs.core.MultivaluedMap, and Java type javax.ws.rs.core.MultivaluedMap<java.lang.String, java.lang.String>, and MIME media type multipart/form-data; boundary=----WebKitFormBoundaryxgxeXiWk62fcLALU was not found. I was told that this exception could be gotten rid of by including the mimepull library so I added the following dependency to my pom: <dependency> <groupId>org.jvnet</groupId> <artifactId>mimepull</artifactId> <version>1.3</version> </dependency> Unfortunately the problem persists. It's probably a matter of choosing the right body reader and using different parameters for the generic. I'm not sure how to do this. I want to consume both file and text inputs, as well as some others (mostly Long values and custom parameter classes). FormDataMultipart After some more research, I found the FormDataMultiPart class. 
I've successfully used it to extract the string values from my form @POST@Path("upload2")@Consumes(MediaType.MULTIPART_FORM_DATA)public Response uploadMultipart(FormDataMultiPart multiPart){ List<FormDataBodyPart> fields = multiPart.getFields("test"); System.out.println("Name\tValue"); for(FormDataBodyPart field : fields){ System.out.println(field.getName() + "\t" + field.getValue()); //handle the values } //prepare the response} The problem is, this is a solution to the simplified version of my problem. While I know that every single parameter injected by Jersey is created by parsing a string at some point (no wonder, it's HTTP after all) and I have some experience writing my own parameter classes, I don't really how to convert these fields to InputStream or File instances for further processing. Therefore, before diving into Jersey source code to see how these objects are created, I decided to ask here whether there is an easier way to read a set (of unknown size) of files. Do you know how to solve this conundrum?
Now provide the response and nothing else.
| I have found the solution by following the example with FormDataMultipart . It turns out I was very close to the answer. The FormDataBodyPart class provides a method that allows its user to read the value as InputStream (or theoretically, any other class for which a message body reader is present). Here's the final solution: Form The form remains unchanged. I have a couple of fields with the same name, in which I can place files. It's possible to use both multiple form inputs (you want these when uploading many files from a directory) and numerous inputs that share a name (a flexible way to upload an unspecified number of files from different locations). It's also possible to append the form with more inputs using JavaScript.

<form action="/files" method="post" enctype="multipart/form-data">
    <fieldset>
        <legend>Multiple inputs with the same name</legend>
        <input type="file" name="test" multiple="multiple"/>
        <input type="file" name="test" />
        <input type="file" name="test" />
    </fieldset>
    <input type="submit" value="Upload It" />
</form>

Service - using FormDataMultipart Here's a simplified method that reads a collection of files from a multipart form. All inputs with the same name are assigned to a List and their values are converted to InputStream using the getValueAs method of FormDataBodyPart . Once you have these files as InputStream instances, it's easy to do almost anything with them.

@POST
@Path("files")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response uploadMultipart(FormDataMultiPart multiPart) throws IOException {
    List<FormDataBodyPart> fields = multiPart.getFields("test");
    for (FormDataBodyPart field : fields) {
        handleInputStream(field.getValueAs(InputStream.class));
    }
    // prepare the response
}

private void handleInputStream(InputStream is) {
    // read the stream any way you want
}
 | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12125277', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1407656/']} | jdg_381386 |
stackexchange | llm_judgeable_groundtruth_similarity | 140939 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The other day my dad asked me a question that I would have never expected from him. "How can I learn C++?" My dad is turning 56 this year and computers are a distant concept for him. He doesn't know how to use a phone very well besides calling numbers (no speed dial or contacts); though he has started to learn computers a little better - to the point that he knows how to open the internet (in Windows) and browse around (and has successfully completed several job applications entirely on his own online, of which he was offered positions too). But still, these are too narrow-windowed experiences to mean much, really. While he may not have the background, my dad knows how to read. And I mean reading as a skill, not just an ability. He has little to no college education (financial problems, family, etc.) and was fortunate enough to finish high school, but still taught himself to become a master electrician and has been one for almost 30 years now. He did the same with guitar, learning to play at a very professional level and has been praised for his skill. In high school, he picked up a weight lifting book - and was the only person in his high school at the time to qualify officially as an "athlete" by national standards. In all cases, he just needed something to read. Something to teach him. He absorbs information like a sponge. I have no doubt in my dad's motivation or capability of doing this, so my general goal is simply: Get my dad into the world of computers, and get him on the road to programming. I strongly believe that once I get him through the fundamentals, his drive and reading skill will keep him going on this own. So I'm asking you all: where should I start with all this? And what are the best resources out there? Should I get him to start Linux instead of Windows? Is C++ a bad idea? Remember, he needs to (IMO) learn computers first, and then get that first grasp (the "Hello world" experience) of programming. 
For money's sake and at top preference, I'd like free online resources that he can read, but by all means any good suggestions in print or paid-for-online are welcome (that I could possibly look into later to purchase). And also, I intend to start him off with C++ (no Python, Java, etc.), because I know it the best and will be able to help him along the way with code. (I have minimal knowledge right now in other languages). Edit: I'm getting a lot of persistent suggestions to use Python. The only reason I wanted to do C++ is that I KNOW it and can be THERE when my dad needs help. My VERY FIRST exposure to programming ever was Java. I learned Java, and I got good at it. I open to other suggestions, but please provide an effective application of your suggestions. EDIT #2: I understand my approach/thinking/knowledge could be lacking here. I'm a sophomore level undergraduate CS major. If you don't agree with anything in my post, tell me why - give me ideas, information - that's why I'm asking in the first place. To narrow down my general goal to specific reachable goals.
Now provide the response and nothing else.
| First off, even if you do personally know C++ well, it's a horrible, horrible first language for beginners, and I cannot recommend strongly enough to keep him as far away from it as possible if he wants to successfully learn to write code. Learning programming is not about a language, it's about learning to express abstract requirements in formal logic. C++ makes that difficult to focus on because you're constantly tripping over language-specific issues. If he wants to learn to express ideas in formal logic, go for something that makes that simple. My choices would be Pascal (for application-level development) or Python (for script-level work) because they're both designed with being intuitive and easy to learn as core priorities. Once he actually knows the principles of how to write code, then if he really cares about C++ specifically, he can pick it up. For the same reason, don't start him on Linux instead of Windows, because then he has two problems to focus on: learning Linux and learning coding, at the same time! Keep the distractions to a minimum. As for actual programming resources, it's easy enough to find tutorials online, but I've found that the best way to learn to program is by programming. Find something that you'd like to do, and then start figuring out how to do it. Several years ago, I saw a game and I thought, "I could do this, and I could do it better." So I fired up Delphi and started figuring out the principles involved. I'm still learning new stuff from that personal project, and along the way, I've become a really good coder! So try that; ask your dad what he'd like to do if he could program--and make sure it's possible, and reasonably small in scope since he has no experience yet--and then sit down with him and start figuring out with him how to do it. And if he has any questions, get him a StackOverflow account. 
:) | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/140939', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/46053/']} | jdg_381387 |
stackexchange | llm_judgeable_groundtruth_similarity | 51955357 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a single activity Android application. I have MainActivity (only activity) with BottomNavigationView, three top level fragments and some child fragments. My requirement is whenever the screen is showing top level fragments, bottom navigation should be visible such that switching is possible. But when I'm viewing any of the child fragments, bottom navigation should be hidden.Is there any out-of-box way using the Navigation component or need to change the visibility manually ?
Now provide the response and nothing else.
| Update (Navigation component 1.0) As of Navigation component 1.0.0-alpha08 , method addOnNavigatedListener(controller: NavController, destination: NavDestination) was changed to addOnDestinationChangedListener(controller: NavController, destination: NavDestination, arguments: Bundle) . Its behavior was also slightly changed (it is also called if the destination's arguments change). Old Answer You can use NavController.OnNavigatedListener to achieve this behavior (set it in Activity onCreate):

findNavController(R.id.container).addOnNavigatedListener { _, destination ->
    when (destination.id) {
        R.id.dashboardFragment -> showBottomNavigation()
        else -> hideBottomNavigation()
    }
}

private fun hideBottomNavigation() {
    // bottom_navigation is BottomNavigationView
    with(bottom_navigation) {
        if (visibility == View.VISIBLE && alpha == 1f) {
            animate()
                .alpha(0f)
                .withEndAction { visibility = View.GONE }
                .duration = EXIT_DURATION
        }
    }
}

private fun showBottomNavigation() {
    // bottom_navigation is BottomNavigationView
    with(bottom_navigation) {
        visibility = View.VISIBLE
        animate()
            .alpha(1f)
            .duration = ENTER_DURATION
    }
}
 | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51955357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10256445/']} | jdg_381388 |
stackexchange | llm_judgeable_groundtruth_similarity | 34465 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is this statment always true?$$(a)\subsetneq (b)\Rightarrow \text{Ann}_R b\subsetneq \text{Ann}_R a$$ If it is false, can you please provide an example? Also what is the largest class of rings that make the above statement is true. UPDATE: to make it more interesting, let $a,b$ be zero divisors. -Thanks-
Now provide the response and nothing else.
| Let's suppose our ring $R$ is reduced, i.e. has no non-zero nilpotents (and commutative with identity). Then the annihilator of any element is a radical ideal (if $x^n a = 0,$ then $(xa)^n = 0,$ thus $xa = 0$). Hence the annihilator of $a$ is precisely the intersection of those prime ideals $\mathfrak p$ in the support of $a$, i.e. such that $a$ has non-zero image in the localization $R_{\mathfrak p}$. Now suppose also that $R$ is Noetherian. Then the support of any element is a union of irreducible components of Spec $R$. (This is a general property of reduced Noetherian rings.) So your question amounts to asking: under what conditions on $R$ (now assumed Noetherian and reduced) does one have that any two elements $a$ and $b$ supported on the same set of components of Spec $R$ generate the same ideal. Since $a$ and $a^2$ always have the same support, one finds that $a$ and $a^2$ would have to generate the same ideal for all $a \in R$. From this it is not hard to deduce that $R$ is forced to be a finite product of fields. So at least for reduced Noetherian rings, your desired property holds only if the ring is a product of fields. (It is certainly a very restrictive property.) General remark: One can often gain insight into these kinds of questions by working geometrically on the Spec of the ring, where you can "see" the meaning of the various algebraic concepts in geometric terms. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/34465', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9077/']} | jdg_381389 |
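The final deduction in the answer above ("$a$ and $a^2$ generate the same ideal for all $a$ forces $R$ to be a finite product of fields") can be unpacked with a standard one-line computation, sketched here for completeness:

```latex
(a) = (a^2) \implies a = a^2 r \ \text{for some}\ r \in R.
\ \text{Set}\ e = ar:\quad
e^2 = a^2 r^2 = (a^2 r) r = a r = e,
\qquad a = a^2 r = ae \implies (a) = (e).
```

So every principal ideal is generated by an idempotent, i.e. $R$ is von Neumann regular, and a commutative Noetherian von Neumann regular ring is a finite product of fields.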
stackexchange | llm_judgeable_groundtruth_similarity | 64679 |
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that the organization that distributes IP addresses decided to assign 192.168.xxx, 172.xxx and 10.xxx to private networks. However, I thought that private networks have their own address space so shouldn't a private network be able to assign any values in the IP address space and not be limited to those values? Assuming IPV4 CIDR notation
Now provide the response and nothing else.
| RFC 1918 allocates the following for private address space: 10.0.0.0/8 172.16.0.0/12 (not 172.0.0.0/8 !!!) 192.168.0.0/16 While those are private, network engineers often use NAT to allow users on those nets to reach internet resources. If you used 8.0.0.0/8 for private address space (for example), you would not be able to reach the google address server 8.8.8.8 , because you would have an internal route for that block. In addition, even if your “private” servers did not need to reach the internet at all, if google tried to reach your public webserver, and your public webserver had your internal routing table (with your “private” 8-net route), the replies would not get back to google. So use the RFC1918 private address space and save yourself a bunch of trouble. | {} | {'log_upvote_score': 7, 'links': ['https://networkengineering.stackexchange.com/questions/64679', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/64551/']} | jdg_381390 |
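As a quick sanity check on the three blocks listed in the answer, Python's standard-library ipaddress module already knows the RFC 1918 ranges. Note that is_private also flags other IANA special-use blocks (loopback, link-local, etc.), but for ordinary unicast addresses like these it matches RFC 1918:

```python
import ipaddress

for addr in ["10.1.2.3", "172.16.0.1", "172.32.0.1", "192.168.5.9", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.1.2.3 True
# 172.16.0.1 True
# 172.32.0.1 False  (outside 172.16.0.0/12 -- "172.xxx" is NOT all private)
# 192.168.5.9 True
# 8.8.8.8 False
```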
stackexchange | llm_judgeable_groundtruth_similarity | 289271 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The following admits of many (easy) proofs, but I am seeing no purely "bijective" argument: $$\sum_{j=n}^N \binom{j}{n} = \binom{N+1}{n+1}.$$ Any ideas?
Now provide the response and nothing else.
| Santa Claus has $N+1$ reindeer whose noses are of varying redness. Every year, Santa needs $n+1$ reindeer to pull his sleigh. The reddest-nosed reindeer always leads the sleigh. The way Santa chooses the reindeer is as follows. First, Santa chooses one reindeer to lead the sleigh; call it the $(j+1)$th reindeer (counting up from the least red). Then out of the $j$ reindeer whose noses are less red, Santa chooses $n$ more reindeer to complete the team. One year, Santa learns (on MistletOverflow) that he can simply choose $n+1$ reindeer, and then see which of them has the reddest nose. Counting the possible teams both ways gives the identity: summing over the leader's position yields $\sum_{j=n}^N \binom{j}{n}$, while choosing the whole team at once yields $\binom{N+1}{n+1}$. | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/289271', 'https://mathoverflow.net', 'https://mathoverflow.net/users/11142/']} | jdg_381391 |
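Addendum: the double count in the story above can be checked numerically with the standard library (a quick sketch):

```python
from math import comb

# Left side: sum over which reindeer (the (j+1)-th in redness order) leads,
# times the ways to pick n of the j less-red ones.
# Right side: just pick n+1 of the N+1 reindeer outright.
for N in range(1, 30):
    for n in range(0, N + 1):
        assert sum(comb(j, n) for j in range(n, N + 1)) == comb(N + 1, n + 1)
print("identity checked for all 0 <= n <= N < 30")
```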
stackexchange | llm_judgeable_groundtruth_similarity | 44917721 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I like to test a component inside Connect in a Redux app: this.component = TestUtils.renderIntoDocument(<Provider store={store}><Header path={this.path} /></Provider>); I have no idea about how to access Header inside Provider...(since I can not stop through in a debugger when running the jest from CLI. So when I tried to get a child inside Header const path = findDOMNode(self.component.refs.pathElemSpan); console.log("path="+path) I got undefined on path Any suggestion? thanks
Now provide the response and nothing else.
| Use enzyme, you have a bunch of nice selectors to navigate through your virtual DOM kingdom. :) http://airbnb.io/enzyme/ A super simple test to access your component:

    import { mount } from 'enzyme'
    import Header from './header'

    // ... in your test
    const wrapper = mount(<Provider store={store}><Header path='foo' /></Provider>)
    const header = wrapper.find(Header).first()
    expect(header.exists()).toBe(true)
    // you can even check out the props
    expect(header.prop('path')).toBe('foo')

This example is using mount but you also have the option to do shallow rendering. I highly recommend you grab something to drink and read the docs a little. ;) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44917721', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6214294/']} | jdg_381392 |
stackexchange | llm_judgeable_groundtruth_similarity | 10650645 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have about 50,000 data points in 3D on which I have run scipy.spatial.Delaunay from the new scipy (I'm using 0.10) which gives me a very useful triangulation. Based on: http://en.wikipedia.org/wiki/Delaunay_triangulation (section "Relationship with the Voronoi diagram") ...I was wondering if there is an easy way to get to the "dual graph" of this triangulation, which is the Voronoi Tesselation. Any clues? My searching around on this seems to show no pre-built in scipy functions, which I find almost strange! Thanks,Edward
Now provide the response and nothing else.
| The adjacency information can be found in the neighbors attribute of the Delaunay object. Unfortunately, the code does not expose the circumcenters to the user at the moment, so you'll have to recompute those yourself. Also, the Voronoi edges that extend to infinity are not directly obtained in this way. It's still probably possible, but needs some more thinking.

    import numpy as np
    from scipy.spatial import Delaunay

    points = np.random.rand(30, 2)
    tri = Delaunay(points)
    p = tri.points[tri.vertices]  # note: tri.vertices is called tri.simplices in newer SciPy

    # Triangle vertices
    A = p[:,0,:].T
    B = p[:,1,:].T
    C = p[:,2,:].T

    # See http://en.wikipedia.org/wiki/Circumscribed_circle#Circumscribed_circles_of_triangles
    # The following is just a direct transcription of the formula there
    a = A - C
    b = B - C

    def dot2(u, v):
        return u[0]*v[0] + u[1]*v[1]

    def cross2(u, v, w):
        """u x (v x w)"""
        return dot2(u, w)*v - dot2(u, v)*w

    def ncross2(u, v):
        """|| u x v ||^2"""
        return sq2(u)*sq2(v) - dot2(u, v)**2

    def sq2(u):
        return dot2(u, u)

    cc = cross2(sq2(a) * b - sq2(b) * a, a, b) / (2*ncross2(a, b)) + C

    # Grab the Voronoi edges
    vc = cc[:,tri.neighbors]
    vc[:,tri.neighbors == -1] = np.nan  # edges at infinity, plotting those would need more work...

    lines = []
    lines.extend(zip(cc.T, vc[:,:,0].T))
    lines.extend(zip(cc.T, vc[:,:,1].T))
    lines.extend(zip(cc.T, vc[:,:,2].T))

    # Plot it
    import matplotlib.pyplot as plt
    from matplotlib.collections import LineCollection

    lines = LineCollection(lines, edgecolor='k')
    # plt.hold(1) from the original is no longer needed (removed in modern Matplotlib)
    plt.plot(points[:,0], points[:,1], '.')
    plt.plot(cc[0], cc[1], '*')
    plt.gca().add_collection(lines)
    plt.axis('equal')
    plt.xlim(-0.1, 1.1)
    plt.ylim(-0.1, 1.1)
    plt.show()

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10650645', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1243969/']} | jdg_381393 |
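Addendum: the circumcenter arithmetic in the snippet above can be sanity-checked without NumPy or SciPy. This hypothetical pure-Python version of the same formula reproduces the known circumcenter of a right triangle (the midpoint of its hypotenuse):

```python
# Pure-Python check of the circumcenter transcription: for the right
# triangle (0,0), (4,0), (0,6) the circumcenter is the hypotenuse
# midpoint (2, 3).
def dot2(u, v):
    return u[0]*v[0] + u[1]*v[1]

def sq2(u):
    return dot2(u, u)

def circumcenter(A, B, C):
    a = (A[0] - C[0], A[1] - C[1])
    b = (B[0] - C[0], B[1] - C[1])
    # u = |a|^2 b - |b|^2 a, then cc = ((u.b) a - (u.a) b) / (2 ||a x b||^2) + C
    u = (sq2(a)*b[0] - sq2(b)*a[0], sq2(a)*b[1] - sq2(b)*a[1])
    num = (dot2(u, b)*a[0] - dot2(u, a)*b[0],
           dot2(u, b)*a[1] - dot2(u, a)*b[1])
    d = 2 * (sq2(a)*sq2(b) - dot2(a, b)**2)
    return (num[0]/d + C[0], num[1]/d + C[1])

print(circumcenter((0, 0), (4, 0), (0, 6)))  # (2.0, 3.0)
```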
stackexchange | llm_judgeable_groundtruth_similarity | 4520620 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anybody, given example below, make to me an explanation how FLOAT works in mySQL? I know that float is approximative, but really, such a difference? And there is only 9 digits, so it is not overflow problem, isn't it? mysql> create table t(f FLOAT(15,2), db DOUBLE);mysql> insert into t(f,db) VALUES (512659663, 512659663);mysql> select * from t;+--------------+-----------+| f | db |+--------------+-----------+| 512659648.00 | 512659663 |+--------------+-----------+ (mysql Ver 14.14 Distrib 5.1.44, for Win32 (ia32) on Windows XP)
Now provide the response and nothing else.
| FLOAT is a 32 -bit type with, as the name suggests, floating point. The higher is the value, the less is the absolute precision. 512659648 is large enough to introduce errors in tens. Update: In IEEE-754 (that's what FLOAT is), the data are stored in 32 bits: 1 -bit sign, 8 -bit binary exponent and 23 -bit significand. The exponent shows you the closest least power of 2 ( 28 in your case, or 268435456 ). The significand is a binary fraction. It can store numbers from 1 to 2 with precision up to 2^-23 . In your case, it's 1.11101000111010010000110 , or ~ 1.9098060 in decimal notation. The number is calculated as a product of the exponent and the significand. Given all this, precision is 2 ^ (28 - 23) = 2 ^ 5 = 32 for the numbers of that order ( 2^28 to 2^29 ). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4520620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129289/']} | jdg_381394 |
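Addendum: you can reproduce the exact value MySQL stored by round-tripping the number through a 32-bit IEEE-754 float, e.g. with Python's standard struct module (a sketch; MySQL's FLOAT is the same single-precision format, DOUBLE the 64-bit one):

```python
import struct

# Round-trip 512659663 through a 32-bit IEEE-754 float (MySQL FLOAT)
# and through a 64-bit double (MySQL DOUBLE).
as_float32 = struct.unpack('<f', struct.pack('<f', 512659663))[0]
as_float64 = struct.unpack('<d', struct.pack('<d', 512659663))[0]
print(as_float32)  # 512659648.0 -- the same value MySQL stored
print(as_float64)  # 512659663.0 -- a double is still exact at this magnitude
```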
stackexchange | llm_judgeable_groundtruth_similarity | 40259 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Being a non signal processing science student I have limited understanding of the concepts. I have a continuous periodic bearing faulty signal (with time amplitudes) which are sampled at $12\textrm{ kHz}$ and $48\textrm{ kHz}$ frequencies. I have utilized some machine learning techniques (Convolutional Neural Network) to classify faulty signals to the non faulty signals. When I am using $12\textrm{ kHz}$ I am able to achieve a classification accuracy $97 \pm 1.2 \%$ accuracy. Similarly I am able to achieve accuracy of $95\%$ when I applied the same technique on the same signal but sampled at $48\textrm{ kHz}$ despite the recording made at same RPM, load, and recording angle with the sensor. What could be the reason for this increased rate of misclassification? Are there any techniques to spot differences in the signal? Are higher resolution signals prone to higher noise? Details of the signal can be seen here , in chapter 3.
Now provide the response and nothing else.
| Sampling at a higher frequency will give you more effective number of bits (ENOB), up to the limits of the spurious free dynamic range of the Analog to Digital Converter (ADC) you are using (as well as other factors such as the analog input bandwidth of the ADC). However there are some important aspects to understand when doing this that I will detail further. This is due to the general nature of quantization noise, which under conditions of sampling a signal that is uncorrelated to the sampling clock is well approximated as a white (in frequency) uniform (in magnitude) noise distribution. Further, the Signal to Noise Ratio (SNR) of a full scale real sine-wave will be well approximated as: $$SNR = 6.02 \text{ dB/bit} + 1.76 \text{dB}$$ For example, a perfect 12 bit ADC samping a full scale sine wave will have an SNR of $6.02\times 12+1.76 = 74$ dB. By using a full scale sine wave, we establish a consistent reference line from which we can determine the total noise power due to quantization. Within reason, that noise power remains the same even as the sine wave amplitude is reduced, or when we use signals that are composites of multiple sine waves (meaning via the Fourier Series Expansion, any general signal). This classic formula is derived from the uniform distribution of the quantization noise, as for any uniform distribution the variance is $\frac{A^2}{12}$ , where A is the width of the distribution. This relationship and how we arrive at the formula above is detailed in the figure below, comparing the histogram and variance for a full-scale sine wave ( $\sigma_s^2$ ), to the histogram and variance for the quantization noise ( $\sigma_N^2$ ), where $\Delta$ is a quantization level and b is the number of bits. Therefore the sinewave has a peak to peak amplitude of $2^b\Delta$ . 
You will see that taking the square root of the expression for the variance of the sine wave, $\frac{(2^b\Delta)^2}{8}$, gives the familiar $\frac{V_p}{\sqrt{2}}$ as the standard deviation of a sine wave at peak amplitude $V_p$. Thus we have the variance of the signal divided by the variance of the noise as the SNR. Further as mentioned earlier, this noise level due to quantization is well approximated as a white noise process when the sampling rate is uncorrelated to the input (which occurs with incommensurate sampling with a sufficient number of bits and the input signal is fast enough that it is spanning multiple quantization levels from sample to sample, and incommensurate sampling means sampling with a clock that is not an integer multiple relationship in frequency with the input). As a white noise process in our digital sampled spectrum, the quantization noise power will be spread evenly from a frequency of 0 (DC) to half the sampling rate ( $f_s/2$ ) for a real signal, or $-f_s/2$ to $+f_s/2$ for a complex signal. In a perfect ADC, the total variance due to quantization remains the same independent of the sampling rate (it is proportional to the magnitude of the quantization level, which is independent of sampling rate). To see this clearly, consider the standard deviation of a sine wave which we reminded ourselves earlier is $\frac{V_p}{\sqrt{2}}$; no matter how fast we sample it as long as we sample it sufficiently to meet Nyquist's criteria, the same standard deviation will result. Notice that it has nothing to do with the sampling rate itself. Similarly the standard deviation and variance of the quantization noise is independent of frequency, but as long as each sample of quantization noise is independent and uncorrelated from each previous sample, then the noise is a white noise process meaning that it is spread evenly across our digital frequency range. If we raise the sampling rate, the noise density goes down. 
If we subsequently filter since our bandwidth of interest is lower, the total noise will go down. Specifically if you filter away half the spectrum, the noise will go down by 2 (3 dB). Filter 1/4 of the spectrum and the noise goes down by 6 dB which is equivalent to gaining 1 more bit of precision! Thus the formula for SNR that accounts for oversampling is given as:

$$SNR = 6.02 \text{ dB/bit} \times b + 1.76 \text{ dB} + 10\log_{10}(N) \text{ dB}$$

where $b$ is the number of bits and $N$ is the oversampling ratio, i.e. the sampling rate divided by twice the signal bandwidth of interest.

Actual ADC's in practice will have limitations including non-linearities, analog input bandwidth, aperture uncertainty etc that will limit how much we can oversample, and how many effective bits can be achieved. The analog input bandwidth will limit the maximum input frequency we can effectively sample. The non-linearities will lead to "spurs" which are correlated frequency tones that will not be spread out and therefore will not benefit from the same noise processing gain we saw earlier with the white quantization noise model. These spurs are quantified on ADC datasheets as the spurious-free dynamic range (SFDR). In practice I refer to the SFDR and usually take advantage of oversampling until the predicted quantization noise is on level with the SFDR, at which point if the strongest spur happens to be in band, there will be no further increase in SNR. To detail further I would need to refer to the specific design in more detail. All noise contributions are captured nicely in the effective number of bits (ENOB) specification also given on ADC data sheets. Basically the actual total ADC noise expected is quantified by reversing the SNR equation that I first gave to come up with the equivalent number of bits a perfect ADC would provide. It will always be less than the actual number of bits due to these degradation sources. Importantly, it will also go down as the sampling rate goes up so there will be a diminishing point of return from oversampling. For example, consider an actual ADC which has a specified ENOB of 11.3 bits and SFDR of 83 dB at 100 MSPS sampling rate. 
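The arithmetic for that worked example can be sketched numerically (a sketch assuming the standard formulas SNR = 6.02·b + 1.76 dB and processing gain = 10·log10(N); the 22 dB target comes from the gap between the -61 dBm noise floor and the 83 dB SFDR worked out in the following paragraph):

```python
from math import log10

# Example ADC from the text: ENOB 11.3 bits, SFDR 83 dB, 100 MSPS.
enob = 11.3
snr_full_scale = 6.02 * enob + 1.76        # ~69.8 dB for a full-scale sine
gain_db = 22.0                             # noise floor (-61 dBm) down to SFDR limit
n_required = 10 ** (gain_db / 10)          # oversampling ratio, ~158.5
extra_bits = gain_db / 6.02                # ~3.65 -> "3.7 additional bits"
print(round(snr_full_scale, 1), round(n_required, 1), round(extra_bits, 2))
```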
11.3 ENOB is an SNR of 69.8 dB (70 dB) for a full scale sine wave. The actual signal sampled will likely be at a lower input level so as not to clip, but by knowing the absolute power level of a full scale sinewave, we now know the absolute power level of the total ADC noise. If for example the full scale sine wave that results in the maximum SFDR and ENOB is +9 dBm (also note that this level with best performance is typically 1-3 dB lower than the actual full scale where a sine wave would start to clip!), then the total ADC noise power will be +9dBm-70 dB = -61 dBm. Since the SFDR is 83 dB, then we can easily expect to gain up to that limit by oversampling (but not more if the spur is in our final band of interest). In order to achieve this 22 dB gain, the oversampling ratio N would need to be at least $N= 10^{\frac{83-61}{10}} = 158.5$ Therefore if our actual real signal bandwidth of interest was 50MHz/158.5 = 315.5 KHz, we could sample at 100 MHz and gain 22 dB or 3.7 additional bits from the oversampling, for a total ENOB of 11.3+ 3.7 = 15 bits. As a final note, know that Sigma Delta ADC architectures use feedback and noise shaping to achieve a much better increase in number of bits from oversampling than what I described here of what can be achieved with traditional ADC's. We saw an increase of 3dB/octave (every time we doubled the frequency we gained 3 dB in SNR). A simple first order Sigma Delta ADC has a gain of 9dB/octave, while a 3rd order Sigma Delta has a gain of 21 dB/octave! (Fifth order Sigma Delta's are not uncommmon!). Also see related responses at How do you simultaneously undersample and oversample? 
Oversampling while maintaining noise PSD
How to choose FFT depth for ADC performance analysis (SINAD, ENOB)
How increasing the Signal to Quantization noise increases the resolution of ADC | {} | {'log_upvote_score': 6, 'links': ['https://dsp.stackexchange.com/questions/40259', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/21205/']} | jdg_381395 |
stackexchange | llm_judgeable_groundtruth_similarity | 34145 |
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently a sophomore interested in TCS. Recently I've been somewhat bothered by the difficulty of getting into PhD programs in the future since the selection is indeed very competitive. I looked through the webpages of some top CS school and found that their current grad students mostly also come from top undergrad schools and many graduated summa cum laude, i.e. with highest honors(top GPA, I guess?). I've also been advised by a PhD student that by the time of application, I'd better figure out a specific area in theory(algorithms, complexity, crypto, etc) to write in my PS. Being interested in a wide range of topics would put me at disadvantage. Though I'm at a top 5 university, I started learning CS only after getting into college. I fell in love with theory right away but I felt really left behind by many around me. I have a relatively good GPA, but not stellar enough. I've also been doing research in a few diffenrent topics but haven't published a paper yet. And currently I'm still widely interested in graph theory, complexity, cryptography and even quantum computing. I know I still have two years to go. But sometimes I felt really pushed by the scenario in academia nowadays. I believe that many students are in the same boat as I am. I wonder roughly what is the difficulty of getting into top graduate programs in theoretical cs(MIT,Berkeley,Stanford,CMU,Princeton etc.)nowadays? Does GPA really play no role once you have reached a "bar"(like 3.7/4), as many said? Do I have to decide on a specific area just after 3 years of undergrad studies? And any other advice related to my situation(research, application, anything) would be appreciated.
Now provide the response and nothing else.
| Disclaimer: I am just a student, so I cannot speak authoritatively about admissions. I haven't been on an admissions committee, but I can tell you my understanding of how it works. A PhD admissions committee looks at an application and asks "will this person do well in our PhD program?" Doing a PhD is first and foremost about doing novel research. So the question is: "Can this person do research?" Your application has several parts, which together need to convince the committee that the answer is yes. Below are my thoughts on the various aspects. Recommendation Letters: These are extremely important. A good recommendation letter comes from someone who knows you well in a research setting and who is an established researcher. Such a person should be in a good position to judge your ability to do research and their opinion will carry a lot of weight. A strong recommendation letter will talk about research-level work you have done and speak about your skills as a researcher, particularly your ability to be self-directed. A letter that only talks about how you did in class is not strong. To get strong recommendation letters, you should try to do research projects with professors either at your college or at a different college through an REU program or similar. If you do well, those professors will be able to write strong letters for you. (Make sure to work with research-active professors. e.g. work with a professor that has current PhD students. The more well-known your letter writer is, the more weight their recommendation carries.) Publications: If you already have written research articles, that is a clear sign that you can do research. Don't worry too much about this. If your recommendation letters say that the work you did is of publishable quality, that already says a lot, even if it hasn't been published yet. GPA and Transcript: Having a good GPA and transcript matters, but is not decisive on its own. 
It shows that you are hardworking and that you have the appropriate background, which are necessary attributes for a PhD student. More important than overall GPA is which courses you took and how well you did in them. Getting a B in art history will not sink your Theoretical Computer Science PhD application, whereas a B in Algorithms or Linear Algebra is a red flag. Taking more computer science and math courses, particularly at higher levels, and getting good grades makes your application look solid. Your College: Being an undergraduate at a top research university is an advantage because it gives you exposure to a research environment and gives you opportunities to do research, which leads to strong recommendation letters. If you are at, say, a liberal arts college, you will need to be more proactive about finding research opportunities (e.g. by doing a REU at a research university over the summer). Admissions committees will not discount your application based on your undergraduate institution. Personal Statement: The personal statement should convince the committee that you have the right motivation for doing a PhD. In particular, it should tell them why you want to do research. Do not talk about what you did when you were 10 years old --- that's completely irrelevant. Talk about your recent research experiences; talk about scientific questions that excite you. You do not need to choose a subfield when applying. However, it looks good if you can discuss some subfields that interest you; it shows that you have thought seriously about what you want to do. Read up on the professors at the school you are applying to. If one of them does something that excites you, mention it (and why). GRE scores: Everyone knows the GRE is flawed, but it is the only standardized information available. Top programs receive far too many applications, so they may use the GRE to "filter" applications. As long as your GRE score is passable, it is not a problem. 
Writing Sample: A few places ask for a writing sample. They want to know if you can write clearly --- this is important when writing research articles. Just make sure this has been appropriately proofread. Bad spelling and grammar or rambling are a bad look. (This applies to your whole application!) CV: This should be a concise overview of your skills and experiences. Web Presence: Some professors will google you. So your web presence can matter. You can take control of your web presence by setting up a homepage. This doesn't need to be fancy. As a bonus, you can set up analytics and see who is googling you. The fact that you are thinking about this as a sophomore is a promising sign that you are on the right track. Don't worry too much! Get some research experience --- this is also useful to you, as it gives you useful experience and helps you figure out whether you will enjoy doing research full time. Otherwise, just work hard and take courses that interest you. One final piece of advice: Ask people to read over your application, e.g. the people writing your recommendation letters. They can tell you if you have missed something. TL;DR: Do research! | {} | {'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/34145', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/34813/']} | jdg_381396 |
stackexchange | llm_judgeable_groundtruth_similarity | 312266 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Transverse and longitudinal waves are classified according to the direction of propagation of wave motion and direction of oscillations of particles of the medium. But in the case of non-mechanical waves there is no medium, so there are no particles of a medium to oscillate. In this case how does one classify longitudinal and transverse waves?
Now provide the response and nothing else.
| A symmetry-protected topological phase has a certain symmetry. Any Hamiltonian in this phase can be adiabatically deformed (i.e. without closing the gap) into a Hamiltonian whose ground state is a product state, but the symmetry must be explicitly broken during the deformation process and then restored at the end. As a visually analogy, there is a "wall" crossing the submanifold of parameter space that respects the symmetry, and the wall separates the SPT phase from the totally trivial phase with a product ground state. But if you are allowed to temporarily break the symmetry, then you can leave the submanifold and "jump over the wall" before ending up back in the submanifold and restoring the symmetry. A Hamiltonian in a topologically ordered phase cannot be deformed into a Hamiltonian with a product ground state by any means whatsoever (without closing the gap). Here, the "wall" crosses the entire parameter space of all possible (local) perturbations (it's infinitely high and can't be jumped over). The phase does not need to have any symmetry. This is much stronger condition. The two concepts are closely related mathematically as well. It turns out that topologically ordered states are much more exotic than SPT states. (E.g. they have "anyonic" excitations with neither bosonic nor fermionic exchange statistics, while SPT's do not. At least, not in the bulk - things get a little subtle at the boundary.) But if you mathematically "gauge" the symmetry that protects the SPT, then you get a theory that is morally very similar to a topologically ordered state. Also, both types of systems can usefully by classified using cohomology theory. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/312266', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/130848/']} | jdg_381397 |
stackexchange | llm_judgeable_groundtruth_similarity | 404116 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
On an Ubuntu AWS instance I want to ssh in as "thufir" with sudo privileges. Create user thufir with sudo adduser thufir and then adduser thufir sudo from the "ubuntu" user? Simply add my public key to ~/.ssh/authorized_keys and then I'll be able to ssh as "thufir" to the remote instance? Can I use my regular public key to login as "ubuntu" on AWS, or would that require the AWS generated key? I'd like to be able to ssh as "ubuntu" using my own key -- is that possible? There's no password with the "ubuntu" user, strictly key login.
Now provide the response and nothing else.
| The ssh keys are not personalized, so you can create the key under your user and then just paste your public key into the target user's authorized_keys on the remote server. Thus, if you have a key generated on your local workstation under "thufir" and want to log on to the remote server as "ubuntu", you need to copy the contents of your .ssh/id_rsa.pub into .ssh/authorized_keys of user ubuntu on the remote server and use a command like

    ssh ubuntu@remotehost

If you want to connect as thufir to the remote server, then, yes, on the remote server you need to create user thufir, add it to sudoers, and then put your public key into the .ssh/authorized_keys of the new user. You will then be able to connect through

    ssh thufir@remotehost

or, assuming you are logged on as thufir on your local box, simply through

    ssh remotehost

| {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/404116', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/17056/']} | jdg_381398 |
stackexchange | llm_judgeable_groundtruth_similarity | 530600 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem is simple. Given $$A^2=\begin{bmatrix}13 & 9 &-9 \\ 0 & 4 & 0 \\ 12 & 12 & -8 \end{bmatrix}$$How find $A$? I think a method using eigenvalues and I find them.But I can't find an actual $A$.Is it right to use eigenvalues?
Now provide the response and nothing else.
| I bet that when you say eigenvalues, you mean that you might diagonalize the matrix $A^2$ by expressing it as $A^2 = VDV^{-1}$ for some matrix $V$ and diagonal matrix $D$, where the entries in $D$ are precisely the eigenvalues of $A^2$. Once you've done this, it is very easy to find a square root of $D$, since it is a diagonal matrix. For example, a square root of the matrix $\begin{pmatrix} 4 & 0 \\ 0 & 9 \end{pmatrix}$ might be $\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. Does this trail of breadcrumbs lead you to the answer? | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/530600', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/94763/']} | jdg_381399 |
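Addendum: following that breadcrumb trail for the given matrix, its eigenvalues turn out to be 1, 4, 4, so one square root has eigenvalues 1, 2, 2. The sketch below (the eigenvector matrix V and its inverse were worked out by hand for this particular matrix) builds that square root and verifies it in plain Python:

```python
# A^2 = V D V^-1 with D = diag(1, 4, 4), so one square root is
# A = V sqrt(D) V^-1 with sqrt(D) = diag(1, 2, 2).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

V     = [[3, 1, 0], [0, 0, 1], [4, 1, 1]]   # columns = eigenvectors
Vinv  = [[-1, -1, 1], [4, 3, -3], [0, 1, 0]]
sqrtD = [[1, 0, 0], [0, 2, 0], [0, 0, 2]]

A = matmul(matmul(V, sqrtD), Vinv)
print(A)             # [[5, 3, -3], [0, 2, 0], [4, 4, -2]]
print(matmul(A, A))  # [[13, 9, -9], [0, 4, 0], [12, 12, -8]] -- the given A^2
```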
stackexchange | llm_judgeable_groundtruth_similarity | 44031239 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried SecureRandom.random_number(9**6) but it sometimes returns 5 and sometimes 6 numbers. I'd want it to be a length of 6 consistently. I would also prefer it in the format like SecureRandom.random_number(9**6) without using syntax like 6.times.map so that it's easier to be stubbed in my controller test.
Now provide the response and nothing else.
| You can do it with math: (SecureRandom.random_number(9e5) + 1e5).to_i Then verify: 100000.times.map do (SecureRandom.random_number(9e5) + 1e5).to_iend.map { |v| v.to_s.length }.uniq# => [6] This produces values in the range 100000..999999: 10000000.times.map do (SecureRandom.random_number(9e5) + 1e5).to_iend.minmax# => [100000, 999999] If you need this in a more concise format, just roll it into a method: def six_digit_rand (SecureRandom.random_number(9e5) + 1e5).to_iend | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44031239', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5158845/']} | jdg_381400 |
stackexchange | llm_judgeable_groundtruth_similarity | 7175363 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to restrict input values like this <simpleType name="SomeCode"> <restriction base="string"> <enumeration value="036222B"/> <enumeration value="036111C"/> </restriction></simpleType> But this does not generate an Enum. I suspect it is because the values start with numbers and this is not allowed for Enum values. Is there any solution or workaround?
Now provide the response and nothing else.
| Here is my answer to a similar question that may help (see issue 2): Enums don't match schema: problem with jaxb or xsd? There are a couple of enumeration values that are causing this issue. These issues can be overcome through the use of a JAXB external binding file (see below). Enum Issue #1 - Empty String Some of your enum values are empty string (""), which is causing a String rather than an enum property to be generated: <xs:enumeration value=""> <xs:annotation> <xs:documentation>Blank</xs:documentation> </xs:annotation></xs:enumeration> Enum Issue #2 - Numeric String Some of the enum values are numbers which is causing a String rather than an enum property to be generated: <xs:enumeration value="6"> <xs:annotation> <xs:documentation>6th grade</xs:documentation> </xs:annotation></xs:enumeration> Bindings File (bindings.xml) The following bindings file can be used to address the issues with the educationLevelType , the concepts here can be applied to all the problematic types: <jxb:bindings xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:jxb="http://java.sun.com/xml/ns/jaxb" version="2.1"> <jxb:bindings schemaLocation="http://www.acf.hhs.gov/programs/cb/systems/nytd/nytd_data_file_format.xsd"> <jxb:bindings node="//xs:simpleType[@name='educationLevelType']/xs:restriction/xs:enumeration[@value='6']"> <jxb:typesafeEnumMember name="SIX"/> </jxb:bindings> <jxb:bindings node="//xs:simpleType[@name='educationLevelType']/xs:restriction/xs:enumeration[@value='7']"> <jxb:typesafeEnumMember name="SEVEN"/> </jxb:bindings> <jxb:bindings node="//xs:simpleType[@name='educationLevelType']/xs:restriction/xs:enumeration[@value='8']"> <jxb:typesafeEnumMember name="EIGHT"/> </jxb:bindings> <jxb:bindings node="//xs:simpleType[@name='educationLevelType']/xs:restriction/xs:enumeration[@value='9']"> <jxb:typesafeEnumMember name="NINE"/> </jxb:bindings> <jxb:bindings node="//xs:simpleType[@name='educationLevelType']/xs:restriction/xs:enumeration[@value='10']"> 
<jxb:typesafeEnumMember name="TEN"/> </jxb:bindings> <jxb:bindings node="//xs:simpleType[@name='educationLevelType']/xs:restriction/xs:enumeration[@value='11']"> <jxb:typesafeEnumMember name="ELEVEN"/> </jxb:bindings> <jxb:bindings node="//xs:simpleType[@name='educationLevelType']/xs:restriction/xs:enumeration[@value='12']"> <jxb:typesafeEnumMember name="TWELVE"/> </jxb:bindings> <jxb:bindings node="//xs:simpleType[@name='educationLevelType']/xs:restriction/xs:enumeration[@value='']"> <jxb:typesafeEnumMember name="BLANK"/> </jxb:bindings> </jxb:bindings></jxb:bindings> The XJC call can be made as follows (the -nv flag is described below): xjc -nv -b bindings.xml -d out http://www.acf.hhs.gov/programs/cb/systems/nytd/nytd_data_file_format.xsd This will cause the following Enum to be generated: package gov.hhs.acf.nytd;import javax.xml.bind.annotation.XmlEnum;import javax.xml.bind.annotation.XmlEnumValue;import javax.xml.bind.annotation.XmlType;@XmlType(name = "educationLevelType")@XmlEnumpublic enum EducationLevelType { @XmlEnumValue("under 6") UNDER_6("under 6"), @XmlEnumValue("6") SIX("6"), @XmlEnumValue("7") SEVEN("7"), @XmlEnumValue("8") EIGHT("8"), @XmlEnumValue("9") NINE("9"), @XmlEnumValue("10") TEN("10"), @XmlEnumValue("11") ELEVEN("11"), @XmlEnumValue("12") TWELVE("12"), @XmlEnumValue("post secondary") POST_SECONDARY("post secondary"), @XmlEnumValue("college") COLLEGE("college"), @XmlEnumValue("") BLANK(""); private final String value; EducationLevelType(String v) { value = v; } public String value() { return value; } public static EducationLevelType fromValue(String v) { for (EducationLevelType c: EducationLevelType.values()) { if (c.value.equals(v)) { return c; } } throw new IllegalArgumentException(v); }} maxOccurs Issue For the maxOccurs issue, the following command line with the no verify (-nv) flag can be used to parse the XML schema: xjc -nv -d out http://www.acf.hhs.gov/programs/cb/systems/nytd/nytd_data_file_format.xsd This will get you past the 
following error without having to modify the XML schema: parsing a schema... [ERROR] Current configuration of the parser doesn't allow a maxOccurs attribute value to be set greater than the value 5,000. line 41 of http://www.acf.hhs.gov/programs/cb/systems/nytd/nytd_data_file_format.xsd Failed to parse a schema. For More Information http://blog.bdoughan.com/2011/08/jaxb-and-enums.html | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7175363', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/53278/']} | jdg_381401 |
stackexchange | llm_judgeable_groundtruth_similarity | 133783 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to say that RegionMember is very slow:

poly = Polygon[{{1, 0}, {0, 1}, {0, 0}}];
SeedRandom[1234];
pts = RandomReal[{0, 1}, {10^4, 2}];
RegionMember[poly, #] & /@ pts // Tr // AbsoluteTiming

{18.1054, 5073 False + 4927 True}

How can I speed it up?
Now provide the response and nothing else.
| You can also reduce overhead by passing all points at once or by creating a RegionMemberFunction :

RegionMember[poly, pts] // Tr // AbsoluteTiming

{0.00396, 5073 False + 4927 True}

rm = RegionMember[poly];
rm[pts] // Tr // AbsoluteTiming

{0.002928, 5073 False + 4927 True}

rm = RegionMember[poly];
rm /@ pts // Tr // AbsoluteTiming

{0.06013, 5073 False + 4927 True}

| {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/133783', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/21532/']} | jdg_381402
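The speedup above comes from testing all points in one batched call instead of one call per point. As a cross-check of what that membership test computes, here is a hypothetical pure-Python sketch (my own illustration, not from the answer): the triangle with vertices (1,0), (0,1), (0,0) is exactly the region x ≥ 0, y ≥ 0, x + y ≤ 1, which covers half of the unit square.

```python
import random

def in_triangle(p):
    """Membership test for the triangle with vertices (1,0), (0,1), (0,0)."""
    x, y = p
    return x >= 0 and y >= 0 and x + y <= 1

def count_members(points):
    """Batch-test a list of points, mirroring RegionMember[poly, pts]."""
    return sum(in_triangle(p) for p in points)

random.seed(1234)
pts = [(random.random(), random.random()) for _ in range(10_000)]
inside = count_members(pts)
# The triangle covers half of the unit square, so roughly half of the
# uniformly drawn points should land inside it.
print(inside / len(pts))
```

This only illustrates the geometry and the batching idea; the Mathematica timings above come from avoiding per-point evaluator overhead, which the batched RegionMember call (and the precomputed RegionMemberFunction) eliminate.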
stackexchange | llm_judgeable_groundtruth_similarity | 48721813 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to get the dotnet new console example project (for VS Code) to work in Ubuntu 17.10. I can get the default program to run:

using System;

namespace dotnet_console
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello world!");
        }
    }
}

But when I change it to read input as well, it gets really wonky...

using System;

namespace dotnet_console
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.Write("Name: ");                 // 1
            var name = Console.ReadLine();           // 2
            Console.WriteLine("Hello {0}!", name);   // 3
        }
    }
}

The program builds, but it won't print Name: . However if I put breakpoints on lines 1, 2 & 3, I can see that the program runs through ALL of them, but nothing prints. That is, until I stop the debugging. Then it prints:

Name: The program '[16322] dotnet-console.dll' has exited with code 0 (0x0).

What is happening here? I'm guessing it's a vscode thing, because it works as expected when run from the terminal using dotnet run .
Now provide the response and nothing else.
| The Documentation states the following: By default, processes are launched with their console output (stdout/stderr) going to the VS Code Debugger Console. This is useful for executables that take their input from the network, files, etc. But this does NOT work for applications that want to read from the console (ex: Console.ReadLine). For these applications, use a setting such as the following I found a solution for the problem here . And the following Quote from the linked Documentation also states that changing the console property from the launch.json to either " externalTerminal " or " integratedTerminal "is going to help. When this is set to externalTerminal the target process will run in a separate terminal. When this is set to integratedTerminal the target process will run inside VS Code's integrated terminal. Click the 'Terminal' tab in the tab group beneath the editor to interact with your application. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48721813', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6224823/']} | jdg_381403 |
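For reference, the setting the answer describes lives in .vscode/launch.json. A minimal sketch of the relevant fragment is below — the "program" path is a placeholder for this particular project layout, so adjust it to your own build output:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
            "program": "${workspaceFolder}/bin/Debug/netcoreapp2.0/dotnet-console.dll",
            // Run in VS Code's integrated terminal so Console.ReadLine works:
            "console": "integratedTerminal"
        }
    ]
}
```

With "console" set to "integratedTerminal" (or "externalTerminal"), the Name: prompt appears immediately and the program can read your input.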
stackexchange | llm_judgeable_groundtruth_similarity | 18369258 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need some help with Spring AOP. I've got the following code:

@Service
public class UserSecurityService implements UserDetailsService {

    @Autowired
    private UserService userService;
    ....
}

@Service
public class UserService extends CrudService<User, UserRepository> {

    public UserService() {
        super();
    }

    @Autowired
    public UserService(UserRepository repository) {
        super(repository);
        this.repository = repository;
    }
    ....
}

@Repository
interface UserRepository extends JpaRepository<User, String> {
    ...
}

application-context.xml

<import resource="classpath*:spring/application-context-db.xml" />
<import resource="classpath*:spring/application-context-aop.xml" />
<import resource="classpath*:spring/application-context-mail.xml" />
<import resource="application-context-security.xml" />

<context:component-scan base-package="com.xpto">
    <context:exclude-filter type="annotation" expression="org.springframework.stereotype.Controller" />
    <context:exclude-filter type="annotation" expression="org.springframework.stereotype.Repository" />
</context:component-scan>

application-context-aop.xml

<aop:aspectj-autoproxy />
<aop:config>
    <aop:aspect id="serviceLoggingAspect" ref="serviceLoggingAspectBean">
        <aop:pointcut id="servicePointcut" expression="@within(org.springframework.stereotype.Service)" />
        <aop:before method="before" pointcut-ref="servicePointcut" />
        <aop:after-returning method="afterReturning" pointcut-ref="servicePointcut" returning="result" />
        <aop:after-throwing method="afterThrowing" pointcut-ref="servicePointcut" throwing="exception" />
    </aop:aspect>
</aop:config>

When I try to load my application on Tomcat, I get the following exception:

Caused by: java.lang.IllegalArgumentException: Can not set com.xpto.user.service.UserService field com.xpto.user.security.service.UserSecurityService.userService to com.sun.proxy.$Proxy57
at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:164)
at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:168)
at sun.reflect.UnsafeObjectFieldAccessorImpl.set(UnsafeObjectFieldAccessorImpl.java:81)
at java.lang.reflect.Field.set(Field.java:680)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:510)
... 35 more

I have the same logging configuration at the web layer and it works fine, but when I put AOP at the service layer I get this exception. I'm using Spring MVC, and in web.xml I configured it to load two different contexts: one loads only @Controller and the other loads @Repository and @Service.
Now provide the response and nothing else.
| You are not injecting an interface so you need to use CGLIB proxies, the spring reference manual states: Spring AOP defaults to using standard J2SE dynamic proxies for AOP proxies. This enables any interface (or set of interfaces) to be proxied. Spring AOP can also use CGLIB proxies. This is necessary to proxy classes, rather than interfaces. CGLIB is used by default if a business object does not implement an interface. As it is good practice to program to interfaces rather than classes, business classes normally will implement one or more business interfaces. Spring has decided to use a J2SE proxy ( com.sun.proxy.$Proxy57 ) probably because CrudService implements an interface. To force the use of CGLIB you can tweak your XML: <aop:aspectj-autoproxy proxy-target-class="true"/> | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18369258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2705446/']} | jdg_381404 |
stackexchange | llm_judgeable_groundtruth_similarity | 542510 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've heard that "everyone" is using parameterized SQL queries to protect against SQL injection attacks without having to vailidate every piece of user input. How do you do this? Do you get this automatically when using stored procedures? So my understanding this is non-parameterized: cmdText = String.Format("SELECT foo FROM bar WHERE baz = '{0}'", fuz) Would this be parameterized? cmdText = String.Format("EXEC foo_from_baz '{0}'", fuz) Or do I need to do somethng more extensive like this in order to protect myself from SQL injection? With command .Parameters.Count = 1 .Parameters.Item(0).ParameterName = "@baz" .Parameters.Item(0).Value = fuzEnd With Are there other advantages to using parameterized queries besides the security considerations? Update: This great article was linked in one of the questions references by Grotok. http://www.sommarskog.se/dynamic_sql.html
Now provide the response and nothing else.
| The EXEC example in the question would NOT be parameterized. You need parameterized queries (prepared statements in some circles) to prevent input like this from causing damage:

';DROP TABLE bar;--

Try putting that in your fuz variable (or don't, if you value the bar table). More subtle and damaging queries are possible as well. Here's an example of how you do parameters with Sql Server:

Public Function GetBarFooByBaz(ByVal Baz As String) As String
    Dim sql As String = "SELECT foo FROM bar WHERE baz= @Baz"

    Using cn As New SqlConnection("Your connection string here"), _
          cmd As New SqlCommand(sql, cn)

        cmd.Parameters.Add("@Baz", SqlDbType.VarChar, 50).Value = Baz
        Return cmd.ExecuteScalar().ToString()
    End Using
End Function

Stored procedures are sometimes credited with preventing SQL injection. However, most of the time you still have to call them using query parameters or they don't help. If you use stored procedures exclusively , then you can turn off permissions for SELECT, UPDATE, ALTER, CREATE, DELETE, etc (just about everything but EXEC) for the application user account and get some protection that way.

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/542510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/36737/']} | jdg_381405
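The same pattern holds in any stack that supports bound parameters. Below is a minimal sqlite3 sketch (my own illustration, not from the answer above — the table and column names just mirror its example) showing that a bound parameter neutralizes the ';DROP TABLE bar;-- payload: the driver binds the value instead of splicing it into the SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bar (baz TEXT, foo TEXT)")
conn.execute("INSERT INTO bar VALUES ('hello', 'world')")

payload = "';DROP TABLE bar;--"

# Parameterized query: the payload is bound as data, never interpreted
# as SQL, so it is just an ordinary (non-matching) string value.
rows = conn.execute("SELECT foo FROM bar WHERE baz = ?", (payload,)).fetchall()
print(rows)   # [] - no match, and no statement was injected

# The table survived intact:
count = conn.execute("SELECT COUNT(*) FROM bar").fetchone()[0]
print(count)  # 1
```

Contrast that with string formatting ("... WHERE baz = '%s'" % payload), which is exactly the vulnerable shape the answer warns about.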
stackexchange | llm_judgeable_groundtruth_similarity | 17090517 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have a CPTableView with 2 columns . My CPTableView is filled thanks to an Array Controller which is bind with the CPTableView Columns. I know how to fill, refill the table view, no problem. My problem is that I want to change the value of a cell and of course, this value must be changed in the array controller, too. So I enter my new value by double clicking on the cell, enter it. Ok, but the value isn't changed in the array and it's normal because I didn't tell to the array to do it. But to change the value in the array, I must be able to get the new value that I wrote in the cell to put in the array. I can't do this or I don't know how to do this. I tried the CPTableColumn method : dataCellForRow but it doesn't work, the debugger tells me to use dataViewForRow . Ok, I use it, but when I get the cell value, it always return : "Text Cell" the default title set for the cell in IB. So really I don't know how to do what I want.
Now provide the response and nothing else.
| I'm assuming you mean using a custom attribute to indicate what methods to intercept. You can use policy injection to achieve interception using XML configuration. First, let's define a custom attribute: [AttributeUsage(AttributeTargets.Method)]public class MyInterceptionAttribute : Attribute{} Next we can create an ICallHandler to do some interception work. This implementation will just do a Console.WriteLine before and after the method: public class MyLoggingCallHandler : ICallHandler{ IMethodReturn ICallHandler.Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext) { Console.WriteLine("Invoking " + input.MethodBase.Name); IMethodReturn result = getNext()(input, getNext); Console.WriteLine("Done Invoke"); return result; } int ICallHandler.Order { get; set; }} Next let's assume that we have some interface and an implementation: public interface IMyClass{ void Do(); void DoAgain();}public class MyClass : IMyClass { [MyInterception] public void Do() { Console.WriteLine("Do!"); } public void DoAgain() { Console.WriteLine("Do Again!"); }} Notice that I've applied the custom attribute, MyInterception, to only the Do method but not the DoAgain method. We will intercept all calls to the Do method. 
Next we create the configuration to define a policy, configure the matching rule and register the type along with an interceptor: <?xml version="1.0"?><configuration> <configSections> <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration"/> </configSections> <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> <namespace name="UnityCallHandlerConfig" /> <assembly name="UnityCallHandlerConfig" /> <sectionExtension type="Microsoft.Practices.Unity.InterceptionExtension.Configuration.InterceptionConfigurationExtension, Microsoft.Practices.Unity.Interception.Configuration"/> <container> <extension type="Interception"/> <interception> <policy name="Policy"> <matchingRule name="Match" type="Microsoft.Practices.Unity.InterceptionExtension.CustomAttributeMatchingRule, Microsoft.Practices.Unity.Interception"> <constructor> <param name="attributeType" value="UnityCallHandlerConfig.MyInterceptionAttribute, UnityCallHandlerConfig" typeConverter="AssemblyQualifiedTypeNameConverter" /> <param name="inherited"> <value value="false"/> </param> </constructor> </matchingRule> <callHandler name="MyLogging" type="MyLoggingCallHandler"> <lifetime type="singleton"/> </callHandler> </policy> </interception> <register type="IMyClass" mapTo="MyClass"> <interceptor type="InterfaceInterceptor"/> <interceptionBehavior type="PolicyInjectionBehavior"/> </register> </container> </unity> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5"/> </startup></configuration> We also need a type converter to convert the string representation of the custom attribute to the proper type: public class AssemblyQualifiedTypeNameConverter : ConfigurationConverterBase{ public override object ConvertTo(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value, Type destinationType) { if (value != null) { Type typeValue = value as Type; if (typeValue == null) { throw new 
ArgumentException("Cannot convert type", typeof(Type).Name); } if (typeValue != null) return (typeValue).AssemblyQualifiedName; } return null; } public override object ConvertFrom(ITypeDescriptorContext context, System.Globalization.CultureInfo culture, object value) { string stringValue = (string)value; if (!string.IsNullOrEmpty(stringValue)) { Type result = Type.GetType(stringValue, false); if (result == null) { throw new ArgumentException("Invalid type", "value"); } return result; } return null; }} Once we have all that set up we can create a container and load the configuration: var container = new UnityContainer().LoadConfiguration();var myClass = container.Resolve<IMyClass>();myClass.Do();myClass.DoAgain(); The output will be: Invoking DoDo!Done InvokeDo Again! showing that the first method is intercepted while the second is not. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17090517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2404908/']} | jdg_381406 |
stackexchange | llm_judgeable_groundtruth_similarity | 8261590 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to write a list to a text file, preserving the names. This is similar to R: Print list to a text file but with names, which I want to print out as well, at the start of each line:

> print(head(mylist,2))
$first
[1] 234984  10354  41175 932711 426928

$second
[1] 1693237   13462

mylist.txt:

first 234984 10354 41175 932711 426928
second 1693237 13462

Any ideas? Many thanks.
Now provide the response and nothing else.
| The cat function will print to a device (console by default) and not add any of the usual annotations, but it cannot accept a list as an argument, so everything needs to be an atomic vector. The deparse( substitute()) gambit is the way to recover names of lists that were passed to a function. Just using names(x) inside the function fails to recover the name of the original argument.

mylist <- list(first  = c(234984, 10354, 41175, 932711, 426928),
               second = c(1693237, 13462))

fnlist <- function(x){
    z <- deparse(substitute(x))
    cat(z, "\n")
    nams <- names(x)
    for (i in seq_along(x)) cat(nams[i], x[[i]], "\n")
}

fnlist(mylist)
mylist 
 first 234984 10354 41175 932711 426928 
 second 1693237 13462 

This version would output a file (and you could substitute "\t" if you wanted tabs between names and values):

fnlist <- function(x, fil){
    z <- deparse(substitute(x))
    cat(z, "\n", file = fil)
    nams <- names(x)
    for (i in seq_along(x)){
        cat(nams[i], "\t", x[[i]], "\n", file = fil, append = TRUE)
    }
}

fnlist(mylist, "test")

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8261590', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/979291/']} | jdg_381407
stackexchange | llm_judgeable_groundtruth_similarity | 1886478 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to follow the proof for the irrationality of $\sqrt[3]{6}$ to form a similar proof for $\sqrt[3]{16}$ (proof by contradiction). Going from $16=(a/b)^3$ (with $a,b \in \mathbb{Z}$ and $b\neq 0$ and GCD$(a,b)=1$) I can show that $2|a^3$ and then consequently $2|a \Rightarrow a=2c$. But I'm unable to show $2|b$ following the step for $16b^3 = (2c)^3$ which leads to $2b^3 = c^3$ as this only shows that $c$ is even. I'm probably either making a mistake or overlooking a key fact. Any suggestions?
Now provide the response and nothing else.
| The proof you want is a variant of the Pythagorean proof of the irrationality of $\sqrt 2$. Proof by contradiction: suppose $\frac pq$ is a fraction in lowest terms such that $$\left(\frac pq\right)^3 = 16 \implies p^3 = 16 q^3.$$ If this is true then $p$ must be even, in which case we can say $p = 2r$: $$8 r^3 = 16 q^3 \implies r^3 = 2 q^3.$$ Then $r$ is also even, $r = 2s$: $$8 s^3 = 2 q^3 \implies 4 s^3 = q^3.$$ This would imply that $q$ is even, but that would contradict our initial assumption that $\frac pq$ is in lowest terms. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1886478', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/275580/']} | jdg_381408
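A compact alternative to the descent argument above (my own addition, not part of the original answer) packages the same parity reasoning into a single step by counting powers of two on each side:

```latex
% Suppose p^3 = 16 q^3 with p, q nonzero integers.
% Write v(n) for the exponent of 2 in the prime factorization of n.
\[
p^3 = 16\,q^3
\;\Longrightarrow\;
3\,v(p) = 4 + 3\,v(q)
\;\Longrightarrow\;
3 \mid 4,
\]
% which is false, so no such fraction p/q exists and
% \sqrt[3]{16} is irrational.
```

The left-hand exponent of 2 is a multiple of 3 while the right-hand exponent is 4 plus a multiple of 3, and those can never be equal.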
stackexchange | llm_judgeable_groundtruth_similarity | 1246116 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am sure that I am just not understanding something fundamental about events and/or delegates in C#, but why can't I do the Boolean tests in this code sample:

public class UseSomeEventBase {
    public delegate void SomeEventHandler(object sender, EventArgs e);
    public event SomeEventHandler SomeEvent;

    protected void OnSomeEvent(EventArgs e) {
        // CANONICAL WAY TO TEST EVENT. OF COURSE, THIS WORKS.
        if (SomeEvent != null) SomeEvent(this, e);
    }
}

public class UseSomeEvent : UseSomeEventBase {
    public bool IsSomeEventHandlerNull() {
        // "LEFT HAND SIDE" COMPILER ERROR
        return SomeEvent == null;
    }
}

class Program {
    static void Main(string[] args) {
        var useSomeEvent = new UseSomeEvent();
        useSomeEvent.SomeEvent += new UseSomeEventBase.SomeEventHandler(FuncToHandle);
        // "LEFT HAND SIDE" COMPILER ERROR
        if (useSomeEvent.SomeEvent == null) {
        }
        var useSomeEventBase = new UseSomeEventBase();
        useSomeEventBase.SomeEvent += new UseSomeEventBase.SomeEventHandler(FuncToHandle);
        // "LEFT HAND SIDE" COMPILER ERROR
        if (useSomeEventBase.SomeEvent == null) {
        }
    }

    static void FuncToHandle(object sender, EventArgs e) { }
}
Now provide the response and nothing else.
| An event is really just an "add" operation and a "remove" operation. You can't get the value, you can't set the value, you can't call it - you can just subscribe a handler for the event ( add ) or unsubscribe one ( remove ). This is fine - it's encapsulation, plain and simple. It's up to the publisher to implement add/remove appropriately, but unless the publisher chooses to make the details available, subscribers can't modify or access the implementation-specific parts.

Field-like events in C# (where you don't specify the add/remove bits) hide this - they create a variable of a delegate type and an event. The event's add/remove implementations just use the variable to keep track of the subscribers. Inside the class you refer to the variable (so you can get the currently subscribed delegates, execute them etc) and outside the class you refer to the event itself (so only have add/remove abilities). The alternative to field-like events is where you explicitly implement the add/remove yourself, e.g.

private EventHandler clickHandler; // Normal private field

public event EventHandler Click
{
    add
    {
        Console.WriteLine("New subscriber");
        clickHandler += value;
    }
    remove
    {
        Console.WriteLine("Lost a subscriber");
        clickHandler -= value;
    }
}

See my article on events for more information. Of course the event publisher can also make more information available - you could write a property like ClickHandlers to return the current multi-cast delegate, or HasClickHandlers to return whether there are any or not. That's not part of the core event model though.

| {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1246116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34315/']} | jdg_381409
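The add/remove encapsulation described above isn't specific to C#. Here is a hypothetical Python sketch of the same idea (all names are my own invention): outsiders get only subscribe/unsubscribe, the handler list stays private, and only the publisher is meant to fire or inspect it — which is exactly why the external null test in the question can't compile in C#.

```python
class Event:
    """Publisher-side event: exposes only add/remove to the outside."""

    def __init__(self):
        self._handlers = []   # private, like the hidden delegate field

    # The public surface: "add" and "remove", nothing else.
    def subscribe(self, handler):
        self._handlers.append(handler)

    def unsubscribe(self, handler):
        self._handlers.remove(handler)

    # Only the publisher should call this. Python can't enforce that;
    # C# enforces it at compile time, which is the point of the answer.
    def _fire(self, sender, args):
        for handler in list(self._handlers):
            handler(sender, args)


class Button:
    def __init__(self):
        self.click = Event()

    def press(self):
        self.click._fire(self, None)


calls = []
button = Button()
button.click.subscribe(lambda sender, args: calls.append("clicked"))
button.press()
print(calls)  # ['clicked']
```

A HasClickHandlers-style property, as the answer suggests, would simply be a method on Event returning bool(self._handlers) — exposed deliberately by the publisher rather than leaked by default.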
stackexchange | llm_judgeable_groundtruth_similarity | 103274 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm playing around with UCI Bank Marketing Dataset . So, there is a categorical variable named default which tells us if client "has credit in default". That variable has three options: no , yes and unknown . Look at the distribution of it:

no         32588
unknown     8597
yes            3

As you can see, we meet yes in only 3 cases and my question is how to deal with such tiny categories in general? Should I just exclude that from the dataset every time I come across it? Or maybe I should make something like oversampling but merely for those cases? I'm asking because I'm concerned about its impact on a classification task. As far as I understand, if all of these yes cases fall into the validation or test parts of the dataset during partitioning, it will distort a metric's result.
Now provide the response and nothing else.
| It totally depends on the target task and the importance of the "tiny class" for this task:

- In some tasks with multiple classes where there is no particular emphasis on any specific class, "tiny classes" can simply be discarded.
- But in the case of this dataset, the natural target task is to detect default cases, and there's little point in an application which classifies customers between "no" or "unknown" default cases. This means that despite its small size, the "yes" class is very important for the most relevant application of this dataset.

There's no obvious answer to the question of how to deal with a class like this:

- Oversampling it is an option, but this would almost certainly introduce some bias in the model.
- My preferred option here would be to consider the "unknown" class as unlabelled data and try to apply some kind of semi-supervised learning.
- In my opinion this kind of imbalance is close to anomaly detection territory. Normally anomaly detection is unsupervised, but maybe there would be something to investigate here.

| {} | {'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/103274', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/122856/']} | jdg_381410
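The partitioning worry raised in the question — all three "yes" rows landing in the test split by chance — can be addressed mechanically with a stratified split. A minimal pure-Python sketch (my own illustration, not from the answer; real projects would typically use a library's stratified splitter instead):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.3, seed=0):
    """Split row indices so every class appears in the training part.

    Classes too small to yield a test share (like a 3-row class with
    test_frac=0.3) are kept entirely in training rather than lost.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)

    train, test = [], []
    for y, idx in by_class.items():
        rng.shuffle(idx)
        n_test = int(len(idx) * test_frac)
        if len(idx) - n_test < 1:   # always keep at least one in train
            n_test = len(idx) - 1
        test.extend(idx[:n_test])
        train.extend(idx[n_test:])
    return train, test

# Class sizes from the question's distribution.
labels = ["no"] * 32588 + ["unknown"] * 8597 + ["yes"] * 3
train, test = stratified_split(labels)
print(sum(labels[i] == "yes" for i in train))  # 3 - none lost to the test set
```

This guards the evaluation metric against the all-"yes"-in-test scenario, but it does not change the modelling question itself — the options above (discard, oversample, semi-supervised, anomaly detection) are still the real decision.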
stackexchange | llm_judgeable_groundtruth_similarity | 2048509 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that the color bf command sets the colors of the whole command line window, but I wanted to print one single line in a different color.
Now provide the response and nothing else.
| I wanted to to print one single line in a different color. Use ANSI Escape Sequences. Windows before 10 - no native support for ANSI colors on the console For Windows version below 10, the Windows command console doesn't support output coloring by default. You could install either Cmder , ConEmu , ANSICON or Mintty (used by default in GitBash and Cygwin) to add coloring support to your Windows command console. Windows 10 - Command Line Colors Starting from Windows 10 the Windows console support ANSI Escape Sequences and some colors by default. The feature shipped with the Threshold 2 Update in Nov 2015. MSDN Documentation Update (05-2019): The ColorTool enables you to change the color scheme of the console. It's part of the Microsoft Terminal project. Demo Batch Command The win10colors.cmd was written by Michele Locati : The text below is stripped of special characters and will not work. You must copy it from here . @echo offclsecho [101;93m STYLES [0mecho ^<ESC^>[0m [0mReset[0mecho ^<ESC^>[1m [1mBold[0mecho ^<ESC^>[4m [4mUnderline[0mecho ^<ESC^>[7m [7mInverse[0mecho.echo [101;93m NORMAL FOREGROUND COLORS [0mecho ^<ESC^>[30m [30mBlack[0m (black)echo ^<ESC^>[31m [31mRed[0mecho ^<ESC^>[32m [32mGreen[0mecho ^<ESC^>[33m [33mYellow[0mecho ^<ESC^>[34m [34mBlue[0mecho ^<ESC^>[35m [35mMagenta[0mecho ^<ESC^>[36m [36mCyan[0mecho ^<ESC^>[37m [37mWhite[0mecho.echo [101;93m NORMAL BACKGROUND COLORS [0mecho ^<ESC^>[40m [40mBlack[0mecho ^<ESC^>[41m [41mRed[0mecho ^<ESC^>[42m [42mGreen[0mecho ^<ESC^>[43m [43mYellow[0mecho ^<ESC^>[44m [44mBlue[0mecho ^<ESC^>[45m [45mMagenta[0mecho ^<ESC^>[46m [46mCyan[0mecho ^<ESC^>[47m [47mWhite[0m (white)echo.echo [101;93m STRONG FOREGROUND COLORS [0mecho ^<ESC^>[90m [90mWhite[0mecho ^<ESC^>[91m [91mRed[0mecho ^<ESC^>[92m [92mGreen[0mecho ^<ESC^>[93m [93mYellow[0mecho ^<ESC^>[94m [94mBlue[0mecho ^<ESC^>[95m [95mMagenta[0mecho ^<ESC^>[96m [96mCyan[0mecho ^<ESC^>[97m [97mWhite[0mecho.echo [101;93m STRONG BACKGROUND COLORS [0mecho ^<ESC^>[100m 
[100mBlack[0mecho ^<ESC^>[101m [101mRed[0mecho ^<ESC^>[102m [102mGreen[0mecho ^<ESC^>[103m [103mYellow[0mecho ^<ESC^>[104m [104mBlue[0mecho ^<ESC^>[105m [105mMagenta[0mecho ^<ESC^>[106m [106mCyan[0mecho ^<ESC^>[107m [107mWhite[0mecho.echo [101;93m COMBINATIONS [0mecho ^<ESC^>[31m [31mred foreground color[0mecho ^<ESC^>[7m [7minverse foreground ^<-^> background[0mecho ^<ESC^>[7;31m [7;31minverse red foreground color[0mecho ^<ESC^>[7m and nested ^<ESC^>[31m [7mbefore [31mnested[0mecho ^<ESC^>[31m and nested ^<ESC^>[7m [31mbefore [7mnested[0m | {} | {'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/2048509', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/157277/']} | jdg_381411 |
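The escape sequences in the demo batch file work from any language, not just from echo in cmd. A minimal Python sketch of the same SGR codes (my own addition — note that on Windows 10 the console must have ANSI/VT processing enabled, as the answer explains, e.g. via the Threshold 2 update behavior or a helper like the colorama package):

```python
ESC = "\x1b"  # the literal escape character the pasted batch text loses

def colored(text, code):
    """Wrap text in an ANSI SGR code, then reset styling with ESC[0m."""
    return f"{ESC}[{code}m{text}{ESC}[0m"

line = colored("Name:", "31")            # 31  = red foreground
banner = colored(" STYLES ", "101;93")   # 101 = red bg, 93 = yellow fg
print(line)
print(banner)
```

The numeric codes are the same ones tabulated in the batch demo above (30-37/90-97 foreground, 40-47/100-107 background), so the batch file doubles as a reference chart.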
stackexchange | llm_judgeable_groundtruth_similarity | 19738651 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
"when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up the change"

This is mentioned at http://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html

Can someone please provide an example of this? This first gave me the impression that the thread that reads a volatile variable will synchronize with the writer thread and wait until the write is done. But that clearly is not the case. An example would help a lot and be much appreciated.

Thanks, Mustafa
Now provide the response and nothing else.
| Let's say you have the following class:

public class Shared {
    public int a;
    public int b;
    public volatile int c;
}

Now let's say that thread A has a reference to an instance of this class and does

shared.a = 1;
shared.b = 2;
shared.c = 3;

And let's say that thread B has a reference to the same instance and does

display(c);
display(b);
display(a);

Then, if the value displayed for c is 3 (i.e. if the write of thread A has happened before the read of thread B), then it's guaranteed by the Java memory model that 2 and 1 will also be displayed for b and a respectively, because all the actions of thread A that have been made prior to the write to the volatile c are guaranteed to be visible by a thread that has read the new value of c.

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19738651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/303517/']} | jdg_381412
stackexchange | llm_judgeable_groundtruth_similarity | 25773047 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When building responsive websites I sometimes use background images to render an appropriate image for the appropriate screen size, e.g.:

#image { background-image: url(largeimage.jpg); }

@media only screen and (max-width: 320px) {
    #image { background-image: url(smallimage.jpg); }
}

To let screen readers know what kind of element we are dealing with, I add role="img" and an aria-label. Here is my question: I've always learned that it's better for SEO to add an image like a company logo in an actual image element, e.g.:

<img src="logo-companyname.png">

The reasoning is that the logo will show up when Google image searching on the company name (presuming the website is ranked well enough). Will Google still "scrape" the logo when implemented as a div? E.g.:

<div id="logo-company" role="img" aria-label="company name"></div>

Or do I still need to add an image somewhere to get the desired result? Does Google do anything at all with the screen reader tags for that matter?
Now provide the response and nothing else.
| Use an img tag. It's better for a number of reasons. When to use <img />: when your image needs to be indexed by a search engine; if it has a relation to content, not to design; if your image is not too small (not iconic images); images where you can add alt and title attributes. When to use CSS background-image: images purely used for design, with no relation to content; small images which we can play with in CSS3; repeating images (in a blog, the author icon and date icon will be repeated for each article, etc.). Based on the list above and some observations, we have these reasons to use an img tag: A logo image has semantic meaning and has a relation to the content, so this is just the right thing to do from a semantic point of view. Google does not automatically index background images, otherwise the image search results would be filled with image sprites. Google hasn't officially made a statement about this, but it will most likely add more value to the div with an aria label, although an image will most likely still have more value. (Bing supposedly doesn't do anything with this, though.) So: it's most likely best to use an img tag. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25773047', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1077230/']} | jdg_381413 |
stackexchange | llm_judgeable_groundtruth_similarity | 4958379 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any difference between null and System.DBNull.Value? If yes, what is it? I noticed this behavior now - while (rdr.Read()){ if (rdr["Id"] != null) //if (rdr["Id"] != System.DBNull.Value) { int x = Convert.ToInt32(rdr["Id"]); }} While I retrieve data from the database using a SQL data reader, even though there is no value returned, if (rdr["Id"] != null) returned true and eventually threw an exception for casting a null as an integer. But if I use if (rdr["Id"] != System.DBNull.Value), it returns false. What's the difference between null and System.DBNull.Value?
Now provide the response and nothing else.
| Well, null is not an instance of any type. Rather, it is an invalid reference. However, System.DbNull.Value is a valid reference to an instance of System.DbNull (System.DbNull is a singleton and System.DbNull.Value gives you a reference to the single instance of that class) that represents nonexistent* values in the database. *We would normally say null, but I don't want to confound the issue. So, there's a big conceptual difference between the two. The keyword null represents an invalid reference. The class System.DbNull represents a nonexistent value in a database field. In general, we should try to avoid using the same thing (in this case null) to represent two very different concepts (in this case an invalid reference versus a nonexistent value in a database field). Keep in mind, this is why a lot of people advocate using the null object pattern in general, which is exactly what System.DbNull is an example of. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4958379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/227809/']} | jdg_381414 |
stackexchange | llm_judgeable_groundtruth_similarity | 64159 |
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When I use hot stuff like a hair straightener on my hair, my hair begins to smell bad, which is very different from the smell produced by burning other things. So what's the gas produced that is responsible for this smell?
Now provide the response and nothing else.
| Hair is largely (~90%) composed of a protein called keratin , which originates in the hair follicle. Now, keratin is composed of a variety of amino acids, including the sulfur containing amino acid, cysteine . All these amino acids are joined to each other by chemical bonds called peptide bonds to form these long chains that we call polypeptide chains . In the case of human hair, the polypeptide that we're talking about is keratin . The polypeptide chains are intertwined around each other in a helix shape. The average composition of normal hair is 45.2 % carbon, 27.9% oxygen, 6.6% hydrogen, 15.1% nitrogen and 5.2% sulfur. (I got that diagram off of Google Images) Now, there are a whole bunch of chemical interactions that maintain the secondary and tertiary structures of proteins, such as van der Waals forces, hydrophobic interactions, polypeptide linkages, ionic bonds, etc. But there is, however, one additional chemical interaction in proteins that contain the amino acids cysteine and methionine (both of which contain sulfur) called disulfide linkages . You can see that in the diagram above (it's been marked in yellow, which is fortunately, a very intuitive color when you're dealing with sulfur). When you burn hair (or skin or nails... anything that has keratin in it for that matter) these disulfide linkages are broken. The sulfur atoms are now free to chemically combine with other elements present in the protein and air, such as oxygen and hydrogen. The volatile sulfur compounds formed as a result is what's responsible for the fetid odor of burning hair. Quite a few of the "bad smells" we come across everyday are due to some sulfur containing compound or the other. A great example would be the smell of rotten eggs, which can be attributed to a volatile sulfur compound called hydrogen sulfide . 
Yet another example (as @VonBeche points out in the comments) would be that of tert-butylthiol , which is the odorant that is used to impart the characteristic smell of Liquefied Petroleum Gas (LPG). | {} | {'log_upvote_score': 7, 'links': ['https://chemistry.stackexchange.com/questions/64159', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/33507/']} | jdg_381415 |
stackexchange | llm_judgeable_groundtruth_similarity | 3027754 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Hey guys, so I had a quiz a couple of days ago and lost partial marks for the following limit: $$\lim_{x \to \infty} f(x) = \left(\frac{x}{x-1}\right)^x$$ The way I solved it was by applying $\ln$ to get that $x$ down, and after some algebra got the limit $=\dfrac{0}{0}$; I applied L'Hôpital's rule and got $1$; why does my TA say that the answer is $e$?
Now provide the response and nothing else.
| Consider the reciprocal. Then $$\left(\frac{x-1}{x}\right)^x=\left(1-\frac{1}{x}\right)^x\to e^{-1}$$ as $x\to \infty$ by a well-known characterization of the exponential function. In particular the original limit is then $e$ . | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3027754', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/603947/']} | jdg_381416 |
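For completeness, the asker's logarithm route also works — the $1$ obtained from L'Hôpital's rule is the limit of $\ln f(x)$, not of $f(x)$ itself, so one exponentiation step was missing:

```latex
\ln L = \lim_{x \to \infty} x \ln\frac{x}{x-1}
      = \lim_{x \to \infty} \frac{\ln x - \ln(x-1)}{1/x}
      \overset{\text{L'H}}{=} \lim_{x \to \infty}
        \frac{\tfrac{1}{x} - \tfrac{1}{x-1}}{-\tfrac{1}{x^2}}
      = \lim_{x \to \infty} \frac{x^2}{x(x-1)}
      = \lim_{x \to \infty} \frac{x}{x-1} = 1,
\qquad\text{so } L = e^{1} = e.
```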
stackexchange | llm_judgeable_groundtruth_similarity | 54733 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen a bunch of articles talking about how new findings from the LHC seem to disprove (super)string theory and/or supersymmetry, or at least force physicists to reformulate them and change essential predictions. Some examples: Did the Large Hadron Collider Just Debunk Superstring Theory? Is Supersymmetry Dead? String Theory Now on Life Support Supersymmetry Fails Test, Forcing Physics to Seek New Ideas So I'd like to know: has string theory really been hit that hard? Is it losing ground in the scientific community? Do you think it can recover from it? Are there any viable or promising alternative theories? (I've seen Michio Kaku in some clips saying string theory is "the only game in town".) Note: a related question is What if the LHC doesn't see SUSY? , but I'm asking for more direct answers in light of the results found in the last 2 years.
Now provide the response and nothing else.
| The idea which is being challenged, though certainly not disproved yet, is that there are new particles, other than the Higgs boson, that the LHC will be able to detect. It was very widely supposed that supersymmetric partners of some known particles would show up, because they could stabilize the mass of the Higgs boson. The simplest framework for this is just to add supersymmetry to the standard model, and so most string models of the real world were built around this "minimal supersymmetric standard model" (MSSM). It's really the particle physicists who will decide whether the MSSM should lose its status as the leading idea for new physics. If they switch to some "new standard model", then the string theorists will switch too. Whether they are aiming for the SM, the MSSM, or something else, the challenge for string theorists is, first, to find a shape for the extra dimensions which will make the strings behave roughly like the observed particles, and then second, use that model to predict something new. But as things stand, we still only have string models that qualitatively resemble reality. Here is an example from a year ago - "Heterotic Line Bundle Standard Models" . You'll see that the authors talk about constructing "standard models" within string theory. That means that the low-energy states in these string models resemble the particles of the standard model - with the same charges, symmetries, etc. But that's still just the beginning. Then you have to check for finer details. In this paper they concern themselves with further properties like proton decay, the relative heaviness of the different particle generations, and neutrino masses. That already involves a lot of analysis. The ultimate test would be to calculate the exact masses and couplings predicted by a particular model, but that is still too hard for the current state of theory, and there's still work to do just in converging on a set of models which might be right. 
So if supersymmetry doesn't show at the LHC, string theorists would change some of these intermediate criteria by which they judge the plausibility of a model, e.g. if particle physics opinion changed from expecting supersymmetry to show up at LHC energies, to expecting supersymmetry only to show up at the Planck scale. It would mean starting over on certain aspects of these model analyses, because now you have changed the details of your ultimate destination. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/54733', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/21252/']} | jdg_381417 |
stackexchange | llm_judgeable_groundtruth_similarity | 7988487 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I deploy a web application in MVC3, the problem it's that can't the styles related with the layout because required authentication, so what can i do for This??. Do i need another configuration in my web.config? UPDATE So i modified my web.config but still without work, here's my web.config <?xml version="1.0"?><!-- For more information on how to configure your ASP.NET application, please visit http://go.microsoft.com/fwlink/?LinkId=152368 --><configuration> <connectionStrings> <add name="ApplicationServices" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient" /> </connectionStrings> <appSettings> <add key="webpages:Version" value="1.0.0.0"/> <add key="ClientValidationEnabled" value="true"/> <add key="UnobtrusiveJavaScriptEnabled" value="true"/> </appSettings> <location path="Content"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="Scripts"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="js"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <location path="Images"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> <system.web> <compilation debug="true" targetFramework="4.0"> <assemblies> <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Helpers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.WebPages, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> 
<authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="10" /> </authentication> <membership> <providers> <clear/> <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="ApplicationServices" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" /> </providers> </membership> <profile> <providers> <clear/> <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="ApplicationServices" applicationName="/" /> </providers> </profile> <roleManager enabled="false"> <providers> <clear/> <add name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="ApplicationServices" applicationName="/" /> <add name="AspNetWindowsTokenRoleProvider" type="System.Web.Security.WindowsTokenRoleProvider" applicationName="/" /> </providers> </roleManager> <pages> <namespaces> <add namespace="System.Web.Helpers" /> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> <add namespace="System.Web.WebPages"/> </namespaces> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="3.0.0.0" /> </dependentAssembly> </assemblyBinding> </runtime></configuration> update II this is the method of my controller public ActionResult LogOn() { return View(); } // // POST: /Account/LogOn [HttpPost] public 
ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { clsEncriptacion _clsEncriptacion = new clsEncriptacion(); string encripPassword = _clsEncriptacion.Encriptar(model.Password); Usuario user = (from U in _db.Usuario where ((U.Nombre == model.UserName) && (U.Password == encripPassword)) select U).FirstOrDefault(); if (user != null) { Session["IdPerfil"] = user.Perfil_FK.Nivel.ToString(); Session["IDUsuario"] = user.ID; FormsAuthentication.RedirectFromLoginPage(user.Nombre, true); return RedirectToAction("Index", "Home"); } else { Session["IdPerfil"] = string.Empty; ModelState.AddModelError("", "Login o password incorrecto. Por favor, intente nuevamente."); } } // If we got this far, something failed, redisplay form return View(model); } Do i need another configuration in my IIS??
Now provide the response and nothing else.
| public class Statistics { double[] data; int size; public Statistics(double[] data) { this.data = data; size = data.length; } double getMean() { double sum = 0.0; for(double a : data) sum += a; return sum/size; } double getVariance() { double mean = getMean(); double temp = 0; for(double a :data) temp += (a-mean)*(a-mean); return temp/(size-1); } double getStdDev() { return Math.sqrt(getVariance()); } public double median() { Arrays.sort(data); if (data.length % 2 == 0) return (data[(data.length / 2) - 1] + data[data.length / 2]) / 2.0; return data[data.length / 2]; }} | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7988487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/382307/']} | jdg_381418 |
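A self-contained usage sketch of the `Statistics` class from the answer above (the class body is reproduced so the snippet compiles on its own; the wrapper class `Main` and the sample data are my own choices):

```java
import java.util.Arrays;

// Copy of the Statistics class from the answer, reproduced so this
// snippet compiles on its own.
class Statistics {
    double[] data;
    int size;

    public Statistics(double[] data) {
        this.data = data;
        size = data.length;
    }

    double getMean() {
        double sum = 0.0;
        for (double a : data) sum += a;
        return sum / size;
    }

    double getVariance() {
        double mean = getMean();
        double temp = 0;
        for (double a : data) temp += (a - mean) * (a - mean);
        return temp / (size - 1); // sample variance (divides by n - 1)
    }

    double getStdDev() {
        return Math.sqrt(getVariance());
    }

    public double median() {
        Arrays.sort(data); // note: sorts the caller's array in place
        if (data.length % 2 == 0)
            return (data[(data.length / 2) - 1] + data[data.length / 2]) / 2.0;
        return data[data.length / 2];
    }
}

public class Main {
    public static void main(String[] args) {
        Statistics stats =
            new Statistics(new double[] {2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0});
        System.out.println("mean=" + stats.getMean());
        // Round to two decimals just for stable printed output.
        System.out.println("stddev=" + Math.round(stats.getStdDev() * 100.0) / 100.0);
        System.out.println("median=" + stats.median());
    }
}
```

One design point worth noticing when reusing this class: `median()` sorts the underlying array in place, so if the caller still needs the original ordering, pass in a copy.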
stackexchange | llm_judgeable_groundtruth_similarity | 2259270 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to save to disk a little dict object whose keys are of the type str and values are ints, and then recover it. Something like this: {'juanjo': 2, 'pedro':99, 'other': 333} What is the best option and why? Serialize it with pickle or with simplejson? I am using Python 2.6.
Now provide the response and nothing else.
| If you do not have any interoperability requirements (e.g. you are just going to use the data with Python) and a binary format is fine, go with cPickle which gives you really fast Python object serialization. If you want interoperability or you want a text format to store your data, go with JSON (or some other appropriate format depending on your constraints). | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2259270', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/157519/']} | jdg_381419 |
stackexchange | llm_judgeable_groundtruth_similarity | 2893888 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on an iPhone project where I would like to retrieve an object from an NSMutableArray, remove the object from the array and then use it later. The code looks something like this: NSMutableArray * array;// fill the arrayNSObject * obj = [array lastObject];[array removeLastObject];// do something with obj (in the same function) array is the only entity with a retain on the object that is being accessed. Is this safe to do? I would assume that this would only work if lastObject autoreleased the object, but I can't figure out whether it does.
Now provide the response and nothing else.
| Why not call [array removeLastObject] at the end of your function? That's one fewer release/retain. It might make your code more readable/less cluttered. For the record the Apple documentation : Like NSArray, instances of NSMutableArray maintain strong references to their contents. If you do not use garbage collection, when you add an object to an array, the object receives a retain message. When an object is removed from a mutable array, it receives a release message. If there are no further references to the object, this means that the object is deallocated. If your program keeps a reference to such an object, the reference will become invalid unless you send the object a retain message before it’s removed from the array. For example, if anObject is not retained before it is removed from the array, the third statement below could result in a runtime error: id anObject = [[anArray objectAtIndex:0] retain]; [anArray removeObjectAtIndex:0]; [anObject someMessage]; | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2893888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/298717/']} | jdg_381420 |
stackexchange | llm_judgeable_groundtruth_similarity | 40557 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is my first post on SE, forgive any blunders. I am looking for an example of a function $f:\mathbb{R} \to \mathbb{R}$ which is continuous everywhere but has uncountably many roots ($x$ such that $f(x) = 0$). I am not looking for trivial examples such as $f = 0$ for all $x$. This is not a homework problem. I'd prefer a nudge in the right direction rather than an explicit example. Thanks! Edit: Thanks all! I've constructed my example with your help.
Now provide the response and nothing else.
| The set of roots of a continuous function is always a closed subset of $\mathbb{R}$: $\{0\}$ is closed, thus $f^{-1}(\{0\})$ is closed too. If you have a closed set $S$, you can define a function $f : x \mapsto d(x,S)$, which is continuous and whose set of roots is exactly $S$: you can make a continuous function have any closed set as its set of roots. Therefore you only have to look for closed sets that are uncountable. | {} | {'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/40557', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/11203/']} | jdg_381421 |
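A fully explicit instance of the $d(x,S)$ construction above, taking the closed uncountable set $S = [0,1]$ (my own choice of $S$):

```latex
f(x) = d\bigl(x,[0,1]\bigr) =
\begin{cases}
-x,    & x < 0, \\
0,     & 0 \le x \le 1, \\
x - 1, & x > 1,
\end{cases}
```

which is continuous on all of $\mathbb{R}$ and vanishes exactly on the uncountable set $[0,1]$. Taking $S$ to be the Cantor set instead gives a root set that is uncountable yet contains no interval.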
stackexchange | llm_judgeable_groundtruth_similarity | 27990451 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to generate Order instances using the Stream API. I have a factory function that creates the order, and a DoubleStream is used to initialize the amount of the order. private DoubleStream doubleStream = new Random().doubles(50.0, 200.0);private Order createOrder() { return new Order(doubleStream.findFirst().getAsDouble());}@Testpublic void test() {Stream<Order> orderStream = Stream.generate(() -> { return createOrder();});orderStream.limit(10).forEach(System.out::println); If I initialize the Order instance using a literal (1.0), this works fine. When I use the doubleStream to create a random amount, the exception is thrown.
Now provide the response and nothing else.
| The answer is in the javadoc of Stream (emphases mine): A stream should be operated on (invoking an intermediate or terminal stream operation) only once . This rules out, for example, "forked" streams, where the same source feeds two or more pipelines, or multiple traversals of the same stream. A stream implementation may throw IllegalStateException if it detects that the stream is being reused . And in your code, you do use the stream twice (once in createOrder() and again when you call .limit().forEach()). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27990451', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1684269/']} | jdg_381422 |
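A minimal, self-contained reproduction of the "stream operated upon twice" failure, plus one common fix — keeping the Random source around and drawing a fresh value per order instead of storing a stream (the class name `Main` and the variable names are my own):

```java
import java.util.Random;
import java.util.stream.DoubleStream;

public class Main {
    public static void main(String[] args) {
        DoubleStream doubles = new Random().doubles(50.0, 200.0);

        // First terminal operation on the stream: fine.
        doubles.findFirst();

        // Second terminal operation on the SAME stream: IllegalStateException.
        try {
            doubles.findFirst();
            System.out.println("no exception (unexpected)");
        } catch (IllegalStateException e) {
            System.out.println("reuse failed: " + e.getMessage());
        }

        // Fix: keep the Random source and draw a fresh value each time,
        // so no stream pipeline is ever consumed twice.
        Random rng = new Random();
        double amount = 50.0 + rng.nextDouble() * 150.0; // uniform in [50, 200)
        System.out.println("amount in range: " + (amount >= 50.0 && amount < 200.0));
    }
}
```

Another option in the question's shape is to build a brand-new `DoubleStream` inside `createOrder()` on every call, but holding onto the `Random` directly is simpler.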
stackexchange | llm_judgeable_groundtruth_similarity | 297364 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For the following information: $$\frac{dx}{dt} = -10x+3y$$ $$\frac{dy}{dt} = 2$$ How do I convert this to a difference equation? I want to use a simple discretisation technique (first order) rather than a higher-order one (4th-order Runge-Kutta). I prefer the usage of Euler's forward method. I tried using Euler's formula but I really can't get my head around solving this; I'm stuck at the first step. I basically want to convert the differential equation to a difference equation to plug in t values and interpolate future values to create a graph at continuous time. I would appreciate a sound reply with an informative step-by-step solution; I wish to learn this rather than just see the final outcome. Regards. EDIT The equations here consider Newton's temperature rules, where x is the room temperature and y is the heater temperature. dy/dt is the rate of change of the heater's temperature with time when it is turned on. dx/dt is the rate of change of the room's temperature according to the difference between room and heater temperatures. -- I will actually add this in the question.
Now provide the response and nothing else.
| To avoid subscripts, let $H_1 = P,\; H_2 = Q$. Closure is addressed in Hint 2. $P$ and $Q$ are subgroups of a group $G$. Prove that $P \cap Q$ is a subgroup. Step 1: You know that $P$ and $Q$ are subgroups of $G$. That means they each contain the identity element, say $e$, of $G$. So what can you conclude about $P\cap Q$? If $e \in P$ and $e \in Q$? (Just unpack what that means for their intersection.) In showing $e \in P\cap Q$, you also show $P\cap Q$ is not empty. Step 2: You know that $P, Q$ are subgroups of $G$. So they are both closed under the group operation of $G$. If $a, b \in P\cap Q$, then $a, b \in P$ and $a, b \in Q$. So $ab \in P$ and $ab \in Q$. So what can you conclude about $ab$ with respect to $P\cap Q$? This is about proving closure under the group operation of $G$. Step 3: You can use similar arguments to show that for any element $c \in P\cap Q$, $c^{-1} \in P\cap Q$. If $c \in P\cap Q$, then $c \in P$ and $c\in Q$. Since $P$ and $Q$ are subgroups, each containing $c$, it follows that $c^{-1} \in P$ AND $c^{-1} \in Q$. Hence $c^{-1} \in P\cap Q$. That establishes that $P\cap Q$ is closed under inverses. Once you've completed each step above, what can you conclude about $P\cap Q$ in $G$? | {} | {'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/297364', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8083/']} | jdg_381423 |
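The three steps above condense to the standard subgroup criterion; in symbols, for subgroups $P, Q \le G$:

```latex
e \in P \cap Q \neq \varnothing, \qquad
a, b \in P \cap Q \;\Rightarrow\; ab \in P \cap Q, \qquad
c \in P \cap Q \;\Rightarrow\; c^{-1} \in P \cap Q,
```

hence $P \cap Q \le G$. Equivalently, the one-step subgroup test suffices: $P \cap Q$ is nonempty, and $a, b \in P \cap Q \Rightarrow ab^{-1} \in P \cap Q$.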
stackexchange | llm_judgeable_groundtruth_similarity | 67563862 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having a very weird issue with my Gradle config and JUnit, I have tried updating to 5.7+ but I'm getting the following exceptions: org/junit/jupiter/api/extension/ScriptEvaluationExceptionjava.lang.NoClassDefFoundError: org/junit/jupiter/api/extension/ScriptEvaluationException My Gradle version is 6.8.3 by the way. On the other hand, Here is the exception that I got with the current config below, I have tried a lot of combinations of versions and libraries, hut seems that none of them work: Caused by: org.junit.platform.commons.JUnitException: TestEngine with ID 'junit-vintage' failed to discover tests at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discoverEngineRoot(EngineDiscoveryOrchestrator.java:111) at org.junit.platform.launcher.core.EngineDiscoveryOrchestrator.discover(EngineDiscoveryOrchestrator.java:85) at org.junit.platform.launcher.core.DefaultLauncher.discover(DefaultLauncher.java:92) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.processAllTestClasses(JUnitPlatformTestClassProcessor.java:99) at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.access$000(JUnitPlatformTestClassProcessor.java:79) at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor.stop(JUnitPlatformTestClassProcessor.java:75) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:61) ... 
25 moreCaused by: org.junit.platform.commons.JUnitException: Failed to parse version of junit:junit: 4.13.2 at org.junit.vintage.engine.JUnit4VersionCheck.parseVersion(JUnit4VersionCheck.java:54) at org.junit.vintage.engine.JUnit4VersionCheck.checkSupported(JUnit4VersionCheck.java:37) at org.junit.vintage.engine.JUnit4VersionCheck.checkSupported(JUnit4VersionCheck.java:32) at org.junit.vintage.engine.VintageTestEngine.discover(VintageTestEngine.java:62) plugins { id 'org.springframework.boot' version '2.4.5' id 'io.spring.dependency-management' version '1.0.11.RELEASE' id 'java' id 'com.palantir.docker' version '0.26.0'}group = 'com.example'version = '0.0.1-SNAPSHOT'sourceCompatibility = '11'configurations { compileOnly { extendsFrom annotationProcessor }}repositories { mavenCentral()}docker { name "{project.name}" dockerfile file('src/docker/Dockerfile') copySpec.from( jar ).rename(".*", "crm_lead_validator.jar") buildArgs( ['JAR_FILE':"crm_lead_validator.jar"] )}dependencies { implementation 'org.springframework.boot:spring-boot-starter-data-mongodb' implementation 'org.springframework.boot:spring-boot-starter-web' implementation 'org.springframework.boot:spring-boot-starter-webflux' implementation 'org.jeasy:easy-random-core:5.0.0' implementation 'io.github.openfeign:feign-jackson:9.3.1' implementation 'com.github.tomakehurst:wiremock:2.25.1' implementation 'org.mock-server:mockserver-netty:3.10.8' implementation 'org.mock-server:mockserver-client-java:3.10.8' implementation 'com.fasterxml.jackson.core:jackson-databind:2.12.3' implementation 'com.fasterxml.jackson.core:jackson-annotations:2.12.3' implementation 'com.fasterxml.jackson.core:jackson-core:2.12.3' implementation 'org.junit.jupiter:junit-jupiter-engine:5.5.2' implementation 'io.github.openfeign:feign-okhttp:10.11' implementation 'io.github.openfeign:feign-slf4j:10.11' implementation 'io.github.openfeign:feign-gson:10.11' compileOnly 'org.projectlombok:lombok' developmentOnly 
'org.springframework.boot:spring-boot-devtools' annotationProcessor 'org.projectlombok:lombok' testImplementation 'org.springframework.boot:spring-boot-starter-test' testImplementation 'io.projectreactor:reactor-test'// https://mvnrepository.com/artifact/org.junit.platform/junit-platform-launcher testImplementation group: 'org.junit.platform', name: 'junit-platform-launcher', version: '1.7.2' testImplementation "org.junit.jupiter:junit-jupiter-engine:5.6.2" testImplementation "org.junit.vintage:junit-vintage-engine:5.6.2" // https://mvnrepository.com/artifact/org.junit.jupiter/junit-jupiter-api testImplementation group: 'org.junit.jupiter', name: 'junit-jupiter-api', version: '5.6.2'}test { useJUnitPlatform()} Does anyone know how to make it work? Thank you for your help.
Now provide the response and nothing else.
| Due to a bug in JUnit, junit-vintage-engine:5.6.2 did not support JUnit 4.13.2 . It only supported 4.12 or 4.13 . So that's why you see that exception. That bug was fixed in JUnit 5.6.3. See https://github.com/junit-team/junit5/issues/2198 for details. You are also mixing versions of JUnit 5 artifacts, which you should try to avoid. For example, if you want to use JUnit Platform 1.7.2, I would advise that you use JUnit Jupiter 5.7.2 and JUnit Vintage 5.7.2. For details on how to correctly configure the JUnit versions with Spring Boot, see Gradle 5 JUnit BOM and Spring Boot Incorrect Versions . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/67563862', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15049074/']} | jdg_381424 |
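One common way (not the only one) to keep the JUnit artifacts aligned in the question's Gradle file is to import the JUnit BOM, using the 5.7.2/1.7.2 pairing the answer recommends — the `org.junit:junit-bom` coordinates are the standard published BOM; a sketch:

```groovy
dependencies {
    // Import the JUnit BOM so every junit-* artifact resolves to one
    // consistent release (Jupiter/Vintage 5.7.2 <-> Platform 1.7.2).
    testImplementation platform('org.junit:junit-bom:5.7.2')

    // No versions here: the BOM supplies them.
    testImplementation 'org.junit.jupiter:junit-jupiter'
    testRuntimeOnly 'org.junit.vintage:junit-vintage-engine'
}
```

With this in place, the explicit `5.5.2`/`5.6.2` junit-* entries from the original build file should be removed so nothing pins an older release.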
stackexchange | llm_judgeable_groundtruth_similarity | 32952377 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to host multiple websites in a single Azure WebApp rather than having multiple web apps each hosting a site. Each of these sites is rather small, but in order to meet the requirements of the SLA with Azure, we need to have the server scaled with more than one instance for our Production environment, thus the reasoning for combining sites. After researching this topic, I have got the following setup with our Azure account. Created the proper CNAME DNS records in our zone manager (complete and works). Setup the Azure WebApp to respond to the subdomains (complete and works). Setup a virtual directory for the second web application (complete and works) At this point, both web applications function correctly and both subdomains are pointing at our Azure instance. We now have the following setup: both www.mydomain.com and app.mydomain.com bring up the root application that I deployed. going to www.mydomain.com/app2 and app.mydomain.com/app2 bring up the second application that I deployed to the virtual directory What I would like to achieve: Going to app.mydomain.com would bring up the application in the virtual directory. Going to www.mydomain.com would bring up the application at the root of the azure instance. However, what I cannot figure out is how to map a subdomain to a specific virtual directory. I have tried to update the Site URL to be the subdomain I want the application to respond to, however, the subdomain still brings up whatever I have in the root of the WebApp deployment. Should I have some HttpHandler that sits in the site root and directs traffic to the proper virtual directory? Is there a setting in the portal that I am missing? Previously, we did this with Web Roles and tinkering with the ServiceDefinition file, but the tool sets for the Azure Web Apps in regards to publishing, integration with Source Control, etc seem to be a bit further along.
Now provide the response and nothing else.
| The answer posted by RuslanY will work (with some slight modifications to the rules) however, after understanding more of the Azure portal and Web App configurations, it is not needed to host multiple sites within a single Web App (it's technically multiple web apps all sharing the resource plan you define, such as 2 instances of Standard Level 0 (S0))*. As of today's Azure service offerings, the following is true. When you create a new Web App, you specify the "App Service Plan" that the app falls into. If you have an App Service plan, let's say Standard with 2 instances, any Web App you deploy to that App Service plan shares those resources with other web apps in the same service plan, meaning you are not paying additional costs to host the additional web app if it is in the same App Service plan. I had assumed each web app was its own set of resources (it can be, but doesn't have to be). Given this, to accomplish what I need, I simply create a web app for each sub domain and place them all into the same App Service plan. I am now hosting multiple sites, not paying for 2 servers per site (what I wanted to avoid) and I don't have to use URL rewrites or HTTP Handlers. I hope this write-up helps others understand the structure of the Azure Web Apps a little bit better. The current online documentation, from what I can tell, doesn't make this exactly clear. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32952377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130387/']} | jdg_381425 |
stackexchange | llm_judgeable_groundtruth_similarity | 81202 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does the Linux kernel generate a segfault on stack overflow? This can make debugging very awkward when alloca in C or Fortran's creation of temporary arrays overflows. Surely it must be possible for the runtime to produce a more helpful error.
Now provide the response and nothing else.
| You can actually catch the condition for a stack overflow using signal handlers. To do this, you must do two things: Set up a signal handler for SIGSEGV (the segfault) using sigaction; to do this, set the SA_ONSTACK flag. This instructs the kernel to use an alternative stack when delivering the signal. Call sigaltstack() to set up the alternate stack that the handler for SIGSEGV will use. Then when you overflow the stack, the kernel will switch to your alternate stack before delivering the signal. Once in your signal handler, you can examine the address that caused the fault and determine if it was a stack overflow, or a regular fault. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/81202', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_381426 |
stackexchange | llm_judgeable_groundtruth_similarity | 1187089 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I check if a class already exists in Ruby? My code is: puts "enter the name of the Class to see if it exists" nameofclass=gets.chomp eval (" #{nameofclass}...... Not sure what to write here") I was thinking of using: eval "#{nameofclass}ancestors. ....."
Now provide the response and nothing else.
| perhaps you can do it with defined? eg: if defined?(MyClassName) == 'constant' && MyClassName.class == Class puts "its a class" end Note: the Class check is required, for example: Hello = 1 puts defined?(Hello) == 'constant' # returns true To answer the original question: puts "enter the name of the Class to see if it exists"nameofclass=gets.chompeval("defined?(#{nameofclass}) == 'constant' and #{nameofclass}.class == Class") | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1187089', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']} | jdg_381427 |
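For completeness, the same check can be done without eval at all, which avoids running user input as code. This is a sketch (the helper name class_exists? is my own) built on const_defined?/const_get:

```ruby
# Look the name up in the constant table instead of eval-ing user input.
def class_exists?(name)
  Object.const_defined?(name) && Object.const_get(name).is_a?(Class)
rescue NameError
  false # e.g. an invalid constant name was given
end

Hello = 1  # defined, but a plain constant, not a class

puts class_exists?("String")   # true
puts class_exists?("NoSuchC")  # false
puts class_exists?("Hello")    # false
```

The rescue covers names that are not even valid constants, so the helper can be fed gets.chomp directly.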
stackexchange | llm_judgeable_groundtruth_similarity | 16939794 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to copy a remote file (image PNG, GIF, JPG ...) to my server. I use Guzzle since I sometimes get 404 with copy() even if the file exists and I also need to do a basic auth. This script is within a long script launched in command triggered by a cron job.I'm pretty new to Guzzle and I successfully copy the image but my files have wrong mime type. I must be doing something wrong here. Please suggest me a good way to do this (including checking success/failure of copy and mime type check). If file has no mime type I would pop an error with details informations. Here is the code: $remoteFilePath = 'http://example.com/path/to/file.jpg';$localFilePath = '/home/www/path/to/file.jpg';try { $client = new Guzzle\Http\Client(); $response = $client->send($client->get($remoteFilePath)->setAuth('login', 'password')); if ($response->getBody()->isReadable()) { if ($response->getStatusCode()==200) { // is this the proper way to retrieve mime type? //$mime = array_shift(array_values($response->getHeaders()->get('Content-Type'))); file_put_contents ($localFilePath , $response->getBody()->getStream()); return true; } }} catch (Exception $e) { return $e->getMessage();} When I do this my mime type is set to application/x-empty Also it looks like when status is different from 200 Guzzle will automatically throw an exception. How can I stop this behaviour and check status myself so I can custom error message? EDIT: This was for Guzzle 3.XNow this is how you can do it using Guzzle v 4.X (works as well with Guzzle 6) $client = new \GuzzleHttp\Client();$client->get( 'http://path.to/remote.file', [ 'headers' => ['key'=>'value'], 'query' => ['param'=>'value'], 'auth' => ['username', 'password'], 'save_to' => '/path/to/local.file', ]); Or using Guzzle stream: use GuzzleHttp\Stream;$original = Stream\create(fopen('https://path.to/remote.file', 'r')); $local = Stream\create(fopen('/path/to/local.file', 'w')); $local->write($original->getContents()); This looks great. 
Is there a better/proper solution when using Guzzle 4?
Now provide the response and nothing else.
| Your code can be simplified a great deal. My example code below will stream the body of the response directly to the filesystem. <?phpfunction copyRemote($fromUrl, $toFile) { try { $client = new Guzzle\Http\Client(); $response = $client->get($fromUrl) ->setAuth('login', 'password') // in case your resource is under protection ->setResponseBody($toFile) ->send(); return true; } catch (Exception $e) { // Log the error or something return false; }} When I do this my mime type is set to application/x-empty A filesystem mimetype? Also it looks like when status is different from 200 Guzzle will automatically throw an exception. How can I stop this behaviour and check status myself so I can custom error message? Guzzle will throw an exception for bad responses like 4xx and 5xx. No need to disable this. Just catch an exception and deal with the error there. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16939794', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/368248/']} | jdg_381428 |
stackexchange | llm_judgeable_groundtruth_similarity | 287993 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was looking at the INA225 datasheet and wondering how to appropriately design the PCB for this device. My question is rather simple: the shunt resistor will be far away from the microcontroller and there will be switching regulators along the way, and I want to avoid using an external ADC, so I have two options: 1) Place the INA225 close to the shunt resistor and route the analog output to the microcontroller. 2) Place the INA225 close to the microcontroller and route the Kelvin connection like one would route a differential pair. I'd like to know which one is better, since the signals may pick up a lot of noise along the way. If possible, please provide explanations on why one approach is preferable over the other.
Now provide the response and nothing else.
| The standard rule of thumb when amplifying sensors is to put the op amp as close as possible to the sensor, because the op amp has a lower output impedance than the sensor and so a given noise power will produce less noise voltage. With a shunt resistor, though, the output impedance of the shunt is likely lower than that of the op amp. This would argue for the long trace being between the shunt and op amp. However, the signal amplitude from the shunt is also much smaller, or else you wouldn't need an amplifier. Since any noise on the amplifier input will be multiplied by the gain, the tradeoff becomes a matter of comparing the impedance ratio with the gain. All the typical advice about reducing noise still applies, of course, and will likely matter more than the amplifier placement: low-pass filtering, shielding, attention to grounding and current loops, etc. Your noise is mostly going to depend on the bandwidth you need for the current measurement. | {} | {'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/287993', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/90657/']} | jdg_381429 |
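To make the "impedance ratio versus gain" tradeoff concrete, here is a rough back-of-envelope sketch. Every component value below is my own illustrative assumption, not from the question or the INA225 datasheet; the only point is the scaling:

```python
import math

# Assume a fixed interfering noise power P couples onto whichever long trace
# you route; the induced voltage scales like sqrt(P * R) for impedance R.
P = 1e-12          # coupled noise power in watts (illustrative)
R_shunt = 0.01     # shunt resistance, ohms (low impedance, tiny signal)
R_out = 10.0       # op-amp output impedance seen by the trace, ohms (illustrative)
gain = 25          # one of the INA225's selectable gains

# Option 1: amplifier at the shunt, long trace carries the amplified output.
v_err_amp_at_shunt = math.sqrt(P * R_out)

# Option 2: long Kelvin trace at shunt impedance, noise then multiplied by gain.
v_err_amp_at_mcu = gain * math.sqrt(P * R_shunt)

# The P term cancels: the comparison reduces to gain * sqrt(R_shunt / R_out).
ratio = v_err_amp_at_mcu / v_err_amp_at_shunt
print(f"error ratio (option 2 / option 1): {ratio:.2f}")
```

With these made-up numbers the two layouts land within a factor of about one of each other, which is exactly why the answer frames the decision as comparing the impedance ratio with the gain rather than quoting a universal rule.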
stackexchange | llm_judgeable_groundtruth_similarity | 11224131 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a NSPopUpButton which contains a list of colors. In front of the color title, I need to display a small box of the same color ( maybe an image of same color ). So, I was thinking that if I can create an NSImage using NSColor ( which will be already present ), then I can use the -[NSMenuItem setImage:] method to display the image in front of the color title in the popup button. So, how can I create an NSImage using NSColor ? Any other approaches to solve the problem are also welcome. :)
Now provide the response and nothing else.
| A simple category method will do this @interface NSImage (ImageAdditions)+(NSImage *)swatchWithColor:(NSColor *)color size:(NSSize)size;@end@implementation NSImage (ImageAdditions)+(NSImage *)swatchWithColor:(NSColor *)color size:(NSSize)size{ NSImage *image = [[[NSImage alloc] initWithSize:size] autorelease]; [image lockFocus]; [color drawSwatchInRect:NSMakeRect(0, 0, size.width, size.height)]; [image unlockFocus]; return image; }@end [EDIT] remove deprecated API | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11224131', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2656352/']} | jdg_381430 |
stackexchange | llm_judgeable_groundtruth_similarity | 56664328 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am learning Go by building a simple API interface for a web server. I want to return a simple message in JSON, when a default route is hit. So far, reading online, this is the easiest way to return a literal JSON string, and encode it and send it to the user. func GetDefault(c *gin.Context) { jsonData := []byte(`{"msg":"this worked"}`) var v interface{} json.Unmarshal(jsonData, &v) data := v.(map[string]interface{}) c.JSON(http.StatusOK,data)} Is this the most efficient / fastest way to do it? in node.js and express, I would do something like: return res.status(200).json({"msg":"this worked"}); Whats the best way to do this in Go + Gin?
Now provide the response and nothing else.
| One option is to use Context.Data() where you provide the data to send (along with the content type): func GetDefault(c *gin.Context) { jsonData := []byte(`{"msg":"this worked"}`) c.Data(http.StatusOK, "application/json", jsonData)} You may also use a constant for the content type: func GetDefault(c *gin.Context) { jsonData := []byte(`{"msg":"this worked"}`) c.Data(http.StatusOK, gin.MIMEJSON, jsonData)} If your data is available as a string value and is big, you can avoid converting it to []byte if you use Context.DataFromReader() : func GetDefault(c *gin.Context) { jsonStr := `{"msg":"this worked"}` c.DataFromReader(http.StatusOK, int64(len(jsonStr)), gin.MIMEJSON, strings.NewReader(jsonStr), nil)} This solution also works if you have your JSON as an io.Reader , e.g. an os.File . | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56664328', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2715121/']} | jdg_381431 |
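For comparison only — this drops Gin entirely, so it is a rough sketch rather than a drop-in replacement — the same "send a pre-encoded JSON literal" idea needs nothing beyond the standard library, and httptest keeps it self-contained:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// getDefault writes a pre-encoded JSON literal without re-marshalling it,
// the stdlib analogue of c.Data(http.StatusOK, "application/json", jsonData).
func getDefault(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	w.Write([]byte(`{"msg":"this worked"}`))
}

func main() {
	// Exercise the handler in-process instead of binding a real port.
	rec := httptest.NewRecorder()
	getDefault(rec, httptest.NewRequest("GET", "/", nil))
	fmt.Println(rec.Code, rec.Header().Get("Content-Type"))
	fmt.Println(strings.TrimSpace(rec.Body.String()))
}
```

Either way, the key point from the answer holds: when the payload is already valid JSON text, skipping the unmarshal/marshal round trip is both simpler and cheaper.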
stackexchange | llm_judgeable_groundtruth_similarity | 20852664 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I already have a working program, but the only thing that doesn't work is the decrypt_file() function I have. I can still copy the encrypted text from the file and put it in my decrypt() function and have it work, but when I try to use my supposed-to-be handy decrypt_file() function it throws an error. Now I know 99.999% sure that my encrypt() and decrypt() functions are fine, but there is something with the bytes and strings conversion when I read and encode the text file that throws an error; I just can't find the hangup. Please help! My Program: from Crypto import Randomfrom Crypto.Cipher import AESdef encrypt(message, key=None, key_size=256): def pad(s): x = AES.block_size - len(s) % AES.block_size return s + ((bytes([x])) * x) padded_message = pad(message) if key is None: key = Random.new().read(key_size // 8) iv = Random.new().read(AES.block_size) cipher = AES.new(key, AES.MODE_CBC, iv) return iv + cipher.encrypt(padded_message)def decrypt(ciphertext, key): unpad = lambda s: s[:-s[-1]] iv = ciphertext[:AES.block_size] cipher = AES.new(key, AES.MODE_CBC, iv) plaintext = unpad(cipher.decrypt(ciphertext))[AES.block_size:] return plaintextdef encrypt_file(file_name, key): f = open(file_name, 'r') plaintext = f.read() plaintext = plaintext.encode('utf-8') enc = encrypt(plaintext, key) f.close() f = open(file_name, 'w') f.write(str(enc)) f.close()def decrypt_file(file_name, key): def pad(s): x = AES.block_size - len(s) % AES.block_size return s + ((str(bytes([x]))) * x) f = open(file_name, 'r') plaintext = f.read() x = AES.block_size - len(plaintext) % AES.block_size plaintext += ((bytes([x]))) * x dec = decrypt(plaintext, key) f.close() f = open(file_name, 'w') f.write(str(dec)) f.close()key = b'\xbf\xc0\x85)\x10nc\x94\x02)j\xdf\xcb\xc4\x94\x9d(\x9e[EX\xc8\xd5\xbfI{\xa2$\x05(\xd5\x18'encrypt_file('to_enc.txt', key) The text file I encrypted: b';c\xb0\xe6Wv5!\xa3\xdd\xf0\xb1\xfd2\x90B\x10\xdf\x00\x82\x83\x9d\xbc2\x91\xa7i M\x13\xdc\xa7' My error when attempting 
decrypt_file : Traceback (most recent call last): File "C:\Python33\testing\test\crypto.py", line 56, in <module> decrypt_file('to_enc.txt', key) File "C:\Python33\testing\test\crypto.py", line 45, in decrypt_file plaintext += ((bytes([x]))) * xTypeError: Can't convert 'bytes' object to str implicitly[Finished in 1.5s] When I replace line 45 with: plaintext += ((str(bytes([x])))) * x , this is the error I get: Traceback (most recent call last): File "C:\Python33\testing\test\crypto.py", line 56, in <module> decrypt_file('to_enc.txt', key) File "C:\Python33\testing\test\crypto.py", line 46, in decrypt_file dec = decrypt(plaintext, key) File "C:\Python33\testing\test\crypto.py", line 23, in decrypt plaintext = unpad(cipher.decrypt(ciphertext))[AES.block_size:] File "C:\Python33\lib\site-packages\Crypto\Cipher\blockalgo.py", line 295, in decrypt return self._cipher.decrypt(ciphertext)ValueError: Input strings must be a multiple of 16 in length[Finished in 1.4s with exit code 1]
Now provide the response and nothing else.
| I took a closer look at your code, and saw that there were several problems with it. The first one is that the crypto functions work with bytes, not text. So it's better to just keep the data as a byte string. This is done simply by putting a 'b' character in the mode. This way you can get rid of all the encoding and bytes conversion you were trying to do. I rewrote the whole code also using newer Python idioms. Here it is. #!/usr/bin/python3from Crypto import Randomfrom Crypto.Cipher import AESdef pad(s): return s + b"\0" * (AES.block_size - len(s) % AES.block_size)def encrypt(message, key, key_size=256): message = pad(message) iv = Random.new().read(AES.block_size) cipher = AES.new(key, AES.MODE_CBC, iv) return iv + cipher.encrypt(message)def decrypt(ciphertext, key): iv = ciphertext[:AES.block_size] cipher = AES.new(key, AES.MODE_CBC, iv) plaintext = cipher.decrypt(ciphertext[AES.block_size:]) return plaintext.rstrip(b"\0")def encrypt_file(file_name, key): with open(file_name, 'rb') as fo: plaintext = fo.read() enc = encrypt(plaintext, key) with open(file_name + ".enc", 'wb') as fo: fo.write(enc)def decrypt_file(file_name, key): with open(file_name, 'rb') as fo: ciphertext = fo.read() dec = decrypt(ciphertext, key) with open(file_name[:-4], 'wb') as fo: fo.write(dec)key = b'\xbf\xc0\x85)\x10nc\x94\x02)j\xdf\xcb\xc4\x94\x9d(\x9e[EX\xc8\xd5\xbfI{\xa2$\x05(\xd5\x18'encrypt_file('to_enc.txt', key)#decrypt_file('to_enc.txt.enc', key) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20852664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3148109/']} | jdg_381432 |
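One caveat worth spelling out about the zero-padding in that answer: rstrip(b"\0") also eats any NUL bytes that were genuinely at the end of the plaintext. A stdlib-only sketch of just the pad/unpad pair (no PyCrypto needed) makes both the behavior and the limitation easy to see:

```python
BLOCK = 16  # AES.block_size

def pad(s: bytes) -> bytes:
    # Same formula as the answer: always pads up to a multiple of BLOCK.
    return s + b"\0" * (BLOCK - len(s) % BLOCK)

def unpad(s: bytes) -> bytes:
    # Strips ALL trailing NULs, including any the plaintext itself ended with.
    return s.rstrip(b"\0")

msg = b"attack at dawn"
assert len(pad(msg)) % BLOCK == 0
assert unpad(pad(msg)) == msg

# Limitation: a plaintext ending in NUL bytes does not round-trip.
assert unpad(pad(b"ends in nul\0")) == b"ends in nul"
```

For text files this is harmless; for arbitrary binary data a length-preserving scheme such as PKCS#7 padding is the usual fix.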
stackexchange | llm_judgeable_groundtruth_similarity | 279515 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Control group with two replicates and two treated groups, each with two replicates. I want to know how I can identify the samples that are significantly different between control and treated 1 (higher expression) while significantly different between control and treated 2 (lower expression). This is an example dataset: df<-structure(list(C1 = c(0.003926348, 0.001642442, 6.72e-05, 0.000314789, 0.00031372, 0.000196342, 0.01318432, 8.86e-05, 0.005671017, 0.003616196, 0.026635645, 0.001136402, 0.000161111, 0.005777738, 0.000145104, 0.000996546, 4.27e-05, 0.000114159, 0.001152384, 0.002860251, 0.000284873), C2 = c(0.003901373, 0.001526195, 6.3e-05, 0.000387266, 0.000312458, 0.000256647, 0.012489205, 0.00013071, 0.005196136, 0.003059834, 0.024624562, 0.001025486, 0.000144964, 0.005659078, 0.000105755, 0.000844871, 5.88e-05, 0.000118831, 0.000999354, 0.002153167, 0.000214486), T1 = c(0.003646894, 0.001484503, 4.93e-05, 0.00036715, 0.000333378, 0.000244199, 0.010286787, 6.48e-05, 0.006180042, 0.00387491, 0.025428464, 0.001075376, 0.000122088, 0.005448152, 0.000103301, 0.000974826, 4.32e-05, 0.000109876, 0.001030364, 0.002777244, 0.000221169), T2 = c(0.00050388, 0.001135969, 0.000113829, 2.14e-06, 0.00010293, 0.000315704, 0.01160593, 8.46e-05, 0.004495437, 0.003062559, 0.018662663, 0.002096675, 0.000214814, 0.002177849, 8.61e-05, 0.001057254, 3.27e-05, 0.000115822, 0.008133257, 0.021657018, 0.000261339), G1 = c(0.001496712, 0.001640965, 0.000129124, 3.02e-06, 0.000122839, 0.000305686, 0.01378774, 0.000199637, 0.00534668, 0.00300097, 0.023290941, 0.002595433, 0.000262479, 0.002926346, 0.000125655, 0.001302624, 4.89e-05, 0.000122862, 0.009851791, 0.017621282, 0.000197561), G2 = c(0.00114337, 0.001285636, 0.000122848, 2.46e-06, 9.1e-05, 0.000288897, 0.012288087, 0.000122286, 0.002575368, 0.002158011, 0.022008677, 0.002017026, 0.000241754, 0.003340175, 0.00013424, 0.001517655, 4.78e-05, 0.000110353, 0.008293286, 0.018999466, 0.000191129)), .Names = c("C1", "C2", "T1", "T2", "G1",
"G2"), row.names = c("A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "PP", "TT", "EE", "FF", "AS"), class = "data.frame") The first two columns are the control, the second two columns are the treated 1, and the third two columns are the treated 2.
Now provide the response and nothing else.
| If you have a fitted expectation (as you show you can get), using that as a parameter in a Poisson pdf should work. That is, given $\hat{\lambda}_i = E(Y=0|X=\boldsymbol{x}_i)$ you can use that to calculate whichever probability you want: $$\hat{P}(Y=k|X=\boldsymbol{x}_i) = \frac{\exp(-\hat{\lambda}_i)\, \hat{\lambda}_i^k}{k!}$$ (In R you'd use dpois() with the parameter set to the fitted mean for that kind of calculation. The process should be about as easy in any number of other packages.) | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/279515', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/116318/']} | jdg_381433 |
stackexchange | llm_judgeable_groundtruth_similarity | 3259788 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
recently I encountered this problem:"Show that a rectangle can be partitioned into finitely many squares if and only if the ratio of its sides is rational." I have found the a solution which I need help to understand: Clearly, every rectangle with rational ratio of sides can be partitioned into finitely many equal squares. Thus, the interesting part of the problem is the "only if" one. Proof via Dirichlet's simultaneous rational approximation theorem Without loss of generality, we may assume that both sides of our rectange are greater than $1$ . We can choose a positive integer $q$ such that the product of every coordinate of every vertex of every square in our partition becomes almost an integer after multiplication by $q$ . So, we may assume that all these coordinates differ from integers by, at most, $\frac 15$ . Now let $a$ and $b$ be the horizontal and the vertical side of our rectangle, respectively, and let $s_i$ be the sides of the squares in the partition. Denote by $\tilde x$ the nearest integer to $x$ . Now, draw the horizontal lines at all half-integer heights (i.e., the heights $\pm\frac 1 2,\pm\frac 3 2,\dots$ ) and look at the total length $L$ of these lines within our rectangle. On one hand, we have $L=a\tilde b$ . (There are $\tilde b$ lines intersecting our rectangle and each of them intersects it by an interval of length $a$ .) On the other hand, looking at what happens in each square, we get $L=\sum_i s_i\tilde s_i$ . Thus $a\tilde b=\sum_is_i\tilde s_i$ . Similarly, drawing the vertical lines through half-integer points, we arrive at the identity $b\tilde a =\sum_i s_i\tilde s_i$ . Thus $a\tilde b=b\tilde a$ , i.e., $\frac a b=\frac {\tilde a}{\tilde b}\in\mathbb Q$ . 
I didn't understand why he had to use Dirichlet's theorem in order to draw lines on each side; in particular, I didn't understand why he had to do this and how everything in this part is connected: Without loss of generality, we may assume that both sides of our rectangle are greater than $1$ . We can choose a positive integer $q$ such that the product of every coordinate of every vertex of every square in our partition becomes almost an integer after multiplication by $q$ . So, we may assume that all these coordinates differ from integers by, at most, $\frac 15$ . If anyone can please explain it, it would be great, thanks. Reference for the problem and solution: https://artofproblemsolving.com/wiki/index.php/Partition_of_a_rectangle_into_squares_problem I'm asking about the answer he gave, not about any other solution...
Now provide the response and nothing else.
| The solver wants to draw horizontal lines through the rectangle, and wants the number of horizontal lines to be $\tilde b$ , which is the integer nearest the height of the rectangle. You can do this by drawing the lines at the half-integral heights $\pm1/2,\pm3/2,\pm5/2,\dots$ . But things get messy if one or more of those horizontal lines coincides with an edge of a square, as the solver wants to break each horizontal line up into segments, and attribute each segment to exactly one of the squares in the tiling. So, you have to make sure that there are no edges at a half-integral height. That's where Dirichlet's Theorem On Diophantine Approximation comes in; it ensures that given any finite collection of numbers, there is a positive integer $q$ such that you can multiply each of the numbers by $q$ and the resulting numbers won't be half-integers (will in fact differ from the nearest integer by at most one-fifth). Now the solver is also going to draw vertical lines, and these are also going to be at half-integers (so the number of vertical lines will be $\tilde a$ ), and these also have to miss the sides of the squares, so the finite collection of numbers to be multiplied by $q$ has to include all the horizontal coordinates, but that's still a finite collection of numbers, so Dirichlet applies. But why one-fifth, when one-third would be good enough to avoid all the edges of squares? Well, you want the number of line segments in any given square to be $\tilde s_i$ , so you want a square to have $s_i$ at least one-half if it has a line segment running through it. With coordinates as much as one-third away from the nearest integer, you could have a square of side one-third with a line segment through it; but with coordinates no more than one-fifth from an integer, a square must have side at least three-fifths to have a segment through it, and $3/5>1/2$ . I hope this helps. 
| {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3259788', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/152711/']} | jdg_381434 |
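The double-counting step ($a\tilde b=\sum_i s_i\tilde s_i$, and likewise for the vertical lines) can be sanity-checked numerically on a concrete tiling. Here is a small sketch with a tiling of my own choosing — a 3×2 rectangle cut into one 2×2 square and two unit squares — using integer coordinates so no Dirichlet perturbation is needed:

```python
# Each square is (x, y, s): lower-left corner and side length (my example tiling).
squares = [(0, 0, 2), (2, 0, 1), (2, 1, 1)]
a, b = 3, 2  # rectangle width and height

def half_integers_in(lo, hi):
    """How many points k + 1/2 (k integer) lie in (lo, hi); endpoints are integers here."""
    return hi - lo

# Total length of horizontal half-integer lines inside the rectangle, two ways:
total_rect = a * half_integers_in(0, b)  # a * b~ : b~ lines, each of length a
total_squares = sum(s * half_integers_in(y, y + s) for (x, y, s) in squares)
assert total_rect == total_squares == 6  # a * b~ = sum of s_i * s~_i

# Vertical half-integer lines, same idea: b * a~ = sum of s_i * s~_i.
assert b * half_integers_in(0, a) == sum(
    s * half_integers_in(x, x + s) for (x, y, s) in squares) == 6
```

Both sums agree with $\sum_i s_i\tilde s_i = 2\cdot2 + 1\cdot1 + 1\cdot1 = 6$, so $a\tilde b = b\tilde a$ here, as the proof predicts; the one-fifth bound in the general argument exists only to make this segment-counting valid for non-integer coordinates.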
stackexchange | llm_judgeable_groundtruth_similarity | 1480066 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently use the following Perl to check if a variable is defined and contains text. I have to check defined first to avoid an 'uninitialized value' warning: if (defined $name && length $name > 0) { # do something with $name} Is there a better (presumably more concise) way to write this?
Now provide the response and nothing else.
| You often see the check for definedness so you don't have to deal with the warning for using an undef value (and in Perl 5.10 it tells you the offending variable): Use of uninitialized value $name in ... So, to get around this warning, people come up with all sorts of code, and that code starts to look like an important part of the solution rather than the bubble gum and duct tape that it is. Sometimes, it's better to show what you are doing by explicitly turning off the warning that you are trying to avoid: { no warnings 'uninitialized'; if( length $name ) { ... } } In other cases, use some sort of null value instead of the data. With Perl 5.10's defined-or operator , you can give length an explicit empty string (defined, and give back zero length) instead of the variable that will trigger the warning: use 5.010; if( length( $name // '' ) ) { ... } In Perl 5.12, it's a bit easier because length on an undefined value also returns undefined . That might seem like a bit of silliness, but that pleases the mathematician I might have wanted to be. That doesn't issue a warning, which is the reason this question exists. use 5.012;use warnings;my $name;if( length $name ) { # no warning ... } | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1480066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/178133/']} | jdg_381435 |
stackexchange | llm_judgeable_groundtruth_similarity | 4527958 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using VirtualEnv on Windows XP. I'm wondering if I have my brain wrapped around it correctly: I ran virtualenv ENV and it created C:\WINDOWS\system32\ENV . I then changed my PATH variable to include C:\WINDOWS\system32\ENV\Scripts instead of C:\Python27\Scripts . Then, I checked out Django into C:\WINDOWS\system32\ENV\Lib\site-packages\django-trunk , updated my PYTHON_PATH variable to point the new Django directory, and continued to easy_install other things (which of course go into my new C:\WINDOWS\system32\ENV\Lib\site-packages directory). I understand why I should use VirtualEnv so I can run multiple versions of Django, and other libraries on the same machine, but does this mean that to switch between environments I have to basically change my PATH and PYTHON_PATH variable? So, I go from developing one Django project which uses Django 1.2 in an environment called ENV and then change my PATH and such so that I can use an environment called ENV2 which has the dev version of Django? Is that basically it, or is there some better way to automatically do all this (I could update my path in Python code, but that would require me to write machine-specific code in my application)? Also, how does this process compare to using VirtualEnv on Linux (I'm quite the beginner at Linux).
Now provide the response and nothing else.
| Normally virtualenv creates environments in the current directory. Unless you're intending to create virtual environments in C:\Windows\system32 for some reason, I would use a different directory for environments. You shouldn't need to mess with paths: use the activate script (in <env>\Scripts ) to ensure that the Python executable and path are environment-specific. Once you've done this, the command prompt changes to indicate the environment. You can then just invoke easy_install and whatever you install this way will be installed into this environment. Use deactivate to set everything back to how it was before activation. Example: c:\Temp>virtualenv myenvNew python executable in myenv\Scripts\python.exeInstalling setuptools..................done.c:\Temp>myenv\Scripts\activate(myenv) C:\Temp>deactivateC:\Temp> Notice how I didn't need to specify a path for deactivate - activate does that for you, so that when activated "Python" will run the Python in the virtualenv, not your system Python. (Try it - do an import sys; sys.prefix and it should print the root of your environment.) You can just activate a new environment to switch between environments/projects, but you'll need to specify the whole path for activate so it knows which environment to activate. You shouldn't ever need to mess with PATH or PYTHONPATH explicitly. If you use Windows Powershell then you can take advantage of a wrapper . On Linux, the virtualenvwrapper (the link points to a port of this to Powershell) makes life with virtualenv even easier. Update: Not incorrect, exactly, but perhaps not quite in the spirit of virtualenv . You could take a different tack: for example, if you install Django and anything else you need for your site in your virtualenv, then you could work in your project directory (where you're developing your site) with the virtualenv activated. 
Because it was activated, your Python would find Django and anything else you'd easy_installed into the virtual environment: and because you're working in your project directory, your project files would be visible to Python, too. Further update: You should be able to use pip , distribute instead of setuptools , and just plain python setup.py install with virtualenv . Just ensure you've activated an environment before installing something into it. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4527958', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/128463/']} | jdg_381436 |
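Since the question also asks how this looks on Linux: these days the stdlib venv module covers the same activate/deactivate workflow there. A minimal sketch (the environment name is illustrative, and --without-pip just keeps the demo fast):

```shell
python3 -m venv --without-pip demo-env     # create the environment
. demo-env/bin/activate                    # activate: prompt typically gains "(demo-env)"
python -c 'import sys; print(sys.prefix)'  # now points inside demo-env
deactivate                                 # back to the system Python
```

The sys.prefix check is the same sanity test described in the answer: while activated, "python" resolves to the environment's interpreter, not the system one.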
stackexchange | llm_judgeable_groundtruth_similarity | 80507 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is a Gaussian Discriminant Analysis (GDA)? What materials should one read to understand how a GDA works and where it comes from? Try to explain this for someone at a high-school level.
Now provide the response and nothing else.
| GDA is a method for data classification, commonly used when the data in each class can be approximated with a Normal distribution. As a first step, you need a training set, i.e. a bunch of data that is already classified. These data are used to train your classifier and to obtain a discriminant function that tells you which class a data point most likely belongs to. Once you have your training set, you compute the mean $\mu$ and the variance $\sigma^2$ of each class. These two variables, as you know, allow you to describe a Normal distribution. Once you have computed the Normal distribution for each class, to classify a data point you compute, for each class, the probability that the point belongs to it. The class with the highest probability is chosen as the predicted class. More information about discriminant functions for the normal density can be found in textbooks such as Pattern Classification (Duda, Hart, Stork) or Pattern Recognition and Machine Learning (Bishop). A tutorial to GDA can also be found here Part1 and Part2 | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/80507', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/36569/']} | jdg_381437 |
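The recipe in the answer above — fit a mean and variance per class, then pick the class whose Gaussian gives the point the highest probability — can be sketched in a few lines for the one-dimensional case. This is an illustration only, not from the original answer; the toy exam-score data and the "pass"/"fail" labels are made up.

```python
import math

def fit_gaussian(values):
    """Return (mean, variance) of a 1-D sample (maximum-likelihood estimates)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var

def log_density(x, mean, var):
    """Log of the Normal pdf N(mean, var) evaluated at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def train(training_set):
    """training_set: {label: [values]} -> {label: (prior, mean, var)}."""
    total = sum(len(vals) for vals in training_set.values())
    return {label: (len(vals) / total, *fit_gaussian(vals))
            for label, vals in training_set.items()}

def classify(x, model):
    """Pick the class maximizing log prior + log class-conditional density."""
    return max(model, key=lambda label:
               math.log(model[label][0]) + log_density(x, *model[label][1:]))

# Hypothetical toy data: exam scores for two groups of students.
model = train({"pass": [70, 75, 80, 85, 90], "fail": [30, 35, 40, 45, 50]})
print(classify(82, model))  # → pass
print(classify(38, model))  # → fail
```

A score near the "pass" cluster is assigned to the "pass" Gaussian and vice versa, which is exactly the highest-probability rule the answer describes.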
stackexchange | llm_judgeable_groundtruth_similarity | 360040 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When given a static set of objects (static in the sense that once loaded it seldom if ever changes) into which repeated concurrent lookups are needed with optimal performance, which is better, a HashMap or an array with a binary search using some custom comparator? Is the answer a function of object or struct type? Hash and/or Equal function performance? Hash uniqueness? List size? Hashset size/set size? The size of the set that I'm looking at can be anywhere from 500k to 10m - in case that information is useful. While I'm looking for a C# answer, I think the true mathematical answer lies not in the language, so I'm not including that tag. However, if there are C# specific things to be aware of, that information is desired.
Now provide the response and nothing else.
| For very small collections the difference is going to be negligible. At the low end of your range (500k items) you will start to see a difference if you're doing lots of lookups. A binary search is going to be O(log n), whereas a hash lookup will be O(1), amortized . That's not the same as truly constant, but you would still have to have a pretty terrible hash function to get worse performance than a binary search. (When I say "terrible hash", I mean something like: hashCode(){ return 0;} Yeah, it's blazing fast itself, but causes your hash map to become a linked list.) ialiashkevich wrote some C# code using an array and a Dictionary to compare the two methods, but it used Long values for keys. I wanted to test something that would actually execute a hash function during the lookup, so I modified that code. I changed it to use String values, and I refactored the populate and lookup sections into their own methods so it's easier to see in a profiler. I also left in the code that used Long values, just as a point of comparison. Finally, I got rid of the custom binary search function and used the one in the Array class. 
Here's that code: class Program{ private const long capacity = 10_000_000; private static void Main(string[] args) { testLongValues(); Console.WriteLine(); testStringValues(); Console.ReadLine(); } private static void testStringValues() { Dictionary<String, String> dict = new Dictionary<String, String>(); String[] arr = new String[capacity]; Stopwatch stopwatch = new Stopwatch(); Console.WriteLine("" + capacity + " String values..."); stopwatch.Start(); populateStringArray(arr); stopwatch.Stop(); Console.WriteLine("Populate String Array: " + stopwatch.ElapsedMilliseconds); stopwatch.Reset(); stopwatch.Start(); populateStringDictionary(dict, arr); stopwatch.Stop(); Console.WriteLine("Populate String Dictionary: " + stopwatch.ElapsedMilliseconds); stopwatch.Reset(); stopwatch.Start(); Array.Sort(arr); stopwatch.Stop(); Console.WriteLine("Sort String Array: " + stopwatch.ElapsedMilliseconds); stopwatch.Reset(); stopwatch.Start(); searchStringDictionary(dict, arr); stopwatch.Stop(); Console.WriteLine("Search String Dictionary: " + stopwatch.ElapsedMilliseconds); stopwatch.Reset(); stopwatch.Start(); searchStringArray(arr); stopwatch.Stop(); Console.WriteLine("Search String Array: " + stopwatch.ElapsedMilliseconds); } /* Populate an array with random values. */ private static void populateStringArray(String[] arr) { for (long i = 0; i < capacity; i++) { arr[i] = generateRandomString(20) + i; // concatenate i to guarantee uniqueness } } /* Populate a dictionary with values from an array. */ private static void populateStringDictionary(Dictionary<String, String> dict, String[] arr) { for (long i = 0; i < capacity; i++) { dict.Add(arr[i], arr[i]); } } /* Search a Dictionary for each value in an array. */ private static void searchStringDictionary(Dictionary<String, String> dict, String[] arr) { for (long i = 0; i < capacity; i++) { String value = dict[arr[i]]; } } /* Do a binary search for each value in an array. 
*/ private static void searchStringArray(String[] arr) { for (long i = 0; i < capacity; i++) { int index = Array.BinarySearch(arr, arr[i]); } } private static void testLongValues() { Dictionary<long, long> dict = new Dictionary<long, long>(Int16.MaxValue); long[] arr = new long[capacity]; Stopwatch stopwatch = new Stopwatch(); Console.WriteLine("" + capacity + " Long values..."); stopwatch.Start(); populateLongDictionary(dict); stopwatch.Stop(); Console.WriteLine("Populate Long Dictionary: " + stopwatch.ElapsedMilliseconds); stopwatch.Reset(); stopwatch.Start(); populateLongArray(arr); stopwatch.Stop(); Console.WriteLine("Populate Long Array: " + stopwatch.ElapsedMilliseconds); stopwatch.Reset(); stopwatch.Start(); searchLongDictionary(dict); stopwatch.Stop(); Console.WriteLine("Search Long Dictionary: " + stopwatch.ElapsedMilliseconds); stopwatch.Reset(); stopwatch.Start(); searchLongArray(arr); stopwatch.Stop(); Console.WriteLine("Search Long Array: " + stopwatch.ElapsedMilliseconds); } /* Populate an array with long values. */ private static void populateLongArray(long[] arr) { for (long i = 0; i < capacity; i++) { arr[i] = i; } } /* Populate a dictionary with long key/value pairs. */ private static void populateLongDictionary(Dictionary<long, long> dict) { for (long i = 0; i < capacity; i++) { dict.Add(i, i); } } /* Search a Dictionary for each value in a range. */ private static void searchLongDictionary(Dictionary<long, long> dict) { for (long i = 0; i < capacity; i++) { long value = dict[i]; } } /* Do a binary search for each value in an array. */ private static void searchLongArray(long[] arr) { for (long i = 0; i < capacity; i++) { int index = Array.BinarySearch(arr, arr[i]); } } /** * Generate a random string of a given length. 
* Implementation from https://stackoverflow.com/a/1344258/1288 */ private static String generateRandomString(int length) { var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; var stringChars = new char[length]; var random = new Random(); for (int i = 0; i < stringChars.Length; i++) { stringChars[i] = chars[random.Next(chars.Length)]; } return new String(stringChars); }} Here are the results with several different sizes of collections. (Times are in milliseconds.) 500000 Long values... Populate Long Dictionary: 26 Populate Long Array: 2 Search Long Dictionary: 9 Search Long Array: 80 500000 String values... Populate String Array: 1237 Populate String Dictionary: 46 Sort String Array: 1755 Search String Dictionary: 27 Search String Array: 1569 1000000 Long values... Populate Long Dictionary: 58 Populate Long Array: 5 Search Long Dictionary: 23 Search Long Array: 136 1000000 String values... Populate String Array: 2070 Populate String Dictionary: 121 Sort String Array: 3579 Search String Dictionary: 58 Search String Array: 3267 3000000 Long values... Populate Long Dictionary: 207 Populate Long Array: 14 Search Long Dictionary: 75 Search Long Array: 435 3000000 String values... Populate String Array: 5553 Populate String Dictionary: 449 Sort String Array: 11695 Search String Dictionary: 194 Search String Array: 10594 10000000 Long values... Populate Long Dictionary: 521 Populate Long Array: 47 Search Long Dictionary: 202 Search Long Array: 1181 10000000 String values... Populate String Array: 18119 Populate String Dictionary: 1088 Sort String Array: 28174 Search String Dictionary: 747 Search String Array: 26503 And for comparison, here's the profiler output for the last run of the program (10 million records and lookups). I highlighted the relevant functions. They pretty closely agree with the Stopwatch timing metrics above. 
You can see that the Dictionary lookups are much faster than binary search, and (as expected) the difference is more pronounced the larger the collection. So, if you have a reasonable hashing function (fairly quick with few collisions), a hash lookup should beat binary search for collections in this range. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/360040', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18941/']} | jdg_381438 |
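The same O(1)-amortized vs. O(log n) comparison in the C# benchmark above can be sketched in Python with a dict and the `bisect` module. This is an illustration, not the answer's benchmark: the collection sizes are scaled down, and absolute timings will vary by machine — only the relative gap is the point.

```python
import bisect
import random
import time

def time_lookups(keys, queries):
    """Compare hash lookups (dict) with binary search (bisect) over the same keys."""
    table = {k: k for k in keys}   # hash-based: O(1) amortized per lookup
    sorted_keys = sorted(keys)     # array-based: O(log n) per lookup

    t0 = time.perf_counter()
    for q in queries:
        _ = table[q]
    dict_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    for q in queries:
        i = bisect.bisect_left(sorted_keys, q)
        assert sorted_keys[i] == q  # every query must be found
    bisect_time = time.perf_counter() - t0
    return dict_time, bisect_time

keys = [f"key-{i}" for i in range(200_000)]
queries = random.sample(keys, 50_000)
d, b = time_lookups(keys, queries)
print(f"dict: {d:.3f}s  binary search: {b:.3f}s")
```

As in the C# numbers, the gap widens with collection size because each binary-search probe pays for several string comparisons while the dict pays one hash plus (usually) one equality check.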
stackexchange | llm_judgeable_groundtruth_similarity | 12114243 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a page with an iframe that has an html document within it. I need to access the iframe's id from that child document, and am having no luck getting at it with jQuery. The essential structure is this within my page:

<div id="ContainingDiv">
  <iframe id="ContainingiFrame">
    <html>
      <head></head>
      <body>
      </body>
    </html>
  </iframe>
</div>

In my javascript method I can get to anything in the <body> tags using jQuery selectors, but cannot retrieve the iframe element in order to resize it. I've tried a few things but am admittedly not an expert in jQuery DOM navigation, particularly where iframes are involved (the iframe is in the same domain as the containing page.) Is there a way I can do this?
Now provide the response and nothing else.
| No need for jQuery at all. To get the body object of your parent, you can do this:

var parentBody = window.parent.document.body

If it's on the same domain as your iframe that you are running the code from, once you have that, you can use normal javascript on that object:

window.parent.document.getElementById("ContainingiFrame").style.height = "400px";

or with jQuery:

$("#ContainingiFrame", parentBody).height("400");

Here's an article on resizing an iframe from within the iframe with sample code: http://www.pither.com/articles/2010/11/12/resize-iframe-from-within And, a related question/answer on resizing an iframe based on its own content: Resizing an iframe based on content | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12114243', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/172617/']} | jdg_381439 |
stackexchange | llm_judgeable_groundtruth_similarity | 22922584 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to overwrite a numpy array that's a small part of a pretty complicated h5 file. I'm extracting an array, changing some values, then want to re-insert the array into the h5 file. I have no problem extracting the array that's nested.

f1 = h5py.File(file_name, 'r')
X1 = f1['meas/frame1/data'].value
f1.close()

My attempted code looks something like this with no success:

f1 = h5py.File(file_name, 'r+')
dset = f1.create_dataset('meas/frame1/data', data=X1)
f1.close()

As a sanity check, I executed this in Matlab using the following code, and it worked with no problems.

h5write(file1, '/meas/frame1/data', X1);

Does anyone have any suggestions on how to do this successfully?
Now provide the response and nothing else.
| You want to assign values, not create a dataset:

f1 = h5py.File(file_name, 'r+')   # open the file
data = f1['meas/frame1/data']     # load the data
data[...] = X1                    # assign new values to data
f1.close()                        # close the file

To confirm the changes were properly made and saved:

f1 = h5py.File(file_name, 'r')
np.allclose(f1['meas/frame1/data'].value, X1)
# True

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22922584', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3508433/']} | jdg_381440 |