source
stringclasses 1
value | task_type
stringclasses 1
value | in_source_id
stringlengths 1
8
| prompt
stringlengths 209
40.4k
| gold_standard_solution
stringlengths 0
56.7k
| verification_info
stringclasses 1
value | metadata
stringlengths 138
225
| problem_id
stringlengths 9
10
|
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6070
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I know (at least I think I know) that some of the main motivating problems in the development of etale cohomology were the Weil conjectures. I'd like to know what other problems one can solve using the machinery of etale cohomology. I know a little bit about how etale cohomology groups appear in algebraic number theory but I'd like to know about ways that these things show up in other mathematical subjects as well. Is there anything that an algebraic topologist should really know about etale cohomology? What about a differential geometer?
Now provide the response and nothing else.
|
$\DeclareMathOperator{\gal}{Gal}$Here's a comment which one can make to differential geometers which at least explains what etale cohomology "does". Given an algebraic variety over the reals, say a smooth one, its complex points are a complex manifold but with a little extra structure: the complex points admit an automorphism coming from complex conjugation. Hence the singular cohomology groups inherit an induced automorphism, which is extra information that is sometimes worth carrying around. In short: the cohomology of an algebraic variety defined over the reals inherits an action of $\gal(\mathbb{C}/\mathbb{R})$. The great thing about etale cohomology is that a number theorist can now do the same trick with algebraic varieties defined over $\mathbb{Q}$. The etale cohomology groups of this variety will have the same dimension as the singular cohomology groups (and are indeed isomorphic to them via a comparison theorem, once the coefficient ring is big enough) but the advantage is that that they inherit a structure of the amazingly rich and complicated group $\gal(\bar{\mathbb{Q}}/\mathbb{Q})$. I've often found that this comment sees off differential geometers, with the thought "well at least I sort-of know the point of it now". A differential geometer probably doesn't want to study $\gal(\bar{\mathbb{Q}}/\mathbb{Q})$ though. If I put my Langlands-philosophy hat on though, I can see a huge motivation for etale cohomology: Langlands says that automorphic forms should give rise to representations of Galois groups, and etale cohomology is a very powerful machine for constructing representations of Galois groups, so that's why I might be interested in it even if I'm not an algebraic geometer. Finally, I guess a much simpler motivating good reason for etale cohomology is that geometry is definitely facilitated when you have cohomology theories around. That much is clear. But if you're doing algebraic geometry over a field that isn't $\mathbb C$ or $\mathbb R$ then classical cohomology theories aren't going to cut it, and the Zariski topology is so awful that you can't use it alone to do geometry---you're going to need some help. Hence etale cohomology, which gives the right answers: e.g. a smooth projective curve over any field has a genus, and etale cohomology is a theory which assigns to it an $H^1$ of dimension $2g$ (<pedant> at least if you use $\ell$-adic cohomology for $\ell$ not zero in the field <\pedant>).
|
{}
|
{'log_upvote_score': 8, 'links': ['https://mathoverflow.net/questions/6070', 'https://mathoverflow.net', 'https://mathoverflow.net/users/493/']}
|
jdg_76241
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6222483
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am dealing with what amounts to squares on a grid. The grid if of a more or less infinite size. Grid squares can have one of two states, either claimed or unclaimed. The coordinates of the squares are stored as squares in an R tree for fast access. Before a square is unclaimed however, I want to iterate through the nearby claimed squares to ensure that by unclaiming this square, I am not splitting the claimed squares into 2 distinct regions. It's important to note, that squares are already currently claimable only if they are touching an already claimed square. So if I have something like +++++++++ And if a player wants to remove the square to create ++++ ++++ Then I want to be able to detect and reject this action. Of course this needs to be able to handle the full 2 dimensional space as well ( I tried, but found it difficult to get this across in ascii art via the formatting available to me here). This must also be fairly efficient. Any pointers as to the name of an algorithm to look up or a description on how to do this would be wonderful <3
Now provide the response and nothing else.
|
The following does exactly what i asked for. Though, it does not synthesize the implementation of the factory, it is good enough as the factory has access to the injection context so that can use other beans (injectable artifacts) during construction. It uses java based @Configuration instead of XML, but it will work with XML too. The factory interface: public interface Robot {}// Implementation of this is to be injected by the IoC in the Robot instancespublic interface Brain { String think();}public class RobotImpl implements Robot { private final String name_; private final Brain brain_; @Inject public RobotImpl(String name, Brain brain) { name_ = name; brain_ = brain; } public String toString() { return "RobotImpl [name_=" + name_ + "] thinks about " + brain_.think(); }}public class RobotBrain implements Brain { public String think() { return "an idea"; }} // The assisted factory typepublic interface RobotFactory { Robot newRobot(String name);} // this is the Spring configuration showing how to do the assisted injection @Configurationclass RobotConfig { @Bean @Scope(SCOPE_PROTOTYPE) public RobotFactory robotFactory() { return new RobotFactory() { @Override public Robot newRobot(String name) { return new RobotImpl(name, r2dxBrain()); } }; } @Bean @Scope(SCOPE_PROTOTYPE) public Brain r2dxBrain() { return new RobotBrain(); }} The test code: public class RobotTest { @Test public void t1() throws Exception { ApplicationContext ctx = new AnnotationConfigApplicationContext(RobotConfig.class); RobotFactory rf = ctx.getBean(RobotFactory.class); assertThat(rf.newRobot("R2D2").toString(), equalTo("RobotImpl [name_=R2D2] thins about an idea")); }} This achieves exactly what Guice does. The tricky difference is the Scope . Spring's default scope is Singleton and Guice's is not (it is prototype).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6222483', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/776648/']}
|
jdg_76242
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
77148
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code which calculates a list of points that a line would take after bouncing off various ellipses. maxTime = 50; centers = {{0, 0}, {3, 0}};radii = {{1, 1}, {2, 1}};angles = {0, Pi/3};ClearAll[x, y, circleIntersection, nextDirection, nextPt, whichCirc, \tangentVector]circleIntersection[direction_, pt_List, circs_List] := {x, y} //. {ToRules[ Quiet@Reduce@ RegionMember[ RegionIntersection[RegionUnion[circs], Line[{pt, pt + {Cos[direction], Sin[direction]}*{maxTime, maxTime}}]], {x, y}]]};nextPt[direction_, pt_List] := With[{val = First@PadLeft[ MinimalBy[EuclideanDistance[pt, #] &]@ Function[x, Select[x, Abs[EuclideanDistance[pt, #]] > .05 &]]@ N@circleIntersection[direction, pt, MapThread[ TransformedRegion[Circle[#1, #2], RotationTransform[#3, #1]] &, {centers, radii, angles}]], 1, pt]}, If[ListQ[val], val, pt + {Cos[direction], Sin[direction]}*{maxTime, maxTime}]];whichCirc[direction_, pt_List] := First@PadLeft[ Flatten@Position[ RegionMember[#, nextPt[direction, pt]] & /@ MapThread[ TransformedRegion[Circle[#1, #2], RotationTransform[#3, #1]] &, {centers, radii, angles}], _?TrueQ, 1, 1], 1];tangentVector[pt_List, pos_Integer] := Module[{a, b, c, d, e, m, n, s, r, dx, dy}, m = Cos[angles[[pos]]]; n = Sin[angles[[pos]]]; s = radii[[pos]][[2]]; r = radii[[pos]][[1]]; a = s^2*m^2 + r^2*n^2; b = s^2*n^2 + r^2*m^2; c = 2 (m*n*r^2*centers[[pos]][[2]] - centers[[pos]][[1]]*n^2*r^2 - centers[[pos]][[1]]*m^2*s^2 - centers[[pos]][[2]]*m*n*s^2); d = 2 (centers[[pos]][[1]]*m*n*r^2 - centers[[pos]][[2]]*m^2*r^2 - centers[[pos]][[1]]*m*n*s^2 - centers[[pos]][[2]]*n^2*s^2); e = -2 (m*n*r^2 - m*n*s^2); dx = 2*a*pt[[1]] + c + e*pt[[2]]; dy = 2*b*pt[[2]] + d + e*pt[[1]]; If[pos > 0, {-dy, dx}]]; nextDirection[direction_, pt_List, ptnow_List] := With[{pos = whichCirc[direction, ptnow]}, If[pos > 0 && ptnow != pt, ArcTan[Sequence @@ (ReflectionTransform[tangentVector[pt, pos], pt][ptnow] - pt)], direction]];f1[{dir_, pt_, ___}] := {dir, nextPt[dir, pt], pt};g1[{dir_, newpt_, pt_}] := {nextDirection[dir, newpt, pt], newpt, pt};ellipsePtList[dir_, startPt_, mt_] := Module[{timeUpYet, len = 0}, timeUpYet[pt1_, pt2_] := Module[{dist = EuclideanDistance[pt1, pt2]}, len += dist; Return[len < mt]]; Flatten[NestWhileList[g1[f1[#]] &, {dir, startPt}, timeUpYet[#1[[2]], #2[[2]]] &, 2][[All, 2 ;; 2]], 1]] It allows me to do the following: With[{dir = 0, pt = {0.6, 0.3}}, Graphics[{Red, Thick, Line[ellipsePtList[dir, pt, maxTime]], Blue, MapThread[ GeometricTransformation[Circle[#1, #2], RotationTransform[#3, #1]] &, {centers, radii, angles}]}]] Which produces a very nice picture, but takes about 5 seconds on my computer to run. So I'm looking for ways to speed this code up. After some preliminary testing, it seems the whichCirc function is the bottleneck (but I will take optimizations for any part of the code). Any and all suggestions are most welcome! PS - sorry for the code formatting above, it's hard to paste in a lot of code and make it nice! EDIT : there was some confusion over exactly what was being asked. I need this to work in the generality that there could be several ellipses, positioned in various ways. For example, changing the first few lines above to: maxTime = 100; centers = {{0, 0}, {3, 0}, {0, 0}};radii = {{1, 1}, {2, 1}, {6, 6}};angles = {0, Pi/3, 0}; and then executing the same code at the bottom (with different start point): With[{dir = 0, pt = {1.6, 0.5}}, Graphics[{Red, Thick, Line[ellipsePtList[dir, pt, maxTime]], Blue, MapThread[ GeometricTransformation[Circle[#1, #2], RotationTransform[#3, #1]] &, {centers, radii, angles}]}]] gives the final picture:
Now provide the response and nothing else.
|
Update The original approach for one ellipse (below) may be adapted for several: eq = With[{p = {x, y} - {x0, y0}}, (RotationMatrix[-t0].p).{{1/a^2, 0}, {0, 1/b^2}}.(RotationMatrix[-t0].p)] - 1;sub[pt_] := Thread[{x, y} -> pt];dir[t0_] := {Cos[t0], Sin[t0]};ClearAll[next, cuts];Block[{a, b, t0, x0, y0, x1, y1, α, x, y, t, ellipses}, next = With[{sol = Simplify[ t /. Solve[eq == 0 /. sub[{x1, y1} + t dir[α]], t], TimeConstraint -> 0.1]}, With[{x2 = x /. sub[{x1, y1} + t dir[α]], y2 = y /. sub[{x1, y1} + t dir[α]]}, With[{θ = ArcTan[D[eq, x], D[eq, y]] /. sub[{x2, y2}]}, (* definitions using above algebra *) cuts[sect_][ell_] := sect /. Thread[{a, b, t0, x0, y0} -> ell]; Function @@ Hold[{ellipses, x1, y1, α}, With[{sects = cuts[sol] /@ ellipses}, t = First@Sort@Select[Chop@Flatten[sects], Positive]; {x2, y2, Mod[2 θ - α + Pi, 2 Pi]} /. Thread[{a, b, t0, x0, y0} -> ellipses~Part~First@FirstPosition[sects, t] ] ] ]]] ]; OP's new example: centers = {{0, 0}, {3, 0}, {0, 0}};radii = {{1, 1}, {2, 1}, {6, 6}};angles = {0, Pi/3, 0};ells = Flatten[{radii, List /@ angles, centers}, {{2}, {1, 3}}];With[{dir = 0, pt = {1.6, 0.5}}, ptsdir = NestList[ next[ells, Sequence @@ #] &, {Sequence @@ pt, dir}, 19]; pts = ptsdir[[All, 1 ;; 2]]; Graphics[{Red, Thick, Line[pts], Blue, MapThread[ GeometricTransformation[Circle[#1, #2], RotationTransform[#3, #1]] &, {centers, radii, angles}]}] ] // AbsoluteTiming With 100 points: Original answer I used a cartesian equation of a rotated ellipse and the angle of the normal to compute the reflection. Replace Function by Compile if you want more speed, but the figure below is computed in a little over 0.05 sec, most of which time was spent computing the plot of the ellipse. Given the ellipse x^2/a^2 + y^2/b^2 == 1 rotated by an angle t0 , an initial point {x0, y0} , and a direction α , the function next returns the next intersection and direction in a list {x1, y1, α1} . eq = (RotationMatrix[-t0].{x, y}).{{1/a^2, 0}, {0, 1/b^2}}.(RotationMatrix[-t0].{x, y}) - 1;sub[pt_] := Thread[{x, y} -> pt];dir[t0_] := {Cos[t0], Sin[t0]};Block[{a, b, t0, x0, y0, α, x, y, t}, next = (* preliminary algebra *) With[{sol = t /. Solve[eq == 0 /. sub[{x0, y0} + t dir[α]], t ] // Simplify}, With[{x1 = x /. sub[{x0, y0} + t dir[α]], y1 = y /. sub[{x0, y0} + t dir[α]]}, With[{θ = ArcTan[D[eq, x], D[eq, y]] /. sub[{x1, y1}]}, (* angle of normal *) (* function definition *) Function @@ Hold[ {a, b, t0, x0, y0, α}, t = First@Select[sol, # > 1.*^-8 &]; (* tolerance could be ~1.*^-14 *) {x1, y1, Mod[2 θ - α + Pi, 2 Pi]} (* new x, y, reflected angle *) ] ]]] ]; Here ptsdir contains a list of {x, y, theta} and pts contains a list of the points. Block[{a = 4, b = 2, t0 = Pi/6}, ptsdir = NestList[next[a, b, t0, Sequence @@ #] &, {2., 2., 0.}, 170]; pts = ptsdir[[All, 1 ;; 2]]; ContourPlot[eq == 0, {x, -5, 5}, {y, -5, 5}, Epilog -> {Red, Thickness[0.001], Line[pts]}] ]
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/77148', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/191/']}
|
jdg_76243
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21546892
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have Windows 8.1 64-bit OS running on 64-bit architecture. I am installing a fresh copy of Oracle 12C, means I haven't installed any version before on my system. During the installation, I encountered the following error message; Cause - Failed to access the temporary location. Action - Ensure that the current user has required permissions to access the temporary location. Additional Information: - Framework setup check failed on all the nodes - Cause: Cause Of Problem Not Available - Action: User Action Not Available Summary of the failed nodes al-naseeha - Version of exectask could not be retrieved from node "al-naseeha" - Cause: Cause Of Problem Not Available - Action: User Action Not Available The error code is [INS-30131]. I've the full privileges of an administrator-ship. I also tried to resolve this problem via the following command; net use \\localhost\c$ But in no vain. Can you please help me sort out this issue.
Now provide the response and nothing else.
|
The error is caused due to administrative shares are being disabled. If they cannot be enabled then perform the following workaround: 6.2.23 INS-30131 Error When Installing Oracle Database or Oracle Client If the administrative shares are not enabled when performing a single instance Oracle Database or Oracle Client installation for 12c Release 1 (12.1) on Microsoft Windows 7, Microsoft Windows 8, and Microsoft Windows 10, then the installation fails with an INS-30131 error. Workaround: Execute the net share command to ensure that the administrative shares are enabled. If they are disabled, then enable them by following the instructions in the Microsoft Windows documentation. Alternatively, perform the client or server installation by specifying the following options: For a client installation: -ignorePrereq -J"-Doracle.install.client.validate.clientSupportedOSCheck=false" For a server installation: -ignorePrereq -J"-Doracle.install.db.validate.supportedOSCheck=false" This issue is tracked with Oracle bug 21452473. Source: Oracle Database Release Notes (Section 6.2.23)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21546892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3226056/']}
|
jdg_76244
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
497229
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Does any Common Lisp (builtin) function return more than 2 values? I know many that return 2, but I can't think of one that returns 3. (I saw a comment here about returning more than 2 values, and tried to think of a case where CL did this, but can't.)
Now provide the response and nothing else.
|
Yes, such functions exist. Here is the complete list of functions in the COMMON-LISP package that return exactly three values, as declared in SBCL source code: COMPILE required: 3, optional: 0, rest?: NILINTEGER-DECODE-FLOAT required: 3, optional: 0, rest?: NILCOMPILE-FILE required: 3, optional: 0, rest?: NILGET-PROPERTIES required: 3, optional: 0, rest?: NILFUNCTION-LAMBDA-EXPRESSION required: 3, optional: 0, rest?: NILDECODE-FLOAT required: 3, optional: 0, rest?: NILRENAME-FILE required: 3, optional: 0, rest?: NIL In addition, the following functions return a constant number of values greater than three: DECODE-UNIVERSAL-TIME required: 9, optional: 0, rest?: NILGET-DECODED-TIME required: 9, optional: 0, rest?: NIL These functions return a variable number of values, hence possibly more than three: NO-APPLICABLE-METHOD required: 0, optional: 0, rest?: TNO-NEXT-METHOD required: 0, optional: 0, rest?: TVALUES required: 0, optional: 0, rest?: T(I've omitted some functions from this list where SBCL does not declarea values type explicitly. get-setf-expansion is one of them.) Explanations of the columns: required is minimum number of return values for these functions, optional a fixed number of return values which SBCL thinks might or might not be returned, rest? indicates that a variable number of values is expected. (Only macroexpand and macroexpand-1 actually use &optional, don't ask me why.) And just for fun, here is the source code I used to come up with these tables: (do-external-symbols (sym :common-lisp) (when (fboundp sym) (multiple-value-bind (required optional rest) (let ((fun-type (sb-int:info :function :type sym))) (etypecase fun-type (sb-kernel:fun-type (let ((returns (sb-kernel:fun-type-returns fun-type))) (etypecase returns (sb-kernel:values-type (values (length (sb-kernel:values-type-required returns)) (length (sb-kernel:values-type-optional returns)) (sb-kernel:values-type-rest returns))) (sb-kernel:named-type (if (sb-kernel:named-type-name returns) (values 1 0 t) (values 0 0 nil)))))) (t (values 0 0 t)))) (format t "~A~40Trequired: ~D, optional: ~D, rest?: ~A~%" sym required optional rest))))
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/497229', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/53607/']}
|
jdg_76245
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30081372
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have no experience in objective C, so I'm having trouble with some of the notation. In my "AppDelegate.swift" file, there is a "@UIMainApplication" at the top. What does this mean? Also, if possible, could you please give me the "equivalent" statement (if it exists) in C++? Thanks.
Now provide the response and nothing else.
|
Well, you picked a rather complicated one. The @ merely means that this is an attribute - a special marker or signal to the compiler, as Apple explains here . But @UIApplicationMain is a particularly profound attribute! It substitutes for the entire UIApplicationMain implementation that lies at the heart of a C / Objective-C application, in the main.m file (as I explain here ): int main(int argc, char *argv[]){ @autoreleasepool { return UIApplicationMain(argc, argv, nil, NSStringFromClass([AppDelegate class])); }} That is the main entry point of the app, implementing the entire launch-and-run code that is the running application. You can do something like that in Swift — a main.swift file with the Swift equivalent of that code — but Swift saves you the trouble by letting you designate your app delegate class with the @UIApplicationMain attribute. If you start a project as an Objective-C or Objective-C++ project, the template will give you a main file containing the main implementation, so there's no need to do anything special in this regard.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30081372', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4860788/']}
|
jdg_76246
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2179520
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been simply writing 2 cookies, 1 containing the user ID, and the 2nd containing 1/2 the SH1 hash of the password (salted). The way it works is self-evident. I realized that I wasnt doing this in the most secure way. Whats a better way of doing this? Preferably using a single authentication cookie. Also, is there a point to using "hard to calculate hashes"? By that I mean, using bcrypt, or hashing each item 10,000 times with whirlpool, to make it a (relatively) slow hash function (200 ms vs less than 1 ms just plain SHA1)? I mean if someone breaches your DB and gets the hashes.... what is there left to protect, since all your data is in the same DB (Unless you have some sort of a de-centralized setup, which I dont).
Now provide the response and nothing else.
|
use Sessions . Store the session id in the cookie, and store the state of the user on the server side (loggedIn, userId, IP). To clarify what you need to store in the session array: loggedIn: A boolean variable about whether the user is logged in or not. You reuse the same cookie for multiple sessions, so you remember the users username next time they come to your site, etc. userId: The uniqe id of the user in the database. Use this to get more information on the user, like username, email etc. This too can be kept in the session array after the user logs out. IP: To prevent someone from stealing the session id and using it, you store the IP of the user as well. This is optional, as sometimes you want to allow the user to roam (eg, stackoverflow allows me to move about with my laptop without logging me out when the IP changes). lastPing: The timestamp the user was last seen. This can be used instead of the cookie expiration date. If you also store the lifetime of the session, then you can log the user out due to inactivity. This means that the session id cookie can be stored on the users computer for a very long time. When the user logs out or is logged out due to inactivity, you simply set loggedIn to false. When the user logs in with the right username and password you set loggedIn to true and update the other fields (userId, IP, lifetime). When the user loads a page, you check the lastPing against the current time and the lifetime , and either update lastPing or logout the user. The session data can either be stored in the filesystem or in a database. If stored in a database, then userId is either a foreign key to the user record, or all the data can be put in the user record. Hashing rehashing a value several times is not a good idea, because you reduce the security . Instead use salt, combining a static salt (name of the page for example) and the username of the user, together with the password. A hash that takes a long time isn't better than a fast hash, a hash that results in a large digest is better than a hash that results in a short digest (due to brute force). Using SHA1 should be good enough for a normal site (IE, not a bank or a secret military organization).
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2179520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_76247
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
197546
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that mass warps spacetime and gravity and acceleration are equivalent so does acceleration warp spacetime too?
Now provide the response and nothing else.
|
Sort of. You are correct in saying (with some caveats) that gravity and acceleration are equivalent. According to general relativity, gravity is manifested as curvature of spacetime. As we know from special relativity and Einstein's famous equation $E = mc^2$, energy and mass are equivalent. As a result, any type of energy contributes to gravity (i.e. to the curvature of spacetime). This relationship can be seen directly from Einstein's Field Equations of General Relativity: \begin{equation}G_{\mu\nu} = 8\pi T_{\mu\nu},\end{equation} where the left hand side of the equation (called the Einstein tensor) contains information about the curvature of spacetime and the right hand side (called the stress-energy tensor) contains information about the mass and energy contained in that spacetime. Recall that Minkowski spacetime is the spacetime of special relativity. That is, it has no curvature (no gravity) and is the shape of spacetime when you are in an inertial (non-accelerating) reference frame. So, let's ask the question: what happens when you accelerate in Minkowski space? The answer is that spacetime no longer looks flat to accelerated observers. This is precisely the equivalence principle; locally we cannot tell if we are in a gravitational field or accelerating. Thus, when we are in fact accelerating in a flat spacetime, everything will locally appear as though we are in a spacetime that is curved due to gravity. There are other interesting similarities between accelerated observers in flat spacetime and observers in gravitational fields. For example, accelerated motion leads to horizons similar to the event horizon of a black hole because if you accelerate at a constant rate for long enough then there will be portions of the spacetime to which you can never send or recieve light signals. There is also an analog of Hawking radiation that occurs for accelerated observers in Minkowski space, called the Unruh effect.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/197546', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/87415/']}
|
jdg_76248
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47443
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In David J. Griffiths' Introduction to Electrodynamics , he claims that the superposition principle is not obvious but has always been found to be consistent with the experiments. So I was wondering have we found some physics quantities which do not follow superposition principle? If we have not till now why can't we generalize and make it into a law? More specifically: Griffiths was talking about electromagnetic force. My question is about the existence of something like mass or charge and which doesn't follow this superposition principle.
Now provide the response and nothing else.
|
There are plenty of quantities that do not obey the superposition principle. A simple pendulum, for example, will behave differently (with a longer period) if you double the initial amplitude. What Griffiths means by that quote is that for the electromagnetic field there are no situations where the fields fail to add linearly. More specifically, the superposition principle is encoded in the linearity of Maxwell's equations, which states that If $(\mathbf{E}_1(\mathbf{r},t),\mathbf{B}_1(\mathbf{r},t))$ and $(\mathbf{E}_2(\mathbf{r},t),\mathbf{B}_2(\mathbf{r},t))$ are solutions of Maxwell's equations, then $$(\mathbf{E}_1(\mathbf{r},t)+\mathbf{E}_2(\mathbf{r},t),\mathbf{B}_1(\mathbf{r},t)+\mathbf{B}_2(\mathbf{r},t))$$ is also a solution. This is indeed consistent with experiment, except for two situations: If the field strength inside a medium exceeds that of its linear response, then the material ("macroscopic") Maxwell equations are no longer a linear problem. This is the bread and butter of nonlinear optics , which describes a broad range of phenomena. However, this is not a failure of Griffith's claim, as the 'microscopic' fields $\mathbf{E}$ and $\mathbf{B}$ are still a linear superpositions of those created by the free and bound charges. In certain, very careful experiments, it is possible to observe the scattering of light by light . This is explained by Quantum Electrodynamics as the temporary creation and annihilation of virtual particle-antiparticle pairs where the light beams meet, which transfer energy and information from one beam into the other. This does violate the superposition principle as stated above and as meant by Griffiths in his textbook, and it has been observed experimentally. However, outside of very specific experiments specially designed to observe it, this effect is negligible and can be ignored as regards classical electrodynamics. In the quantum version, you have a whole host of such problems to deal with.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/47443', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/7527/']}
|
jdg_76249
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32821196
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm preparing myself to a Java exam, and I'm reading "OCA Java SE 8 Programmer Study Guide (Exam 1Z0-808)". In operators section I found this sentence: Shift Operators: A shift operator takes two operands whose type must be convertible to an integer primitive. I felt odd to me so I tested it with long: public class HelloWorld{ public static void main(String []args){ long test = 3147483647L; System.out.println(test << 1); }} and it worked, no compiler errors and result is correct. Does the book has a bug or am I misunderstanding the quote from the book?
Now provide the response and nothing else.
|
The shift operators >> and << are defined in JLS section 15.19 . Quoting: Unary numeric promotion (§5.6.1) is performed on each operand separately. (Binary numeric promotion (§5.6.2) is not performed on the operands.) It is a compile-time error if the type of each of the operands of a shift operator, after unary numeric promotion, is not a primitive integral type . When talking about "integer primitive", the book is really talking about "primitive integral type" (defined in JLS section 4.2.1 ): The values of the integral types are integers in the following ranges: For byte, from -128 to 127, inclusive For short, from -32768 to 32767, inclusive For int, from -2147483648 to 2147483647, inclusive For long, from -9223372036854775808 to 9223372036854775807, inclusive For char, from '\u0000' to '\uffff' inclusive, that is, from 0 to 65535
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32821196', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113571/']}
|
jdg_76250
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
337309
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a little stuck with this proof. Not sure where to go. I was thinking that I could first assume that $f$ is one to one and prove that it's onto, and then assume it's onto and prove that it's one to one... but I'm not sure what to do with the knowledge that $|A|=|B|=m\in\mathbb{Z}\geq 0$. I've tried to start by defining what it means for $f$ to be one-to-one, and going from there... \begin{align*}f\text{ is one-to-one}&\Longrightarrow f=\{(a,b)\in A\times B\mid b=f(a)\wedge f^{-1}(b)=a\}\\&\Longrightarrow \dots\\&\Longrightarrow \text{$f$ is onto}\end{align*} but not sure where to go from there... I have a feeling that my definition is too presumptuous to begin with, since I'm trying to show that $f$ is onto... not assume that it is.
Now provide the response and nothing else.
|
If $f$ is not one-to-one, then there are distinct $a_1, a_2 \in A$ such that $f(a_1) = f(a_2) \in B$. As a result, $$ \left| f(A) \right| < |A|.$$ So there must be some $b \in B$ with no preimage. Hence, $f$ is not onto. For the converse, just run the arguments in reverse.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/337309', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/39748/']}
|
jdg_76251
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
259654
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The standard definition of an outlier for a Box and Whisker plot is points outside of the range $\left\{Q1-1.5IQR,Q3+1.5IQR\right\}$, where $IQR= Q3-Q1$ and $Q1$ is the first quartile and $Q3$ is the third quartile of the data. What is the basis for this definition? With a large number of points, even a perfectly normal distribution returns outliers. For example, suppose you start with the sequence: xseq<-seq(1-.5^1/4000,.5^1/4000, by = -.00025) This sequence creates a percentile ranking of 4000 points of data. Testing normality for the qnorm of this series results in: shapiro.test(qnorm(xseq)) Shapiro-Wilk normality testdata: qnorm(xseq)W = 0.99999, p-value = 1ad.test(qnorm(xseq)) Anderson-Darling normality testdata: qnorm(xseq)A = 0.00044273, p-value = 1 The results are exactly as expected: the normality of a normal distribution is normal. Creating a qqnorm(qnorm(xseq)) creates (as expected) a straight line of data: If a boxplot of the same data is created, boxplot(qnorm(xseq)) produces the result: The boxplot, unlike shapiro.test , ad.test , or qqnorm identifies several points as outliers when the sample size is sufficiently large (as in this example).
Now provide the response and nothing else.
|
Boxplots Here is a relevant section from Hoaglin, Mosteller and Tukey (2000): Understanding Robust and Exploratory Data Analysis. Wiley . Chapter 3, "Boxplots and Batch Comparison", written by John D. Emerson and Judith Strenio (from page 62): [...] Our definition of outliers as data values that are smaller than $F_{L}-\frac{3}{2}d_{F}$ or larger than $F_{U}+\frac{3}{2}d_{F}$ is somewhat arbitrary, but experience with many data sets indicates that this definition serves well in identifying values that may require special attention.[...] $F_{L}$ and $F_{U}$ denote the first and third quartile, whereas $d_{F}$ is the interquartile range (i.e. $F_{U}-F_{L}$ ). They go on and show the application to a Gaussian population (page 63): Consider the standard Gaussian distribution, with mean $0$ and variance $1$ . We look for population values of this distribution that are analogous to the sample values used in the boxplot. For a symmetric distribution, the median equals the mean, so the population median of the standard Gaussian distribution is $0$ . The population fourths are $-0.6745$ and $0.6745$ , so the population fourth-spread is $1.349$ , or about $\frac{4}{3}$ . Thus $\frac{3}{2}$ times the fourth-spread is $2.0235$ (about $2$ ). The population outlier cutoffs are $\pm 2.698$ (about $2\frac{2}{3}$ ), and they contain $99.3\%$ of the distribution. [...] So [they] show that if the cutoffs are applied to a Gaussian distribution, then $0.7\%$ of the population is outside the outlier cutoffs; this figure provides a standard of comparison for judging the placement of the outlier cutoffs [...]. Further, they write [...] Thus we can judge whether our data seem heavier-tailed than Gaussian by how many points fall beyond the outlier cutoffs. [...] They provide a table with the expected proportion of values that fall outside the outlier cutoffs (labelled "Total % Out"): So these cutoffs where never intended to be a strict rule about what data points are outliers or not. As you noted, even a perfect Normal distribution is expected to exhibit "outliers" in a boxplot. Outliers As far as I know, there is no universally accepted definition of outlier. I like the definition by Hawkins (1980): An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism. Ideally, you should only treat data points as outliers once you understand why they don't belong to the rest of the data. A simple rule is not sufficient. A good treatment of outliers can be found in Aggarwal (2013). References Aggarwal CC (2013): Outlier Analysis. Springer. Hawkins D (1980): Identification of Outliers. Chapman and Hall. Hoaglin, Mosteller and Tukey (2000): Understanding Robust and Exploratory Data Analysis. Wiley.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/259654', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/137032/']}
|
jdg_76252
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1945079
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
i assumed the exterior angles meet at point D, then joined A with D and tried proving AD is angle bisector of angle A. But i cant get to the proof. any idea will be helpful
Now provide the response and nothing else.
|
It is enough to consider that a bisector (internal or external) of the angle made by two distinct lines $\ell_1,\ell_2$ meeting at some point $P$ is (part of) the locus of points $Q$ for which $d(Q,\ell_1)=d(Q,\ell_2)$. If some point $Q$ belongs to the external angle bisector through $B$ and to the external angle bisector through $C$, it fulfills$$ d(Q,BA) = d(Q,BC) = d(Q,CA) $$so by $d(Q,BA)=d(Q,CA)$, it belongs to the internal angle bisector through $A$, too.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1945079', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/359506/']}
|
jdg_76253
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13575224
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a color palette with a jPanel and a JLabel array in it. At first it worked well, but then i put some other jLabels out of the JPanel and added them some events. Now I keep getting this error: Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException: Comparison method violates its general contract!at java.util.TimSort.mergeLo(TimSort.java:747)at java.util.TimSort.mergeAt(TimSort.java:483)at java.util.TimSort.mergeCollapse(TimSort.java:410)at java.util.TimSort.sort(TimSort.java:214)at java.util.TimSort.sort(TimSort.java:173)at java.util.Arrays.sort(Arrays.java:659)at java.util.Collections.sort(Collections.java:217)at javax.swing.SortingFocusTraversalPolicy.enumerateAndSortCycle(SortingFocusTraversalPolicy.java:136)at javax.swing.SortingFocusTraversalPolicy.getFocusTraversalCycle(SortingFocusTraversalPolicy.java:110)at javax.swing.SortingFocusTraversalPolicy.getFirstComponent(SortingFocusTraversalPolicy.java:435)at javax.swing.LayoutFocusTraversalPolicy.getFirstComponent(LayoutFocusTraversalPolicy.java:166)at javax.swing.SortingFocusTraversalPolicy.getDefaultComponent(SortingFocusTraversalPolicy.java:515)at java.awt.FocusTraversalPolicy.getInitialComponent(FocusTraversalPolicy.java:169)at java.awt.DefaultKeyboardFocusManager.dispatchEvent(DefaultKeyboardFocusManager.java:380)at java.awt.Component.dispatchEventImpl(Component.java:4731)at java.awt.Container.dispatchEventImpl(Container.java:2287)at java.awt.Window.dispatchEventImpl(Window.java:2719)at java.awt.Component.dispatchEvent(Component.java:4687)at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:723)at java.awt.EventQueue.access$200(EventQueue.java:103)at java.awt.EventQueue$3.run(EventQueue.java:682)at java.awt.EventQueue$3.run(EventQueue.java:680)at java.security.AccessController.doPrivileged(Native Method)at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)at java.awt.EventQueue$4.run(EventQueue.java:696)at java.awt.EventQueue$4.run(EventQueue.java:694)at java.security.AccessController.doPrivileged(Native Method)at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)at java.awt.EventQueue.dispatchEvent(EventQueue.java:693)at java.awt.SequencedEvent.dispatch(SequencedEvent.java:116)at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:721)at java.awt.EventQueue.access$200(EventQueue.java:103)at java.awt.EventQueue$3.run(EventQueue.java:682)at java.awt.EventQueue$3.run(EventQueue.java:680)at java.security.AccessController.doPrivileged(Native Method)at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)at java.awt.EventQueue$4.run(EventQueue.java:696)at java.awt.EventQueue$4.run(EventQueue.java:694)at java.security.AccessController.doPrivileged(Native Method)at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)at java.awt.EventQueue.dispatchEvent(EventQueue.java:693)at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:244)at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:163)at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:147)at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:139)at java.awt.EventDispatchThread.run(EventDispatchThread.java:97) I tried to remove everything i've done after first time i got this error, but still keep getting it. When i change the layout from GridLayout to anything else, then the error disappears, but the code becomes useless. So i need GridLayout. When i move everything in that JPanel to another JPanel, the error also goes away. But when i remove the first JPanel, error comes back. By the way, the program works, but it's not pleasent to keep getting errors... Edit: When i use less than 225 color, there's no error. I'm really curious about what's happening. Any explanation would be appreciated...
Now provide the response and nothing else.
|
It seems to me like you've hit a bug in the JDK since the error seems to come from Swing classes. Options: Define the property java.util.Arrays.useLegacyMergeSort as true . Either using in your code the line System.setProperty("java.util.Arrays.useLegacyMergeSort", "true"); before any Swing code. As the first line in the main method should work. Or adding -Djava.util.Arrays.useLegacyMergeSort=true to your starting options (in the console, or in the project properties in an IDE, Ant script, etc.) Upgrade your JDK and see if the problem goes away Downgrade to Java 6
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13575224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1784230/']}
|
jdg_76254
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1217729
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I thought it would be nice to do something like this (with the lambda doing a yield return): public IList<T> Find<T>(Expression<Func<T, bool>> expression) where T : class, new(){ IList<T> list = GetList<T>(); var fun = expression.Compile(); var items = () => { foreach (var item in list) if (fun.Invoke(item)) yield return item; // This is not allowed by C# } return items.ToList();} However, I found out that I can't use yield in anonymous method. I'm wondering why. The yield docs just say it is not allowed. Since it wasn't allowed I just created List and added the items to it.
Now provide the response and nothing else.
|
Eric Lippert recently wrote a series of blog posts about why yield is not allowed in some cases. Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 EDIT2: Part 7 (this one was posted later and specifically addresses this question) You will probably find the answer there... EDIT1: this is explained in the comments of Part 5, in Eric's answer to Abhijeet Patel's comment: Q : Eric, Can you also provide some insight intowhy "yields" are not allowed inside ananonymous method or lambda expression A : Good question. I would love to haveanonymous iterator blocks. It would betotally awesome to be able to buildyourself a little sequence generatorin-place that closed over localvariables. The reason why not isstraightforward: the benefits don'toutweigh the costs. The awesomeness ofmaking sequence generators in-place isactually pretty small in the grandscheme of things and nominal methodsdo the job well enough in mostscenarios. So the benefits are notthat compelling. The costs are large. Iteratorrewriting is the most complicatedtransformation in the compiler, andanonymous method rewriting is thesecond most complicated. Anonymousmethods can be inside other anonymousmethods, and anonymous methods can beinside iterator blocks. Therefore,what we do is first we rewrite allanonymous methods so that they becomemethods of a closure class. This isthe second-last thing the compilerdoes before emitting IL for a method.Once that step is done, the iteratorrewriter can assume that there are noanonymous methods in the iteratorblock; they've all be rewrittenalready. Therefore the iteratorrewriter can just concentrate onrewriting the iterator, withoutworrying that there might be anunrealized anonymous method in there. Also, iterator blocks never "nest",unlike anonymous methods. The iteratorrewriter can assume that all iteratorblocks are "top level". If anonymous methods are allowed tocontain iterator blocks, then boththose assumptions go out the window.You can have an iterator block thatcontains an anonymous method thatcontains an anonymous method thatcontains an iterator block thatcontains an anonymous method, and...yuck. Now we have to write a rewritingpass that can handle nested iteratorblocks and nested anonymous methods atthe same time, merging our two mostcomplicated algorithms into one farmore complicated algorithm. It wouldbe really hard to design, implement,and test. We are smart enough to doso, I'm sure. We've got a smart teamhere. But we don't want to take onthat large burden for a "nice to havebut not necessary" feature. -- Eric
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1217729', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/571/']}
|
jdg_76255
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21386671
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am doing a large migration from physical machines to ec2 instances. As of right now I have 3 x.large nodes each with 4 instance store drives (raid-0 1.6TB). After I set this this up I remembered that "The data on an instance store volume persists only during the life of the associated Amazon EC2 instance; if you stop or terminate an instance, any data on instance store volumes is lost." What do people usually do in this situation? I am worried that if one of the boxes crash then all of the data will be lost on that box if it is not 100% replicated on another. http://www.hulen.com/?p=326 I read in the above link that these guys use ephermal drives and periodically backup the content using the EBS drives and snapshots." In this question here: How do I take a backup of aws ec2 instance/ephemeral storage? People claim that you cannot backup ephermal data onto EBS snapshots. Is my best choice to use a few EBS drives and raid0 them together and be able to take snapshots directly from them? I know this is probably the most expensive solution, however, it seems to make the most sense. Any info would be great. Thank you for your time.
Now provide the response and nothing else.
|
I have been running Cassandra on EC2 for over 2 years. To address your concerns, you need to form a proper availability architecture on EC2 for your Cassandra cluster. Here is a bullet list for you to consider: Consider at least 3 zones for setting up your cluster; Use NetworkTopologyStrategy with EC2Snitch/EC2MultiRegionSnitch to propagate a replica of your data to each zone; this means that the machines in each zone will have your full data set combined; for example the strategy_options would be like {us-east:3}. The above two tips should satisfy basic availability in AWS and in case your queries are sent using LOCAL_QUORUM, your application will be fine even if one zone goes down. If you are concerned about 2 zones going down (don't recall it happened in AWS for the past 2 years of my use), then you can also add another region to your cluster. With the above, if any node dies for any reason, you can restore it from nodes in other zones. After all, CAssandra was designed to provide you with this kind of availability. About EBS vs Ephemeral: I have always been against using EBS volumes in anything production because it is one of the worst AWS service in terms of availability. They go down several times a year, and their downside usually cascades to other AWS services like ELBs and RDS. They are also like attached Network storage, so any read/write will have to go over the Network. Don't use them. Even DataStax doesn't recommend them: http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html#cassandra/architecture/../../cassandra/architecture/architecturePlanningEC2_c.html About Backups: I use a solution called Priam ( https://github.com/Netflix/Priam ) which was written by Netflix. It can take a nightly snapshot of your cluster and copy everything to S3. If you enable incremental_backups, it also uploads incremental backups to S3. In case a node goes down, you can trigger a restore on the specific node using a simple API call. It restores a lot faster and does not put a lot of streaming load on your other nodes. I also added a patch to it which let's you do fancy things like bringing up multiple DCs inside one AWS region. You can read about my setup here: http://aryanet.com/blog/shrinking-the-cassandra-cluster-to-fewer-nodes Hope above helps.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21386671', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1991065/']}
|
jdg_76256
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
375391
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am given this: $A=B$ iff $A\bigtriangleup B \subseteq C$. And $A\bigtriangleup B :=(A\setminus B)\cup(B\setminus A)$. I dont know how to prove this and I dont know where to start. please give me guidance
Now provide the response and nothing else.
|
Hint: For an arbitrary set $C$, what is the one and only set that is the subset of every set? So given $\,A\triangle\,B \subseteq C$, where $C$ is any arbitrary set, what does this tell you about the set $A\triangle B$? And what does that tell you about the relationship between $A$ and $B$?
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/375391', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32885/']}
|
jdg_76257
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4382656
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems to me 'evil' (in the C++ FAQ sense of the word), for an operator which is generally used to access a data structure to suddenly be defined to insert data into a data structure. I guess the issue is 'what would be better'? This question is answered easily for certain types of mapped value; for example, if we map keys to pointers, you might really like operator[] to return nullptr for a non-existent key, but that clearly doesn't work for other types. It could throw an exception on non-existent key, or even default construct a temporary and return that without adding it to the map. What is the good reason for turning [] from read semantics to write semantics for this container type?
Now provide the response and nothing else.
|
The basic problem is that there is no syntactic way to reliably distinguish: dosomething(collection[foo]); from collection[foo] = something; in the operator's definition. Because it may appear in either location, the class makes sure that it can handle both, providing a default to overwrite, if necessary. If you find this to be unconscionable, then you need to avoid std::map::operator[] altogether. Another reason for this is there must be some defined behavior for when the key is not in the list. Since operator[] must return a value (either LValue or RValue), then it cannot return a null pointer, past-the-end iterator, or any other sentinel value. The only remaining option would be to raise an exception. The STL doesn't raise very many exceptions, because it is intended to be used even in cases where exceptions are not. Some other behavior must be chosen, and this is the result. The best way around this is to use a member function of std::map that doesn't have this behavior. That would be map::find() , which returns map::end if the key is not found.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4382656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/500690/']}
|
jdg_76258
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
446999
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the correct way to make this data stationary (without cutting it in half)? Quarterly Data: 1799666600017395339056173383770001731165100018043497534187160630001933585900019300627000196021200001953785400019888609000197602570001933930912518885771000203119220001970837100019326807000187337060002105981100020523668000219066480002446904000029176362000284559030003034815400031588708000335886020003266926700034233556000354703780003670904600035698357000368816800003756060300038169265000371474020003870714400039120015000404663320004103385800043142309000 Plot of data : Differences won't make it stationary adf.test(data) Augmented Dickey-Fuller Testdata: dataDickey-Fuller = -1.602, Lag order = 3, p-value = 0.7301alternative hypothesis: stationary> adf.test(diff(data)) Augmented Dickey-Fuller Testdata: diff(data)Dickey-Fuller = -1.4163, Lag order = 3, p-value = 0.8026alternative hypothesis: stationary> adf.test(diff(diff(data))) Augmented Dickey-Fuller Testdata: diff(diff(data))Dickey-Fuller = -2.6651, Lag order = 3, p-value = 0.3129alternative hypothesis: stationaryadf.test(diff(diff(diff(data)), lag=4)) Augmented Dickey-Fuller Testdata: diff(diff(diff(data)), lag = 4)Dickey-Fuller = -2.9252, Lag order = 3, p-value = 0.2138alternative hypothesis: stationary plot of differenced data: detrend with a line: Residuals definitely not stationary: 2 diffs and seasonal difference, still not unit-root stationary per ADF adf.test(diff(diff(diff(lm(y~t, g)$residuals)), lag=4, differences = 1)) Augmented Dickey-Fuller Testdata: diff(diff(diff(lm(y ~ t, g)$residuals)), lag = 4, differences = 1)Dickey-Fuller = -2.9252, Lag order = 3, p-value = 0.2138alternative hypothesis: stationary This data gets stationary if I cut it in half, which eliminates the beginning flat part of the curve. For learning I am interested in seeing how to make it stationary with this part included though, but just for reference: adf.test(diff(data[17:length(data)])) Augmented Dickey-Fuller Testdata: diff(data[17:length(data)])Dickey-Fuller = -3.8132, Lag order = 2, p-value = 0.03477alternative hypothesis: stationary I also tried tsoutliers package to look for interventions, but it gives me errors : tsoutliers::tso(data,types = c("AO", "TC","LS", "IO"),maxit.iloop=10, maxit.oloop=10)Error in arima(y, order = fit $arma[c(1, 6, 2)], seasonal = list(order = fit$ arma[c(3, : non-stationary seasonal AR part from CSStsoutliers::tso(diff(data),types = c("AO", "TC","LS", "IO"),maxit.iloop=10, maxit.oloop=10)Error in auto.arima(x = c(-601326943.954987, -56962056.0450134, -26726000, : No suitable ARIMA model foundIn addition: Warning message:In sqrt(diag(fit$var.coef)[id]) : NaNs produced How can I make this stationary without cutting the data in half? UPDATE: I performed an intervention analysis following steps from the paper by Tsay, using the model ARIMA(0, 1, 0)(1, 0, 0)4 as an estimate, and following the iterative procedure in the paper, I found the following : w v hyp types t11 102080630 9.828064e+14 3.256187 IO 224 69408316 5.806115e+14 2.880505 TC 222 39451617 3.232890e+14 2.194165 AO 223 70523673 6.465779e+14 2.773476 LC 22 The only hypothesis that passed the minimum criteria mentioned by Tsay is the innovative outlier at t=23 (23 because the data is differenced, so I add 1) This is what the transformed data looks like: The transformed data is mostly stationary after 2 differences: Also, auto.arima believes this to be the model of the transformed data: ARIMA(0,2,2)(1,0,0)[4]
Now provide the response and nothing else.
|
Differences do make it stationary when a level/step shift indicator is introduced to reflect a deterministic effect at period 23 . No need to segment the data ...just use a comprehensive hybrid model found via http://docplayer.net/12080848-Outliers-level-shifts-and-variance-changes-in-time-series.html . Here is a useful model using AUTOBOX (a time series forecasting package that I have helped to develop) (0,1,0)(1,0,0)4 with a level shift ( i.e. intercept change at period 23 and a pulse at period 22 .The Actual/Fit and Forecast is here . The Residuals are here with acf here suggesting model sufficiency . The model includes simple differencing and an intercept adjustment at period 23 along with a simple pulse and a seasonal ar component. Quite simple but powerful and elegant. Your comment "I also tried tsoutliers package to look for interventions, but it gives me errors " suggests to me possible serious software limitations or a possible "user error" which I can't immediately detect . The Actual/Cleansed graph is illuminating. while the forecast graph illustrates the 95% prediction limits using monte-carlo resampling. EDITED TO SHOW EXPLICITELY HOW A FORECAST IS MADE: The model can be expanded via algebra to present how a prediction can be seen as a "regression type model" . Here is a one-period out forecast computation where differences and autoregressive lag structure is incorporated..
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/446999', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/231639/']}
|
jdg_76259
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2447174
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It's known that there are sentences of first-order logic which only have infinite models, even if our language consists only of a binary relation $R$. An example of such a sentence is $$\forall x \exists y Rxy \wedge \forall x \forall y \forall z ((Rxy \wedge Ryz ) \to Rxz) \wedge \neg \exists x Rxx$$ What I'm curious about is whether there are still sentences with only infinite models when we mandate that our binary relation be symmetric? To phrase the question formally, consider a first-order language $\mathcal{L} = \{R^{2}\}$ (without equality). Let $\phi$ denote the formula $$\forall x \forall y (Rxy \to Ryx)$$ Is there a sentence $\psi$ of $\mathcal{L}$ such that $\phi \wedge \psi$ has an infinite model, but no finite models?
Now provide the response and nothing else.
|
Let us define the formulas: $$\kappa_3(x)=\exists u\exists v(Rxu\land Rxv\land Ruv)$$ $$\kappa_4(x)=\exists u\exists v\exists w(Rxu\land Rxv\land Rxw\land Ruv\land Ruw\land Rvw)$$ $$\alpha(x)=\neg\kappa_3(x)$$ $$\beta(x)=\kappa_3(x)\land\neg\kappa_4(x)$$ $$\gamma(x)=\kappa_4(x)$$ $$\sigma(x,y)=\alpha(x)\land\alpha(y)\land\exists u\exists v(Rxu\land Ruv\land Rvy\land\beta(u)\land\gamma(v))$$ Let $\psi$ be the conjunction of the sentences: $$\forall x\forall y\forall z(\sigma(x,y)\land\sigma(y,z)\to\sigma(x,z))$$ $$\forall x\neg\sigma(x,x)$$ $$\exists x\alpha(x)$$ $$\forall x\exists y(\alpha(x)\to\sigma(x,y))$$ Plainly, $\psi$ has no finite model. On the other hand, it is a straightforward exercise to construct an infinite model of $\phi\land\psi.$ Intuition behind this example. The (irreflexive) models of $\phi$ are just (undirected) graphs. The problem is to construct an asymmetric relation on an undirected graph. Given a vertex $x$ let $f(x)$ denote the maximum number of vertices in a clique containing $x$ . For vertices $x$ and $y$ , define $x\lt y$ to mean that $f(x),f(y)\le2$ and there is a path $x,u,v,y$ with $f(u)=3$ and $f(v)\ge4$ . Then we can write a first order sentence in the language of graph theory which says that the relation $\lt$ restricted to the set $\{x:f(x)\le2\}$ is an irreflexive transitive relation with no greatest element.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2447174', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/305615/']}
|
jdg_76260
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6793777
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Does ClickOnce only look at the application manifest file to determine which dll files to copy to the client’s machine or does it also interrogate the internals of an assembly to determine dependency files? The reason I’m asking is because I’m getting the below ClickOnce error when trying to launch a WPF .NET 4 application that has been published with ClickOnce: The file C:\Users\CNelson\AppData\Local\Temp\Deployment\PGX6P33A.35N\AJQL8AC8.D60\tx16_rtf.dll already exists. This error began after I’ve included a reference to two 3rd party .NET dlls that both reference an unmanaged dll file (tx16_rtf.dll). I want tx16_rtf.dll to be copied to the bin folder on the client’s PC so I have included it in my project and set the Build Action to ‘Content’ and the Copy to Output Directory to ‘Copy Always’. However, for some reason when I try to launch the application ClickOnce is trying to copy the file ‘tx16_rtf.dll’ twice, which results in an error. If I look at the deployment manifest file I can clearly see one and only one entry for file ‘tx16_rtf.dll’. So, my question is, why does ClickOnce try to copy file ‘tx16_rtf.dll’ twice if it only exists once in the deployment manifest file? Below is a snippet of the deployment manifest file that references ‘tx16_rtf.dll’: <file name="tx16_rtf.dll" size="839680"> <hash> <dsig:Transforms> <dsig:Transform Algorithm="urn:schemas-microsoft-com:HashTransforms.Identity" /> </dsig:Transforms> <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" /> <dsig:DigestValue>V6i2QcARl3+1SJHCugoazb9zrOY=</dsig:DigestValue> </hash> </file>
Now provide the response and nothing else.
|
In your Visual Studio solution, how is the file added? Please try the following. Add the dll to your project. If you have a reference to the dll in References, set the properties on the dll like this: Build Action = none, Copy to output directory = "do not copy". Then delete the reference and then re-add the reference, but point to that dll in your local project folder. On the REference, set "copy local" to true. If you don't have a Reference for the dll, set the properties on the dll like this: Build Action = "copy". Copy to Output Directory = "Copy always". If you have a reference, you want the reason for it to be included to be based on the reference, and not on the dll properties. If you don't have a reference, you want to set the dll specifically to be included. Also check the Application Files dialog and make sure the dll is not marked as Include(Prerequisite), but is Include or Include(Required).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6793777', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/259273/']}
|
jdg_76261
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30759097
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why does this not work? my $myHashEncoded = encode_json \%myHash;my %myHashDecoded = decode_json($myHashEncoded); I get the error: Reference found where even-sized list expected at ... So I changed it to: my $myHashEncoded = encode_json \%myHash;my $myHashDecoded = decode_json($enableInputEncoded); But then obviously %myHash is not the same as $myHashDecoded . How do I restore a proper hash from the JSON string?
Now provide the response and nothing else.
|
Assuming you are using JSON.pm, the documentation says : The opposite of encode_json: expects an UTF-8 (binary) string and tries to parse that as an UTF-8 encoded JSON text, returning the resulting reference . So you are getting back what you put in. You're putting in a hashref and you're getting a hashref back. If you want a regular hash, then you just dereference it as you would any other hashref: my $myHashRefDecoded = decode_json($myHashEncoded);my %myHashDecoded = %$myHashRefDecoded;
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30759097', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/617065/']}
|
jdg_76262
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34139230
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've created an AppBar layout like this <android.support.design.widget.AppBarLayout xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/appbar_layout" android:layout_height="@dimen/app_bar_height" android:layout_width="match_parent" android:fitsSystemWindows="true" android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar" app:elevation="20dp"> <android.support.design.widget.CollapsingToolbarLayout...></android.support.design.widget.AppBarLayout> it works and casts a shadow in the LinearLayout: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent"> <include layout="@layout/app_bar_large" /></LinearLayout> However when I put it into the CoordinatorLayout shadow is gone: <android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent"> <include layout="@layout/app_bar_large" /></android.support.design.widget.CoordinatorLayout> How can I make appbar to show its shadow again?
Now provide the response and nothing else.
|
This is actually an implementation detail of CollapsingToolbarLayout , as seen in the source code : if (Math.abs(verticalOffset) == scrollRange) { // If we have some pinned children, and we're offset to only show those views, // we want to be elevate ViewCompat.setElevation(layout, layout.getTargetElevation());} else { // Otherwise, we're inline with the content ViewCompat.setElevation(layout, 0f);} Which removes the elevation when the CollapsingToolbarLayout is showing non-pinned elements - by default, it'll only have elevation when only pinned children are visible.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34139230', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4325925/']}
|
jdg_76263
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
216249
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to specify an element in a matrix and get all the elements in the diagonal of the element. For example: $$\left(\begin{array}{cccc} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\\end{array}\right)$$ when I specify an element 7 in the above matrix, I can get all the elements {4,7,10} in the reverse diagonal of the element. When the specified matrix elements have duplicates, the multiple lists of the opposite diagonals of containing the specified element are output. Diagonal[Reverse@({ {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12} }), 0 - 2]
Now provide the response and nothing else.
|
A slightly different formulation and output from Alan 's method: fn[a_?MatrixQ, x_] := a ~Reverse~ 2 /. b_ :> (Diagonal[b, #2 - #] & @@@ Position[b, x]) Test: Mod[Range@40, 12] ~Partition~ 8 // MatrixFormfn[%, 7] $\begin{array}{cccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 0 & 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 & 9 & 10 & 11 & 0 \\ 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 0 & 1 & 2 & 3 & 4 \\\end{array}$ {{7, 2, 9, 4, 11}, {5, 0, 7, 2, 9}, {0, 7, 2}} Related: Is there a built in function to obtain the back diagonal of a matrix?
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/216249', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/-1/']}
|
jdg_76264
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47647709
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I assume this is the easiest way to implement a method alias with TS: export class Foo { bar(){ } aliasOfBar(){ return this.bar.apply(this, arguments); }} but I am simply wondering if there is another way we can define an alias with TS (or JS). Perhaps ideally without an extra function call. If I do this, for example: let mySharedFn = function () {};export class Foo { public bar = mySharedFn public aliasBar = mySharedFn} it transpiles to this: "use strict";Object.defineProperty(exports, "__esModule", { value: true });var mySharedFn = function () {};var Foo = (function () { function Foo() { this.bar = mySharedFn; this.aliasBar = mySharedFn; } return Foo;}());exports.Foo = Foo; I'd like to avoid those extra calls that come with using the constructor to create the methods, etc.
Now provide the response and nothing else.
|
You can add the alias method to the class using an interface and the prototype, like this: class Person { constructor(public name: string) {} greet(greeting: string): string { return `${greeting}, ${this.name}`; }}interface Person { hi: typeof Person.prototype.greet;}Person.prototype.hi = Person.prototype.greet;const p = new Person("Alice");console.log(p.greet("Hey"));console.log(p.hi("Hi"));
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47647709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1223975/']}
|
jdg_76265
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47372363
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to upload an image to imgur and hoping to get a url of the image back. Imgur requires and incoming images to be either a binary file, base64 data, or a URL for an image. I am converting an image to base64 and am getting an error stating that it is an invalid file type. Here is my code: let imageData = UIImagePNGRepresentation(checkView.image!) let base64Image = imageData?.base64EncodedString(options: .lineLength64Characters) let urlPath = "https://api.imgur.com/3/upload" let url = URL(string:urlPath) var request = URLRequest(url: url!) request.setValue("Client-ID MyClientIDKEy", forHTTPHeaderField: "Authorization") request.httpMethod = "POST" // create post string with username and password let postString = "image=" + base64Image! request.httpBody = postString.data(using: .utf8) let task = URLSession.shared.dataTask(with: request) { data, response, error in guard let data = data, error == nil else { // check for fundamental networking error print("Data empty or error=\(String(describing: error))") return } if let httpStatus = response as? HTTPURLResponse, httpStatus.statusCode != 405 { // check for http errors print("statusCode should be 200, but is \(httpStatus.statusCode)") print("response from status code = \(String(describing: response))") } // store data let json = try? JSONSerialization.jsonObject(with: data, options: []) as! [String:Any] let responseString = NSString(data: data, encoding: String.Encoding.utf8.rawValue) // printing feedback print("responseString = \(responseString)") print("--------------------------------") print(json) print("--------------------------------") } task.resume()} and the response I'm getting: statusCode should be 200, but is 415response from status code = Optional(<NSHTTPURLResponse: 0x1c0225280> { URL: https://api.imgur.com/3/upload } { Status Code: 415, Headers {"Access-Control-Allow-Origin" = ( "*");"Cache-Control" = ( "no-store, no-cache, must-revalidate, post-check=0, pre-check=0");"Content-Length" = ( 174);"Content-Type" = ( "application/json");Date = ( "Sat, 18 Nov 2017 23:14:27 GMT");Server = ( nginx);"access-control-allow-headers" = ( "Authorization, Content-Type, Accept, X-Mashape-Authorization, IMGURPLATFORM, IMGURUIDJAFO, SESSIONCOUNT, IMGURMWBETA, IMGURMWBETAOPTIN");"access-control-allow-methods" = ( "GET, PUT, POST, DELETE, OPTIONS");"access-control-expose-headers" = ( "X-RateLimit-ClientLimit, X-RateLimit-ClientRemaining, X-RateLimit-UserLimit, X-RateLimit-UserRemaining, X-RateLimit-UserReset");"x-post-rate-limit-limit" = ( 1250);"x-post-rate-limit-remaining" = ( 1246);"x-post-rate-limit-reset" = ( 3039);} }) responseString = Optional({"data":{"error":{"code":1003,"message":"File type invalid (2)","type":"ImgurException","exception":{}},"request":"\/3\/upload","method":"POST"},"success":false,"status":415})Optional(["status": 415, "data": { error = { code = 1003; exception = { }; message = "File type invalid (2)"; type = ImgurException; }; method = POST; request = "/3/upload";}, "success": 0])--------------------------------nil--------------------------------
Now provide the response and nothing else.
|
Your test uses a testing angular module which only has a CreatearchiveComponent , but doesn't import ReactiveFormsModule. So the FormBuilder service, provided by ReactiveFormsModule, is not available. You need to import the modules that are needed by the component under test: TestBed.configureTestingModule({ declarations: [ CreatearchiveComponent ], imports: [ReactiveFormsModule], schemas: [NO_ERRORS_SCHEMA]})
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47372363', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8808125/']}
|
jdg_76266
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
563072
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I read about Kepler's Laws and in one of them he mentions that the path of a planet is an ellipse, with the sun as one of its foci (I'm narrowing down this to only our solar system). However though I'm not experienced in this subject, I had a doubt. I read in places that the Sun is not stationary. Please correct me if it's not the case. But if it is the case, then the path of the planets is an ellipse only with respect to the Sun. So the actual path of a planet observed from, let's say, a point in space, would differ from an ellipse? Or is this already factored into the law? Please help me because I'm new to this concept.
Now provide the response and nothing else.
|
This is an interesting question, since it raises the problem of the reference frame where Kepler's laws are true, which is often neglected. As a consequence of Newton's laws, in the inertial reference frames where the center of mass (c.m.) is fixed (there is a triple infinity of them, differing only with respect to the position of the c.m.) both planet and Sun describe an elliptic motion having the center of mass as one focus of the ellipse. The two ellipses are similar, with a rescaling factor equal to the planet/Sun mass ratio. In every other inertial frame, the elliptic motion is combined with a uniform translation, therefore, in such systems, no closed orbit exist anymore. There are two additional reference frames where the orbit is an ellipse. Both are non-inertial. One is the non-inertial reference frame where the Sun is fixed. You correctly noticed that the Sun is non-stationary. But this is true in any inertial frame. If one picks precisely the non-rotating, non-inertial system where the Sun is fixed, it stays forever at the position of one focus of the elliptic orbit of the planet. Similarly, one could sit on the planet without rotations, and in that system the orbit of the Sun would be again an ellipse like the one of the planet, with the planet at one focus position. In conclusion, there is not the actual path . Shapes and properties of the orbits are not invariant with respect to changes of reference.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/563072', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/259549/']}
|
jdg_76267
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
39558316
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently I needed to install PyPdf2 to one of my programs using Anaconda. Unfortunately, I failed, but the URLs that was added to Anaconda environment prohibit the updates of all the Conda libraries.Every time I tried to update anaconda it gives the following conda update condaUsing Anaconda Cloud api site https://api.anaconda.orgFetching package metadata ..........Error: Invalid index file: https://pypi.python.org/pypi/PyPDF2/1.26.0/win-64/repodata.json: No JSON object could be decoded I typed the command conda info to see what causes the error, I found lots of URLs that points to PyPdf2! Simply, I want to remove all these URLS from anaconda's channel URLs, How can I do it? No matter manually or automatic. Note: I have uninstalled Anaconda, and reinstall, but no luck! C:\WINDOWS\system32>conda infoUsing Anaconda Cloud api site https://api.anaconda.orgCurrent conda install: platform : win-64 conda version : 4.1.6 conda-env version : 2.5.1 conda-build version : 1.21.3 python version : 2.7.12.final.0 requests version : 2.10.0 root environment : C:\Anaconda2 (writable) default environment : C:\Anaconda2 envs directories : C:\Anaconda2\envs package cache : C:\Anaconda2\pkgs channel URLs : https://pypi.python.org/pypi/PyPDF2/1.26.0/win-64/ https://pypi.python.org/pypi/PyPDF2/1.26.0/noarch/ https://conda.anaconda.org/C:\Python27\Lib\site-packages\PyPDF2/win-64/ https://conda.anaconda.org/C:\Python27\Lib\site-packages\PyPDF2/noarch/ https://conda.anaconda.org/X:\Downloads\Compressed\PyPDF2-master\/win-64/ https://conda.anaconda.org/X:\Downloads\Compressed\PyPDF2-master\/noarch/ https://github.com/mstamy2/PyPDF2/zipball/master/win-64/ https://github.com/mstamy2/PyPDF2/zipball/master/noarch/ https://pypi.python.org/pypi/PyPDF2/win-64/ https://pypi.python.org/pypi/PyPDF2/noarch/ https://pythonhosted.org/PyPDF2/win-64/ https://pythonhosted.org/PyPDF2/noarch/ https://github.com/mstamy2/PyPDF2/win-64/ https://github.com/mstamy2/PyPDF2/noarch/ https://repo.continuum.io/pkgs/free/win-64/ https://repo.continuum.io/pkgs/free/noarch/ https://repo.continuum.io/pkgs/pro/win-64/ https://repo.continuum.io/pkgs/pro/noarch/ config file : C:\Users\Dr. Mohammad Elnesr\.condarc offline mode : False is foreign system : False
Now provide the response and nothing else.
|
Expanding upon Mohammed's answer . All those URLs that you see in your conda info are your channel URLs. These are where conda will look for packages. As noted by @cel, these channels can be found in the .condarc file in your home directory. You can interact with the channels, and other data, in your .condarc file with the conda config command. For example, let's say your .condarc file lists the following channels: channels: - https://github.com/mstamy2/PyPDF2/ - defaults Then if we do conda config --get channels we will see returned: --add channels 'defaults' # lowest priority--add channels 'https://github.com/mstamy2/PyPDF2/' # highest priority If we then want to remove the github channel we would do conda config --remove channels 'https://github.com/mstamy2/PyPDF2/' . You can also add channels through the --add command so, for example, we could add back that channel with conda config --add channels 'https://github.com/mstamy2/PyPDF2/' . In this case, since there were several channels to remove, it was probably faster to simply edit the .condarc directly but it's useful to know how to do it through conda config .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39558316', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5820024/']}
|
jdg_76268
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21914632
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to implement pipe in C. eg - $ ls | wc | wc I have written the following code - #include<stdio.h>#include<stdlib.h>#include <unistd.h>void run_cmd(char *cmd, int* fd_in, int* fd_out){ int c = fork(); if (c==0) { if (fd_in != NULL) { close(fd_in[1]); dup2(fd_in[0], 0); } if (fd_out != NULL) { close(fd_out[0]); dup2(fd_out[1],1); } execlp(cmd, cmd, NULL); }}int main(int argc, char **argv){ int fd_1[2], fd_2[2], i; pipe(fd_1); pipe(fd_2); run_cmd(argv[1], NULL, fd_1); for( i=2; i<argc-1; i++) { if (i%2 == 0) run_cmd(argv[i], fd_1, fd_2); else run_cmd(argv[i], fd_2, fd_1); } if (i%2 == 0) run_cmd(argv[i], fd_1, NULL); else run_cmd(argv[i], fd_2, NULL);} This works fine with two arguments, eg - $./a.out ls wc But when I try with more than two arguments it does not work. Would anyone please tell me what's wrong with my code, or any other way to do this?
Now provide the response and nothing else.
|
This does virtually no error checking, but why so complicated? int main (int argc, char ** argv) { int i; for( i=1; i<argc-1; i++) { int pd[2]; pipe(pd); if (!fork()) { dup2(pd[1], 1); // remap output back to parent execlp(argv[i], argv[i], NULL); perror("exec"); abort(); } // remap output from previous child to input dup2(pd[0], 0); close(pd[1]); } execlp(argv[i], argv[i], NULL); perror("exec"); abort();}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21914632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2736800/']}
|
jdg_76269
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2121158
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to understand the details of the subversion merge commands. I think understanding the difference between a change that is also a conflict and a change that is not a conflict would help. This is a followup to this thread .
Now provide the response and nothing else.
|
A change that is a conflict is when 2 people have both made a change to the same file in such a way that the two changes can not be automatically resolved. 1) Lets start with an example of a non-conflicting merge. Original file line1line2line3 Person A changes it to this: line1CHANGEDline2line3 Person B changes it to this: line1line2CHANGEDline3 When those are both checked in and merged, there is no conflict because it can easily resolve to produce this final file: line1CHANGEDline2CHANGEDline3 Subversion will handle this automatically as a merge. 2) Now an example of conflicting changes. Original file line1line2line3 Person A changes it to this: line1CHANGED_BY_Aline2line3 Person B changes it to this: line1CHANGED_BY_Bline2line3 This can't be merged automatically, so it is a conflict. You will need to resolve, either by accepting person A's change or person B's change. In this case subversion will warn you of conflicts and require a decision from you on how to resolve them. 3) Finally, you can have both conflicting and non conflicting changes within the same revision. Original file line1line2line3 Person A changes it to this: line1CHANGED_BY_Aline2ALSO_CHANGED_BY_Aline3 Person B changes it to this: line1CHANGED_BY_Bline2line3ALSO_CHANGED_BY_B Now, with this example, both people have changed the file, and there is a conflicting change on line 1 that must be resolved, but lines 2 & 3 are non-conflicting changes and can be resolved automatically. You can choose to resolve this in several ways. Firstly, you can fully accept either A's or B's file and discard the other. This would result in the other persons non-conflicting changes being lost. Say, you choose to fully resolve using A, your final file would be: line1CHANGED_BY_Aline2ALSO_CHANGED_BY_Aline3 (Exactly A's file, and all changes by B are discarded) Secondly, you can resolve only the conflicting changes, and still retain all the non-conflicting changes. This was you would choose either A's or B's change for the first line, and still get both of the other line's changes, from both people. So, say for example you choose to resolve conflicts using A, your final file would be: line1CHANGED_BY_Aline2ALSO_CHANGED_BY_Aline3ALSO_CHANGED_BY_B Alternative you can use tools like KDiff that support reviewing of each conflict separately (because of course you can have mutliple changes, both conflicting and non-conflicting, within the same file), which will allow you to select different methods of resolution for each. If you are having trouble understanding merging with the command line tools I strongly recommend you take a look at KDiff (or some other GUI merge/diff tool) as they display the files alongside each other (along with the original) and allow you to visual what each resolution action would do.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2121158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/203736/']}
|
jdg_76270
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14378401
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this form : http://jsfiddle.net/dfJeN/ As you can see the name value for the input is statically set : name="username" , the form validation works fine (add something and remove all text from the input, a text must appears). Then I try to dynamically set the name value : http://jsfiddle.net/jNWB8/ name="{input.name}" Then I apply this to my validation login.{{input.name}}.$error.required (this pattern will be used in an ng-repeat) but my form validation is broken. It is correctly interpreted in my browser (if I inspect the element I saw login.username.$error.required). Any Idea ? EDIT: After logging the scope in the console it appears that the {{input.name}} expression is not interpolate. My form as an {{input.name}} attribute but no username. UPDATE: Since 1.3.0-rc.3 name="{{input.name}}" works as expected. Please see #1404
Now provide the response and nothing else.
|
You can't do what you're trying to do that way. Assuming what you're trying to do is you need to dynamically add elements to a form, with something like an ng-repeat, you need to use nested ng-form to allow validation of those individual items: <form name="outerForm"><div ng-repeat="item in items"> <ng-form name="innerForm"> <input type="text" name="foo" ng-model="item.foo" /> <span ng-show="innerForm.foo.$error.required">required</span> </ng-form></div><input type="submit" ng-disabled="outerForm.$invalid" /></form> Sadly, it's just not a well-documented feature of Angular.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14378401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1829288/']}
|
jdg_76271
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13763880
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Say I have the following file <block> <foo val="bar"/> <foo val="bar"/></block><block> <foo val="bar"/> <foo val="bar"/></block> How could I make that into <block> <foo val="bar1"/> <foo val="bar"/></block><block> <foo val="bar1"/> <foo val="bar"/></block> One thing I tried to do was record a macro with :%s/bar/bar1/gc and press y and n once each and then try to edit that macro. For some reason I cannot edit the macro. :(
Now provide the response and nothing else.
|
Just to show that this can be done in a substitution: :let a = ['', '1']:%s/bar\zs/\=reverse(a)[0]/g Overview Replace at the end of every bar with the first element of array in variable a after the array is reversed in-place upon every substitution. Glory of Details let a = ['', '1'] define an variable a to hold our array %s/.../.../ do a substitution on every line in the file %s/bar\zs/.../ do a substitution on bar but start the replacement after bar using \zs \= inside the replacement portion of the :s command uses the value of the following expression reverse(a) reverse simply reverses the array, but does so in-place reverse(a)[0] reverse returns the now reversed array so get the first element /g replace all occurances in the line (optional) General Case :let a = ['a', 'b', 'c']:%s/bar\zs/\=add(a, remove(a, 0))[-1]/g The general case "rotates" the array, a , in-place and uses the last position of the array as the value for the replacement of the substitution. For more help see :h :s:h range:h /\zs:h :s\=:h reverse(:h :s_flags:h Lists:h add(:h remove
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13763880', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/608128/']}
|
jdg_76272
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32006113
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm try to use ui-router to manage nested views on a single page app. Let's say I want to create a dashboard application with a common area and multiple views. The main and the nested states are handled like this: $stateProvider.state('home', { url: '/', template: 'MY HOME PAGE' }) .state('login', { url: '/login', templateUrl: '/pages/login.html' }) .state('registration', { url: '/registration', templateUrl: '/pages/registration.html' }) .state('dashboard', { url: '/dashboard', templateUrl: '/pages/dashboard/dashboard.html' }) .state('dashboard.me', { url: '/me', templateUrl: '/pages/dashboard/me.html' }) .state('dashboard.messages', { url: '/messages', templateUrl: '/pages/dashboard/messages.html' }) .state('dashboard.friends', { url: '/friends', templateUrl: '/pages/dashboard/friends.html' }); The dashboard HTML page is the following: <div class="container" ng-controller="dashboardCtrl" ng-init="init()"><h2>DASHBOARD</h2><ul> <li><a ui-sref="dashboard.me">Me</a></li> <li><a ui-sref="dashboard.messages">My Messages</a></li> <li><a ui-sref="dashboard.friends">My Friends</a></li></ul><div ui-view></div> The above HTML is also included inside an ui-view . Everything works fine if I navigate my application using the anchors.By the way if I try to go directly to myhost/dashboard/me or myhost/dashboard/friends (every path with two levels of nested views) the app doesn't work. I get an angular (unexpected token <) but I don't think it's relevant... It seems like it's not able to resolve the first level of nested view.The following images show the HTML obtained when the navigation is done using anchors: and the HTML obtained when the page is called directly from the browser address link: Any ideas? Thanks.
Now provide the response and nothing else.
|
The date_format validator takes only one date format as parameter. In order to be able to use multiple formats, you'll need to build a custom validation rule. Luckily, it's pretty simple. You can define the multi-format date validation in your AppServiceProvider with the following code: class AppServiceProvider extends ServiceProvider { public function boot() { Validator::extend('date_multi_format', function($attribute, $value, $formats) { // iterate through all formats foreach($formats as $format) { // parse date with current format $parsed = date_parse_from_format($format, $value); // if value matches given format return true=validation succeeded if ($parsed['error_count'] === 0 && $parsed['warning_count'] === 0) { return true; } } // value did not match any of the provided formats, so return false=validation failed return false; }); }} You can later use this new validation rule like that: 'trep_txn_date' => 'date_multi_format:"Y-m-d H:i:s.u","Y-m-d"' You can read more about how to create custom validation rules here: http://laravel.com/docs/5.1/validation#custom-validation-rules
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32006113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4094496/']}
|
jdg_76273
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19524714
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on some fortran-calling-C code and am unclear about the use of the iso_c_binding module. I have fortran and C interfaces working successfully without iso_c_binding, and the question is if I should still explicitly bind functions and variables. For example, this works: program testprog...interface subroutine c_parser(param) integer, intent(in) :: param end subroutineend interfaceinteger :: acall c_parser(a)..end program/****************/void c_parser_ (int* param) So append an underscore to a C function, write an interface to it, and call it from a fortran program.I dont use pointers or allocatables, all my code has ints, char, float and logical that need to be moved from a fortran subroutine to C. What exact purpose does iso_c_binding serve? Are there any gotchas? As an example, this mentions a caveat when using strings through the binding (see the part "Unfortunately, on at least the GNU and Intel compilers, the statement").
Now provide the response and nothing else.
|
The "working" approach in the question is inherently Fortran processor specific. It is a common arrangement on some operating systems due to historical convention, but it is by no means ubiquitous. Within a compiler family, aspects of the calling convention will vary with compile options and have varied with compiler version in a way that may break that approach. ISO_C_BINDING is just a module that provides some constants, some types and some procedures. It happens to be one that is specified by the standard and supplied by the compiler vendor (it is an intrinsic module), but otherwise it has no special powers. Those constants, types and procedures all help a Fortran programmer create data objects, data pointers and procedure pointers that are compatible with their C counterparts, in a portable way. Some of the the types and procedures are special - in that the programmer cannot necessarily create them by writing their own Fortran code. There is much more to C interoperability than the use of the intrinsic module. Putting USE ISO_C_BINDING at the top of a scope doesn't, on its own, change anything, bar making certain identifiers accessible. The BIND(C) attribute must be applied to variables, derived types, common blocks and procedures that need to be interoperable between C and Fortran. In some cases, this attribute also specifies the binding name (C name) for the thing. For derived types and common this might change the alignment and ordering of components, while for procedures it might change aspects of the procedure's calling convention. There are also a set of requirements around the nature of data objects and procedures that are interoperable, that the programmer must follow. (The "Unfortunately..." comment in the linked fortran wiki page seems to be a misunderstanding about how assignment to a character array works in Fortran - it has nothing to do with C interoperability per se, bar the requirement that it is LEN=1 character arrays in Fortran that are interoperable with C char arrays.) If the suite of targeted compilers for a particular project support the necessary parts of Fortran 2003, then I can think of no valid reason to not be using the C interoperability features of Fortran 2003. The improvement in robustness and portability of the resulting code that results from the use of this feature is compelling.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19524714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/504898/']}
|
jdg_76274
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
89
|
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Eventually, the Sun will run out of fuel, go dark, and other bad things will happen. About how much long is it predicted that we will have a sun functioning as it is currently?
Now provide the response and nothing else.
|
Just to add to Undo's answer, after the expansion to a red giant, the sun will become a planetary nebula , where (according to the link) the fusion reactions inside the star are 'overtaken' by the gravitational collapse, in turn causing the inner layers to condense and heat up causing the outer layers to be blown away. After which, the hot core forms the remnant, a White Dwarf star (NASA source), which is likely to last for several more billions of years. The image below depicts the current ideas of the expected lifecycle and timeline of the life of the sun: Image source How do we know what will happen to the sun? Currently, the main method to determine the solar lifecycle is described by the article "The Future of the Sun" (Cowing, 2013) is to: Studying stars with the same mass and composition as the Sun, the so-called "solar twins," can give us more information about our own Sun; solar twins of various ages offer snapshots of the Sun's evolution at different phases Where the mass and chemical composition of a star provide the information needed to determine its lifecycle.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/89', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/19/']}
|
jdg_76275
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14694852
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can overridden methods have different return types ?
Now provide the response and nothing else.
|
Java supports * covariant return types for overridden methods. This means an overridden method may have a more specific return type. That is, as long as the new return type is assignable to the return type of the method you are overriding, it's allowed. For example: class ShapeBuilder { ... public Shape build() { ....}class CircleBuilder extends ShapeBuilder{ ... @Override public Circle build() { ....} This is specified in section 8.4.5 of the Java Language Specification : Return types may vary among methods that override each other if the return types are reference types. The notion of return-type-substitutability supports covariant returns, that is, the specialization of the return type to a subtype. A method declaration d1 with return type R1 is return-type-substitutable for another method d2 with return type R2, if and only if the following conditions hold: If R1 is void then R2 is void. If R1 is a primitive type, then R2 is identical to R1. If R1 is a reference type then: R1 is either a subtype of R2 or R1 can be converted to a subtype of R2 by unchecked conversion (§5.1.9), or R1 = |R2| ("|R2|" refers to the erasure of R2, as defined in §4.6 of the JLS .) * Prior to Java 5, Java had invariant return types, which meant the return type of a method override needed to exactly match the method being overridden.
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/14694852', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1801269/']}
|
jdg_76276
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23776
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My question is how to interpret sigma algebra, especially in the context of probability theory (stochastic processes included). I would like to know if there is some clear and general way to interpret sigma algebra, which can unify various ways of saying it as history, future, collection of information, size/likelihood-measurable etc? Specifically,I hope to know how to interpret the following in some consistent way: being given/conditional on a sigma algebra a subset being measurable or nonmeasurable w.r.t. a sigmaalgebra a mapping being measurable or nonmeasurable w.r.t. asigma algebra in domain and anothersigma algebra in codomain a collection of increasing sigma algebras, i.e. a filtration of sigma algebras ... Following are a list of examples that I have met. They are nice examples, but I feel their ways of interpretation are not clear and consistent enough for me to apply in practice. Even if there is no unified way to interpret all the examples, I would like to know what some different ways of interpretation are. Stopping time Let $(I, \leq)$ be an ordered index set, and let $(\Omega, \mathcal{F},\mathcal{F}_t, \mathbb{P})$ be a filtered probability space. Then a random variable $\tau : \Omega \to I$ is called a stopping time if $\{ \tau \leq t \} \in \mathcal{F}_{t} \forall t \in I$. Speaking concretely, for τ to be a stopping time, it should be possible to decide whether or not $\{ \tau \leq t \}$ has occurred on the basis of the knowledge of $\mathcal{F}_t$, i.e., event $\{ \tau \leq t \}$ is $\mathcal{F}_t$-measurable. I was still wondering how exactly to "decide whether or not $\{ \tau \leq t \}$ has occurred on the basis of the knowledge of $\mathcal{F}_t$, i.e., event $\{ \tau \leq t \}$ is $\mathcal{F}_t$-measurable." Martingale process If a stochastic process $Y : T \times \Omega \rightarrow S$ is a martingale with respect to a filtration $\{ \Sigma_t\}$ and probability measure $P$, then for all s and t with $s < t$ and all $F \in \Sigma_s$, $$Y_s = \mathbf{E}_{\mathbf{P}} ( Y_t | \Sigma_s ),$$ where $\Sigma_s $ is interpreted as "history". I was also wondering how $\Sigma_s, s < t$ can act as history, $\Sigma_s, s=t$ as present, and $\Sigma_s, s > t$ as future? I originally interpret a measurablesubset wrt a sigma algebra as asubset whose "size"/"likelihood" is measurable,and the class of suchsize-measurable subsets must beclosed under complement andcountable union. In a post by Nate Eldredge , ameasurable subset wrt a sigmaalgebra is interpreted by analogy of questions being answered: If I know the answer to a question $A$, then I also know the answer to its negation, which corresponds to the set $A^c$ (e.g. "Is the dodo not-extinct?"). So any information that is enough to answer question $A$ is also enough to answer question $A^c$. Thus $\mathcal{F}$ should be closed under taking complements. Likewise, if I know the answer to questions $A,B$, I also know the answer to their disjunction $A \cup B$ ("Are either the dodo or the elephant extinct?"), so $\mathcal{F}$ must also be closed under (finite) unions. Countable unions require more of a stretch, but imagine asking an infinite sequence of questions "converging" on a final question. ("Can elephants live to be 90? Can they live to be 99? Can they live to be 99.9?" In the end, I know whether elephants can live to be 100.) Thanks in advance for sharing your views, and any reference that has related discussion is also appreciated!
Now provide the response and nothing else.
|
Gambling is a good starting-point for probability. We can treat $\sigma$-field as a structure of events as we need to define the addition and multiplication for numbers. The completeness of the real numbers is suitable for our calculations, and $\sigma$-field plays the same role. I hope the following gambling example helps you to understand the filtration and conditional expectation. Assuming that two people, say player A and player B, bet on the results of two coin tosses.H: head T: tail At the time $0$, A and B do not know anything about the result except that one of the events in $\Omega=\{HH,HT,TH,TT\}$ will happen. Hence the information at time $0$ that they both know is $\mathcal{F}_0=\{\emptyset,\Omega\}$. At the time $1$, the coin had been tossed only once; and they know that the events in the $\sigma$-field $\mathcal{F}_1=\{\emptyset, \Omega, \{HH,HT\},\{TH,TT\}\}\supset \mathcal{F}_0 $ could happen. At the time $2$, the coin had been tossed twice; and they know that the events in the $\sigma$-field $\mathcal{F}_2=\{\emptyset, \Omega,\{HH,HT\},\{TH,TT\},\{HH\},\{HT\},\{TH\},\{TT\}\}\supset \mathcal{F}_1$ could happen which means they know everything about the gambling results. Please notice the evolution of information characterized by the filtrations $\mathcal{F}_0,\mathcal{F}_1,\mathcal{F}_2.$ With time passing, the unknown world $\Omega$ is divided more finely. It is something like water flows through pipes. Assuming that they bet on the following results and the coin is fair.$$X(\omega)=\left\{ \begin{array}{l}2, \omega=HH,\mbox{means the first tossing is H, and the second tossing is H}\\1, \omega=HT,\mbox{means the first tossing is H, and the second tossing is T}\\1, \omega=TH,\mbox{means the first tossing is T, and the second tossing is H} \\0, \omega=TT,\mbox{means the first tossing is T, and the second tossing is T}\\\end{array} \right.$$ Then, we have $$E[X|\mathcal{F}_0](\omega)=1\qquad\text{for every}\ \omega $$$$E[X|\mathcal{F_2}](\omega)=X(\omega)\qquad\text{for every}\ \omega $$$$E[X|\{HH,HT\}]=2P(HH|\{HH,HT\})+1P(HT|\{HH,HT\})$$$$+1P(TH|\{HH,HT\})+0P(TT|\{HH,HT\})=\frac{3}{2}$$$$E[X|\{TH,TT\}]=2P(HH|\{TH,TT\})+1P(HT|\{TH,TT\})$$$$+1P(TH|\{TH,TT\})+0P(TT|\{TH,TT\})=\frac{1}{2} $$ $$E[X|\mathcal{F_1}](\omega)=\left\{ \begin{array}{l} \frac{3}{2}, \omega\in \{HH,HT\}\\\frac{1}{2}, \omega \in \{TH,TT\}\end{array} \right.$$ I hope those would be helpful.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/23776', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1281/']}
|
jdg_76277
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28848187
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What's the best way (fastest performance) to convert from []int8 to string? For []byte we could do string(byteslice) , but for []int8 it gives an error: cannot convert ba (type []int8) to type string I got the ba from SliceScan() method of *sqlx.Rows that produces []int8 instead of string Is this solution the fastest? func B2S(bs []int8) string { ba := []byte{} for _, b := range bs { ba = append(ba, byte(b)) } return string(ba)} EDIT my bad, it's uint8 instead of int8 .. so I can do string(ba) directly.
Now provide the response and nothing else.
|
Note beforehand: The asker first stated that input slice is []int8 so that is what the answer is for. Later he realized the input is []uint8 which can be directly converted to string because byte is an alias for uint8 (and []byte => string conversion is supported by the language spec ). You can't convert slices of different types, you have to do it manually. Question is what type of slice should we convert to? We have 2 candidates: []byte and []rune . Strings are stored as UTF-8 encoded byte sequences internally ( []byte ), and a string can also be converted to a slice of runes. The language supports converting both of these types ( []byte and []rune ) to string . A rune is a unicode codepoint. And if we try to convert an int8 to a rune in a one-to-one fashion, it will fail (meaning wrong output) if the input contains characters which are encoded to multiple bytes (using UTF-8) because in this case multiple int8 values should end up in one rune . Let's start from the string "世界" whose bytes are: fmt.Println([]byte("世界"))// Output: [228 184 150 231 149 140] And its runes: fmt.Println([]rune("世界"))// [19990 30028] It's only 2 runes and 6 bytes. So obviously 1-to-1 int8 -> rune mapping won't work, we have to go with 1-1 int8 -> byte mapping. byte is alias for uint8 having range 0..255 , to convert it to []int8 (having range -128..127 ) we have to use -256+bytevalue if the byte value is > 127 so the "世界" string in []int8 looks like this: [-28 -72 -106 -25 -107 -116] The backward conversion what we want is: bytevalue = 256 + int8value if the int8 is negative but we can't do this as int8 (range -128..127) and neither as byte (range 0..255) so we also have to convert it to int first (and back to byte at the end). This could look something like this: if v < 0 { b[i] = byte(256 + int(v))} else { b[i] = byte(v)} But actually since signed integers are represented using 2's complement , we get the same result if we simply use a byte(v) conversion (which in case of negative numbers this is equivalent to 256 + v ). Note: Since we know the length of the slice, it is much faster to allocate a slice with this length and just set its elements using indexing [] and not calling the built-in append function. So here is the final conversion: func B2S(bs []int8) string { b := make([]byte, len(bs)) for i, v := range bs { b[i] = byte(v) } return string(b)} Try it on the Go Playground .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28848187', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1620210/']}
|
jdg_76278
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
100715
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
To calculate the volume of a pyramid (not tetrahedron!) you've to use the formula $\frac{1}{3}B\cdot H,$ where $B$ is the area of the base and $H$ is the height. My question is: why 1/3? Is a pyramid one-third of a cuboid?
Now provide the response and nothing else.
|
While using calculus to derive this is a bit heavy-handed, consider this: $V= \int A(h) \,\mathrm{d}h$, where $A$ is the area of a cross-section parallel to the base, but at distance $h$ from the apex. Since the length of the sides of the cross-section grows $\propto h$, $A(h)\propto h^2$. But $B=A(H)$, so $A(h)=\frac{h^2}{H^2}B$. Evaluate the integral:$$ V= \int_0^H A(h)\,\mathrm{d}h = \int_0^H B \frac{h^2}{H^2}\,\mathrm{d}h = \frac{1}{3}BH$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/100715', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14370/']}
|
jdg_76279
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8307620
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a webpage loaded in a UIWebView, and a javascript function of the page needs to data from native iOs app, a NSString. How can a Js function access the data in native app? Thanks, lvreiny
Now provide the response and nothing else.
|
You can execute JavaScript in your UIWebView from Obj-C. Simply call [webView stringByEvaluatingJavaScriptFromString:@"myJavaScript"]; . I could imagine a setup like this: Webpage <html> <head> <script type="text/javascript"> function callmeFromObjC(para1) { // do something alert(para1); } </script> </head> <body> </body></html> Objective-C NSString *myParameter = @"myParameter";[webView stringByEvaluatingJavaScriptFromString:[NSString stringWithFormat:@"callmeFromObjC('%@')", myParameter]];
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8307620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/737585/']}
|
jdg_76280
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
962214
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
When I try to set root 's password: root@OpenWrt:~# passwdChanging password for rootEnter the new password (minimum of 5, maximum of 8 characters)Please use a combination of upper and lower case letters and numbers. It seems the maximum length is 8. If I try to set a password longer than 8, only the first 8 characters are valid. How can I set a longer password for root ? My OpenWrt version: Linux OpenWrt 4.14.108 #0 SMP Wed Mar 27 21:59:03 2019 x86_64 GNU/Linux
Now provide the response and nothing else.
|
This is because DES-based crypt (AKA 'descrypt') truncates passwords at 8 bytes, and only checks the first 8 for the purpose of password verification. That's the answer to your direct question, but here's some general advice implied by your context: Fortunately, from my reading, MD5 in /etc/login.defs is actually md5crypt ($1$), which, while a little outdated and declared deprecated by its author , is still far superior to DES-based crypt (and definitely much better than a raw, unsalted hash like plain MD5! Most unsalted hashes can be cracked on commodity GPUs at rates of billions per second) It looks like SHA256 (actually sha256crypt) and SHA512 (actually sha512crypt) are also there. I would pick one of those instead. If you set your password to password or something under each scheme, you can visually verify whether or not my conclusion that they're the -crypt variants is correct (examples here are taken from the hashcat example hashes , all 'hashcat', some wrapped for readability): Not recommended - unsalted or legacy hash types, much too "fast" (cracking rates) for password storage: MD5 - 8743b52063cd84097a65d1633f5c74f5SHA256 - 127e6fbfe24a750e72930c220a8e138275656b8e5d8f48a98c3c92df2caba935SHA512 - 82a9dda829eb7f8ffe9fbe49e45d47d2dad9664fbb7adf72492e3c81ebd3e2 \ 9134d9bc12212bf83c6840f10e8246b9db54a4859b7ccd0123d86e5872c1e5082fdescrypt - 48c/R8JAv757A OK - much better than unsalted, no truncation, but no longer sufficiently resistant to brute force on modern hardware: md5crypt - $1$28772684$iEwNOgGugqO9.bIz5sk8k/ Better - relatively modern hashes with large salts and work factors: sha256crypt - $5$rounds=5000$GX7BopJZJxPc/KEK$le16UF8I2Anb.rOrn22AUPWvzUETDGefUmAV8AZkGcDsha512crypt - $6$52450745$k5ka2p8bFuSmoVT1tzOyyuaREkkKBcCNqoDKzYiJL9RaE8yMnPgh2XzzF0NDrUhgrcLwg78xs1w5pJiypEdFX/ Of these, only descrypt truncates at 8. The last two are your best bet. (Side note: the digits-only salts in the md5crypt and sha512crypt examples above are just side effects of how hashcat creates example hashes; real, healthy salts are usually drawn from a much larger keyspace). Note also that I'm only listing the hash types that are supported by /etc/login.defs on this platform. For general use, even sha256crypt and sha512crypt have been superseded - first by bcrypt, and then later by truly parallel-attack-resistant hashes like scrypt and the Argon2 family. (Note, however, that for interactive logins that should complete in under one second, bcrypt is actually more resistant to attack than the latter)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/962214', 'https://serverfault.com', 'https://serverfault.com/users/306531/']}
|
jdg_76281
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6654351
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am uploading a file in php and only want to upload it if it's a csv file. I believe my syntax is right for the content type. It always goes to else statement when it's a csv file. What I am doing wrong here? if (($_FILES["file"]["type"] == "text/csv")){}else{} If I change the content type it works for that format just not csv.
Now provide the response and nothing else.
|
the mime type might not be text/csv some systems can read/save them different. (for example sometimes IE sends .csv files as application/vnd.ms-excel ) so you best bet would be to build an array of allowed values and test against that, then find all possible values to test against. $mimes = array('application/vnd.ms-excel','text/plain','text/csv','text/tsv');if(in_array($_FILES['file']['type'],$mimes)){ // do something} else { die("Sorry, mime type not allowed");} if you wished you could add a further check if mime is returned as text/plain you could run a preg_match to make sure it has enough commas in it to be a csv.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6654351', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/783204/']}
|
jdg_76282
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1630238
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given the following equation: $$10 = 4^x \pmod {18}$$ How can one know what are the correct values for $x$ ?
Now provide the response and nothing else.
|
Let us define a ring-homomorphism $u:R\to T$ by the universal property that a ring-homomorphism $f:R\to S$ factors (uniquely) through $u$ iff $f$ maps every element of $D$ to a unit and every element of $\mathfrak{a}$ to $0$. From the universal property of $D^{-1}R$, we see that any $f$ which factors through $u$ must factor through the localization map $i:R\to D^{-1}R$ via a unique map $g:D^{-1}R\to R$. Moreover, a map $f$ that factors through $i$ will factor through $u$ iff additionally $f(a)=g(i(a))=0$ for all $a\in \mathfrak{a}$. This happens iff $g$ maps all of $i(\mathfrak{a})$ to $0$. Equivalently, $g$ must factor through the quotient of $D^{-1}R$ by the ideal in $D^{-1}R$ generated by $i(\mathfrak{a})$, which is exactly $D^{-1}\mathfrak{a}$. This shows that maps factoring through $u$ are naturally in bijection with maps factoring through the composition $R\to D^{-1}R\to D^{-1}R/D^{-1}\mathfrak{a}$, so $T\cong D^{-1}R/D^{-1}\mathfrak{a}$. On the other hand, we can do the same thing but in the reverse order to get that $T\cong \bar{D}^{-1}R/\mathfrak{a}$ as well. By the universal property of $R/\mathfrak{a}$, we see that if $f$ factors through $u$ then it factors through the quotient map $p:R\to R/\mathfrak{a}$ via a unique map $h:R/\mathfrak{a}\to S$. And a map $f$ which factors through $h$ will factor through $u$ iff additionally $f(d)=h(p(d))$ is a unit for all $d\in D$. This is equivalent to saying that $h$ factors through the localization of $R/\mathfrak{a}$ with respect to $p(D)=\bar{D}$. So $f$ factors though $u$ iff it factors through the composition $R\to R/\mathfrak{a}\to \bar{D}^{-1}R/\mathfrak{a}$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1630238', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/308710/']}
|
jdg_76283
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2784117
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $(x_n)$ be a sequence in a metric space $S$. Prove: if $x_n \rightarrow a$ and $x_n \rightarrow b$ then $a=b$. Assume $a \neq b$. Take two balls $B_r(a)$ and $B_r(b)$ with such an $r$ that $B_r(a) \cap B_r(b)=\emptyset$. Then WLOG assume there is a sequence $(x_n)$ that converges to $a$. Then for all $\varepsilon > 0$, there is $N \in \mathbb{N}$ so that $d(x_n,a) < \varepsilon$ for all $n \geq N$. Then for $\varepsilon < r$, all the terms of the sequence past the corresponding $N$ are in $B_r(a)$ and not in $B_r(b)$, so the sequence certainly does not converge to $b$. Is it correct enough?
Now provide the response and nothing else.
|
It's mostly fine, but it's even nicer if you say "and take $r = d(b, a)/2 > 0$"; then your reader knows that $B_r(a) \cap B_r(b) = \emptyset$. Also, "WLOG assume there is a sequence $(x_n)$ that converges to $a$" is a bunch of fancy words that say nothing. Why? Because the sequence $a, a, \ldots,$ is a sequence that converges to $a$. There's no "loss of generality" here, and no need to assume anything. Furthermore, you can't assume anything about $(x_n)$ -- it's given in the hypotheses. So a rewrite might be: Let $(x_n)$ be a sequence in a metric space $S$. Prove: if $x_n \rightarrow a$ and $x_n \rightarrow b$ then $a=b$. Assume $a \neq b$, and let $r = \frac{d(a, b)}{2}$. Because $a \ne b$, we have $r > 0$. Then $B_r(a) \cap B_r(b)=\emptyset$. By definition of convergence, for all $\varepsilon > 0$, there is $N \in \mathbb{N}$ so that $d(x_n,a) < \varepsilon$ for all $n \geq N$. In particular, for $\varepsilon = r$, there is such an $N$. That means that $$d(x_n, a) < r$$for all $n > N$, and hence (because of the empty intersection of balls) that$$d(x_n, b) > r$$for all $n > N$. But that contradicts the definition of $x_n \to b$, for the choice $\varepsilon = r$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2784117', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/204029/']}
|
jdg_76284
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29868078
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an loop Angular JS: angular.forEach($scope.message, function (item) { return (item.id_user == input.id_user) ? true : false;}); How to get index of array element in loop for each item ?I tried: angular.forEach($scope.message, function (item, $index) {});
Now provide the response and nothing else.
|
Sorry for all the vitriol of the community. You're very close to your solution but are a bit confused by documentation. It's okay, let me help clarify! In the documentation for angular.forEach you will see the following statement: Invokes the iterator function once for each item in obj collection, which can be either an object or an array. The iterator function is invoked with iterator(value, key, obj), where value is the value of an object property or an array element, key is the object property key or array element index and obj is the obj itself. Specifying a context for the function is optional. And then the following example: var values = {name: 'misko', gender: 'male'};var log = [];angular.forEach(values, function(value, key) { this.push(key + ': ' + value);}, log);expect(log).toEqual(['name: misko', 'gender: male']); Essentially, the code is like this: angular.forEach('name of list/array you want to loop through', 'callback function to be called for each element of the list') The important part that you're missing is that the 'callback...' mentioned above can be handed 3 variables which you can then use in your callback. Your callback will be called for each element in the list. Here is some explanation of those 3 variables: Value: The value of the i-th element/property in the list/array/object Key: i - the index belonging to the current item in the array Object: the the object itself (or array/list itself) Here is an example i put together for you where I use the Key to create a new string showing the index of each letter in $scope.message. Hope this helped!
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29868078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4828293/']}
|
jdg_76285
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
485083
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to ask if I understand correctly the feature importance in random forest.I am examine random forest by selecting 4 or 6 features and also with different number of trees.I would like to know if I get a result like using 25, 50, 75, 100 trees with 4 features and 6 features. the result for having 25 tree with 4 features better because those randomly selected features were more important than when build a model with 75 trees? Is it correct or I completely misunderstand feature importance?many thanks
Now provide the response and nothing else.
|
This post is an honest response to a common problem in the textbook presentation of regression, namely, the issue of what is random or fixed. Regression textbooks typically blithely state that the $X$ variables are fixed and go on their merry way, when in practice this assumption eliminates most of the interesting regression applications. Rather than assume the $X$ variables are fixed, a better route to understanding regression analysis is to take a conditional distribution approach, one where the $X$ 's are assumed random throughout, and then the case of fixed $X$ (which occurs only in very narrow experimental designs, and at that only when the experiment is performed without error) is subsumed as a special case where the distributions are degenerate. What the OP is missing is the link from random $X$ to fixed realizations of $X$ ( $X=x$ ), which all starts from the Law of Total Expectation: Assume $U$ and $V$ are random, with finite expectation. Let $E(U | V=v) = \mu(v)$ . Then $E(U) = E\{\mu(V)\}$ . This "Law" (which is actually a mathematical theorem) allows you to prove unbiasedness of the estimate $\hat \beta $ in two steps: (i) by first showing that it is unbiased, conditional on the $X$ data, and (ii) by using the Law of Total Expectation to then show that it is unbiased when averaged over all possible realizations of the $X$ data. (The average of 11,11, 11, 11, 11, 11, ... is 11, e.g.). Answers to the OP: Q1. Do we treat $(X_i,Y_i)$ 's as random variables? A1. Yes. They are random in the sense of the model, which describes the way that potentially observable values of such data might appear. Of course the actual observed data, $(x_i, y_i)$ , are not random. Instead, they are fixed values, one many possible realizations of the potentially observable random variables $(X_i, Y_i)$ . In rare cases, the $X$ data are fixed, but this is covered as a special case of randomness, so it is easier and safer just to assume randomness always. Q2. Do we treat $\beta_0$ and $\beta_1$ as random variables? A2. This is somewhat off topic from the OP, but still a very important question. From the scientist's conceptualization of reality, these are ordinarily fixed values. That is, the scientist assumes that there is a rigid structure responsible for the production of all of the $(Y_i | X_i = x_i)$ data values, and these $\beta_0, \beta_1$ values are part of that rigid structure. Now, the parameters $\beta_0, \beta_1$ are uncertain in the scientist's mind (which is why he or she is collecting data in the first place!), so the scientist may choose to view them, mentally, as "random." The scientist has some ideas about the possible values of these parameters based on logic, subject matter considerations, and past data, and these ideas form the scientist's "prior distribution." The scientist then may update this prior using current data to obtain her/his posterior. That, in a nutshell, in what Bayesian statistics is all about. But again, that issue is a little off topic from the OP, so let's consider everything conditional on the scientist's conceptualization that there is a rigid structure, and that these $\beta_0, \beta_1$ values are fixed in reality. In other words, all of my replies other than this one assume that the $\beta$ 's are fixed. Q3. Do we treat $\hat \beta_0$ and $\hat \beta_1$ as random variables? A3. Here is another place where typical regression teaching sources are slippery. In some cases, they refer to the estimates $\hat \beta_0$ and $\hat \beta_1$ as functions of the (fixed) data that has been collected, and sometimes they refer to them as functions of the (random) potentially observable data, but use the same symbols $\hat \beta_0$ and $\hat \beta_1$ in either case. Often, you just have to understand from context which is which. Whenever you see $E(\hat \beta)$ , you can assume that $\hat \beta$ is a function of the random data, i.e., that $\hat \beta$ is a function of the $(X_i, Y_i)$ . Whenever you see the value of $\hat \beta$ reported, e.g., following a computer printout of results from a regression analysis, you can assume that $\hat \beta$ is a function of the fixed data sample, i.e., that $\hat \beta$ is a function of the $(x_i, y_i)$ . Q4. What can have an expected value and what can't (what gets treated as a constant when finding expected values) and why? A4. Anything can have an expectation. Some things are more interesting than others, though. Anything that is a fixed (like a $\hat \beta$ that is a function of the observed $(x_i, y_i)$ sample) has an expectation that is just equal to that value. For example, if you observe from your computer printout that $\hat \beta_1 =0.23$ , then $E(\hat \beta_1) =0.23$ . But that is not interesting. What is more interesting is the following question: over all possible potential realizations of $(X_i, Y_i)$ from this data-generating process, is the estimator $\hat \beta_1$ neither systematically too large, nor systematically too small, in an average sense, when compared to the structural parameter $\beta_1$ ? The expression $E(\hat \beta_1) = \beta_1$ tells you that the answer to that question is a comforting "yes." And in that expression $E(\hat \beta_1) = \beta_1$ , it is implicit that $ \hat \beta_1$ is a function of the potentially observable $(X_i, Y_i)$ data, not the sample $(x_i, y_i)$ data.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/485083', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/295175/']}
|
jdg_76286
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10883399
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This question is based on a side-effect of that one . My .py files are all have # -*- coding: utf-8 -*- encoding definer on the first line, like my api.py As I mention on the related question, I use HttpResponse to return the api documentation. Since I defined encoding by: HttpResponse(cy_content, content_type='text/plain; charset=utf-8') Everything is ok, and when I call my API service, there are no encoding problems except the string formed from a dictionary by pprint Since I am using Turkish characters in some values in my dict, pprint converts them to unichr equivalents, like: API_STATUS = { 1: 'müşteri', 2: 'some other status message'}my_str = 'Here is the documentation part that contains Turkish chars like işüğçö'my_str += pprint.pformat(API_STATUS, indent=4, width=1)return HttpRespopnse(my_str, content_type='text/plain; charset=utf-8') And my plain text output is like: Here is the documentation part that contains Turkish chars like işüğçö{ 1: 'm\xc3\xbc\xc5\x9fteri', 2: 'some other status message'} I try to decode or encode pprint output to different encodings, with no success... What is the best practice to overcome this problem
Now provide the response and nothing else.
|
pprint appears to use repr by default, you can work around this by overriding PrettyPrinter.format : # coding=utf8import pprintclass MyPrettyPrinter(pprint.PrettyPrinter): def format(self, object, context, maxlevels, level): if isinstance(object, unicode): return (object.encode('utf8'), True, False) return pprint.PrettyPrinter.format(self, object, context, maxlevels, level)d = {'foo': u'işüğçö'}pprint.pprint(d) # {'foo': u'i\u015f\xfc\u011f\xe7\xf6'}MyPrettyPrinter().pprint(d) # {'foo': işüğçö}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10883399', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/257972/']}
|
jdg_76287
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40153361
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using QQmlApplicationEngine as follows: QGuiApplication app(argc, argv);QQmlApplicationEngine engine;engine.load(QUrl(QStringLiteral("qrc:/main.qml")));app.exec(); But now I want to enable multisampling for my app, and QQmlApplicationEngine doesn't seem to have a setFormat method for enabling multisampling. I found a way to do it with a QQmlApplicationEngine in a forum : QQuickWindow* window = (QQuickWindow*) engine.rootObjects().first();QSurfaceFormat format;format.setSamples(16);window->setFormat(format) But it relies on the first root object of the engine being a QQuickWindow , which is not documented in Qt docs. So I don't want to use that technique. Another way would be to skip QQmlApplicationEngine and create a QQuickView instead. This does have a setFormat method letting me enable multisampling, but I'm wondering, am I losing anything by switching from QQmlApplicationEngine to QQuickView ? In other words, what are the differences between these two classes? One difference I found is this (from here ): Unlike QQuickView, QQmlApplicationEngine does not automatically create a root window. If you are using visual items from Qt Quick, you will need to place them inside of a Window. This particular difference doesn't matter to me. Any other differences?
Now provide the response and nothing else.
|
Headline: QQmlApplicationEngine is newer and more powerful than QQuickView. QQmlApplicationEngine exposes some central application functionality to QML, which QQuickView application would normally control from C++: Connecting Qt.quit() to QCoreApplication::quit() Automatically loads translation files from an i18n directory adjacent to the main QML file. Automatically sets an incubation controller if the scene contains a QQuickWindow. Automatically sets a QQmlFileSelector as the url interceptor, applying file selectors to all QML files and assets. Ref: Qt docs At the time when QQmlApplicationEngine was introduced, the Qt Blog had this to say : In Qt 5.0 we generally created Qt Quick applications by declaring a QQuickView in C++ and setting the base url on it. The drawback of that approach is that you have to use C++ to set properties like width, height etc. In Qt 5.1 we encourage using Window or ApplicationWindow as the root item of your application, giving complete control to Qt Quick, so we are now introducing the QQmlApplicationEngine to make this use case a little bit simpler. The QmlApplicationEngine is all you need to set up your qt quick window, pick up the right translation files and it implicitly connects the application quit() signal to your root window. Qt Quick Controls 2.0 is able to make use of this extra application control, through the new item ApplicationWindow , which: is similar to the regular QQuickWindow, but adds support for setting a window specific MenuBar, ToolBar and StatusBar in QML. makes it convenient to add a header and footer item to the window. makes it possible to control the window's properties, appearance and layout from QML. supports popups via its overlay property, which ensures that popups are displayed above other content and that the background is dimmed when a modal popup is visible. So, in order to use some of the Qt Quick Controls features like MenuBar and Popup, we need to: use ApplicationWindow as our top-level QML item instead of Rectangle or Item use the new QQmlApplicationEngine to load the QML from C++ instead of the old QQuickView.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40153361', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/122687/']}
|
jdg_76288
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
332186
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm interested in a purely mathematical argument as to why when I have a constant velocity perpendicular to a constant force, that it produces uniform circular motion? If I was given this information, how would I derive that it was uniform circular motion mathematically?
Now provide the response and nothing else.
|
Since speed is constant, we can write the velocity as $$\vec{v} = \begin{bmatrix}v_x(t) \\ v_y(t)\end{bmatrix} = v\begin{bmatrix}\cos(\theta_v(t)) \\ \sin(\theta_v(t))\end{bmatrix}$$ for some unknown function of time $\theta_v(t)$ . This is because $$|\vec{v}| = v = \sqrt{v_x^2 + v_y^2} = const.$$ Similarly, for a constant magnitude force $$\vec{a} = \vec{F}/m = a\begin{bmatrix}\cos(\theta_a(t)) \\ \sin(\theta_a(t))\end{bmatrix}$$ for another unknown function $\theta_a(t).$ With the force perpendicular to velocity and with $\vec{v}$ and $\vec{a}$ non-zero, we have $$\vec{a}\cdot\vec{v} = av\left(\cos(\theta_a(t))\cos(\theta_v(t)) + \sin(\theta_a(t))\sin(\theta_v(t))\right) = 0$$ $$\cos(\theta_a(t))\cos(\theta_v(t)) + \sin(\theta_a(t))\sin(\theta_v(t)) = 0.$$ This expression simplifies to $$\cos(\theta_a(t) - \theta_v(t)) = 0,$$ which implies that $$\theta_a(t) - \theta_v(t) = \left(n + \frac{1}{2}\right)\pi$$ for some integer $n$ . So, $$\cos(\theta_a(t)) = \cos\left(\left(n + \frac{1}{2}\right)\pi + \theta_v(t)\right) = -\sin(\theta_v(t))$$ $$\sin(\theta_a(t)) = \sin\left(\left(n + \frac{1}{2}\right)\pi + \theta_v(t)\right) = \phantom{-}\cos(\theta_v(t))$$ From the definition of acceleration, we have \begin{align}\vec{a} &= \frac{d\vec{v}}{dt} \\a\begin{bmatrix}\cos(\theta_a(t)) \\ \sin(\theta_a(t))\end{bmatrix} &= v\frac{d\theta_v}{dt}\begin{bmatrix}-\sin(\theta_v(t)) \\ \phantom{-}\cos(\theta_v(t))\end{bmatrix}.\end{align} Using the $\cos$ and $\sin$ relations above, \begin{align}a\begin{bmatrix}-\sin(\theta_v(t)) \\ \phantom{-}\cos(\theta_v(t))\end{bmatrix} &= v\frac{d\theta_v}{dt}\begin{bmatrix}-\sin(\theta_v(t)) \\ \phantom{-}\cos(\theta_v(t))\end{bmatrix}.\end{align} Thus, $$a = v\frac{d\theta_v}{dt}$$ Since, $a$ and $v$ are both constant, $d\theta_v/dt$ must also be constant. $$\frac{d\theta_v}{dt} = \omega$$ $$\theta_v(t) = \omega t + \theta_0$$ for constant $\theta_0$ and $\omega = a/v$ . Substituting back into the original $\vec{v}$ equation results in $$\vec{v} = v\begin{bmatrix}\cos(\omega t + \theta_0) \\ \sin(\omega t + \theta_0)\end{bmatrix}.$$ Integrating yields $$\vec{x} = \frac{v}{\omega}\begin{bmatrix}\phantom{-}\sin(\omega t + \theta_0) \\ -\cos(\omega t + \theta_0)\end{bmatrix} + \begin{bmatrix}x_0 \\y_0\end{bmatrix}$$ which is circular motion about the point $(x_0, y_0)$ with a radius of $$r = \frac{v}{\omega} = \frac{v^2}{a} = \frac{mv^2}{F}.\qquad\left(\textrm{using }\omega=\frac{a}{v}\right)$$ This gives us the formula for centripetal force $$F = \frac{mv^2}{r}$$ and acceleration $$a = \frac{v^2}{r}.$$ Addendum, the first: The following assumptions are redundant: Speed is constant. Force is perpendicular to velocity. Since $$\frac{d(|\vec{v}|^2)}{dt} = \frac{d(\vec{v}\cdot\vec{v})}{dt} = 2\vec{v}\cdot\frac{d\vec{v}}{dt} = 2\vec{v}\cdot\vec{a}$$ So, if (and only if) speed is constant, then the acceleration vector will be zero or perpendicular to the velocity vector. Addendum the second: This derivation is only valid in 2D. In 3D, you can add a velocity component that is perpendicular to the original velocity and the force (a constant $v_z$ , for example). This results in helical motion. Charged particles in a uniform magnetic field do this. The magnetic force is $\vec{F} = q\vec{v} \times \vec{B}$ , which is always perpendicular to the velocity of the particle. In fact, as long as the magnetic field is of constant magnitude, the force will be of constant magnitude, and the particle will follow a helical path around a curved line that follows the curved magnetic field lines.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/332186', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/111505/']}
|
jdg_76289
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41505
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The problem of representing bound variables in syntax, and in particular that of capture-avoiding substitution, is well-known and has a number of solutions: named variables with alpha-equivalence, de Bruijn indices, locally namelessness, nominal sets, etc. But there seems to be another fairly obvious approach, which I have nevertheless not seen used anywhere. Namely, in the basic syntax we have only one "variable" term, written say $\bullet$, and then separately we give a function that maps each variable to a binder in whose scope it lies. So a $\lambda$-term like $$ \lambda x. (\lambda y. x y)$$ would be written $\lambda. (\lambda. \bullet\bullet)$, and the function would map the first $\bullet$ to the first $\lambda$ and the second $\bullet$ to the second $\lambda$. So it's kind of like de Bruijn indices, only instead of having to "count $\lambda$s" as you back out of the term to find the corresponding binder, you just evaluate a function. (If representing this as a data structure in an implementation, I would think of equipping each variable-term object with a simple pointer/reference to the corresponding binder-term object.) Obviously this is not sensible for writing syntax on a page for humans to read, but then neither are de Bruijn indices. It seems to me that it makes perfect sense mathematically, and in particular it makes capture-avoiding substitution very easy: just drop in the term you are substituting and take the union of the binding functions. It's true that it doesn't have a notion of "free variable", but then (again) neither do de Bruijn indices really; in either case a term containing free variables is represented a term with a list of "context" binders in front. Am I missing something and there is some reason this representation doesn't work? Are there problems that make it so much worse than the others that it's not worth considering? (The only problem I can think of right now is that the set of terms (together with their binding functions) is not inductively defined, but that doesn't seem insurmountable.) Or are there actually places where it has been used?
Now provide the response and nothing else.
|
Andrej's and Łukasz's answers make good points, but I wanted to add additional comments. To echo what Damiano said, this way of representing binding using pointers is the one suggested by proof-nets, but the earliest place where I saw it for lambda terms was in an old essay by Knuth: Donald Knuth (1970). Examples of formal semantics. In Symposium on Semantics of Algorithmic Languages , E. Engeler (ed.), Lecture Notes in Mathematics 188, Springer. On page 234, he drew the following diagram (which he called an "information structure") representing the term $(\lambda y.\lambda z.yz)x$: This kind of graphical representation of lambda terms was also studied independently (and more deeply) in two theses in the early 1970s, both by Christopher Wadsworth (1971, Semantics and Pragmatics of the Lambda-Calculus ) and by Richard Statman (1974, Structural Complexity of Proofs ). Nowadays, such diagrams are often referred to as "λ-graphs" (see for example this paper ). Observe that the term in Knuth's diagram is linear , in the sense that every free or bound variable occurs exactly once -- as others have mentioned, there are non-trivial issues and choices to be made in trying to extend this kind of representation to non-linear terms. On the other hand, for linear terms I think it's great! Linearity precludes the need for copying, and so you get both $\alpha$-equivalence and substitution "for free". These are the same advantages as HOAS, and I actually agree with Rodolphe Lepigre that there is a connection (if not exactly an equivalence) between the two forms of representation: there is a sense in which these graphical structures may be naturally interpreted as string diagrams, representing endomorphisms of a reflexive object in a compact closed bicategory (I gave a brief explanation of that here ).
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/41505', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/28847/']}
|
jdg_76290
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1096505
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was thinking of a solution to this problem. My input: 1. Have a tail pointer which points to the last node. 2. Once you know the last pointer you can easily add a new node next to it. Void Insert(Node N){ if (head == null) // linked list is empty { head = N; tail = N; tail.Next = head; } else { Node temp = tail.Next; // since this is circular tail will point to head Tail.Next = N; N.Next = temp; // correct tail = N; }} Can any1 think of better solution without using tail pointer? Also as stated in problem without traversing?This is an interview question, just need some inputs to find the best solution.
Now provide the response and nothing else.
|
I guess that you have a singly linked circular list, and only a pointer to one element (call this the head node). Each node of the list thus consists of a value and a pointer to the next element. The tail node points to the head node. Inserting a node directly after the head node is trivial. I guess that you want to insert a node directly before the head node. For that, you need the new node to become the last node, which is pointed to from the former last node, and which points to the head node. Now, you want to avoid traversing the list to find the last node. This means that you cannot access that last node and thus, not modify its pointer. The only other way to make this work is to modify the place that last node points to, i.e.: insert a new node after the head node copy the current head node's value to that new node put the new value into the current head node make the new node the new head node
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1096505', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/131448/']}
|
jdg_76291
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51067610
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
To anyone who has done Google's Foobar challenge before, have you ever encountered an error like this? Verifying solution...{"bytes" : "CAAaIgogQ291bGQgbm90IGZpbmQgJ2Fuc3dlcicgZnVuY3Rpb24"} I've tested my solution in Visual Studio, and it works fine, and it gives accurate output. I only have 36 hours remaining to submit my solution, and this is the final challenge, so I would really like to get credit for completing it. Here's the code I'm trying to verify: from fractions import *from math import factorial as facdef cycle_index(n): return [(coeff(term), term) for term in foo(n, n)]def foo(n, lim): soln_set = [] if n > 0: for x in range(lim, 0, -1): if x == 1: soln_set.append([(1, n)]) else: for y in range(int(n / x), 0, -1): recurse = foo(n - x * y, x - 1) if len(recurse) == 0: soln_set.append([(x, y)]) for soln in recurse: soln_set.append([(x, y)] + soln) return soln_setdef coeff(term): val = 1 for x, y in term: val *= fac(y) * x ** y return Fraction(1, val)def cross(cycle_a, cycle_b): term = [] for len_a, freq_a in cycle_a: for len_b, freq_b in cycle_b: lcm = len_a * len_b / gcd(len_a, len_b) term.append((lcm, int(len_a * freq_a * len_b * freq_b / lcm))) return termdef answer(w, h, s): total = 0 cycidx_cols = cycle_index(w) cycidx_rows = cycle_index(h) for col_coeff, col_cycle in cycidx_cols: for row_coeff, row_cycle in cycidx_rows: coeff = col_coeff * row_coeff cycle = cross(col_cycle, row_cycle) value = 1 for _, power in cycle: value *= s ** power total += coeff * value return total I found someone else's solution last week and verified it through Foobar, but I wanted to write my own to get a deeper understanding. I've compared results side-by-side, and they are exact, so I know my code gives accurate results. Out of curiosity, I just retried verifying the other person's solution again, and now I am getting the same error with a slightly different output even though it worked just fine when I tried last week: Verifying solution...{"bytes" : "CAEQARABEAEQARABEAEQARABEAEQAQ"} I'm not sure where else to go. I was so excited that I had come up with my own solution to the challenge, but now I'm panicking that it won't matter. Any suggestions? UPDATE - June 29, 2018, 6:00pm CST The deadline passed for me last night, and I was unable to submit my code in time. I made sure to use the recruitme command before time ran out just in case I got booted out. However, I am still able to view my current status, and it let me request another Level 5 challenge. So, I will be checking periodically to see if the test cases start showing up again, and I will be sure to update this when they do. I would highly recommend that anyone on lower levels wait until I confirm that this issue is fixed before attempting to request another challenge. UPDATE - June 30, 2018, 5:00am CST According to the pattern found by @RobertAnsel, I have completed the challenge. I confirmed this pattern by hardcoding the answer function to output solutions to the test cases given in the instructions. The resulting errors matched exactly with the predicted output. I also found an interesting thread on Google's Support Forum ( linked here ) where something similar happened to a bunch of Foobar challengers. It seems like their error was fixed by Google after about three days, but unfortunately, it also looks like a lot of people who timed out were not given another chance. Nevertheless, this will probably resolve itself within the next day or two. I will continue trying to verify and submit my solution until I am successful. UPDATE - July 4, 2018, 12:00am CST This issue seems to have been more or less resolved. Refer to the chosen answer for more details. Thanks a ton to @RobertAnsel for all the help! Some additional info: I was able to verify my current challenge, but upon submission, I was told that my time for the problem had expired. After logging in again, I was able to request a new challenge. I noticed I was also able to use the recruitme command again. I am not sure if this means they did not receive the first request or if you are allowed to use that command multiple times. Regardless, I am relieved that I am able to continue forward with Foobar. Best of luck to the rest of you!
Now provide the response and nothing else.
|
This may not be the answer you're looking for, but that first "bytes" string is a base64 encoding of the following error message: "Could not find 'answer' function". I've done the Foobar challenge myself, and that should only happen if you are trying to verify a file that is missing a defined "answer" function, which obviously you are not. Are you certain that the spec they provide has 3 arguments vs an array with 3 items? The second message (CAEQARABEAEQARABEAEQARABEAEQAQ), while valid base64, doesn't map to ASCII or UTF-8. After some closer analysis of some of the other strings others have posted, I've concluded that this is the base64 encoded version of the test output. It isn't very human readable, but I believe it is 11 2-byte chunks, the first of which is unhelpful, but the following 10 are the test result for each of the corresponding test cases. In the case of this message it converts to binary as: 0000100000000001 <- unknown pre-pended info0001000000000001 <- passing test 10001000000000001 <- passing test 20001000000000001 <- passing test 30001000000000001 <- passing test 40001000000000001 <- passing test 50001000000000001 <- passing test 60001000000000001 <- passing test 70001000000000001 <- passing test 80001000000000001 <- passing test 90001000000000001 <- passing test 10 The '1's at the end of each of these lines indicates that all 10 tests are passing. A failing test case is represented by the following string: 0001000000000000 <- failing test case This should help you (and others) continue testing to achieve fully passing tests (you can complete your own analysis with tools like this one: https://cryptii.com/base64-to-binary ), but unfortunately this will not help you move forward with your final submission until Google remediates the issue on their end. UPDATE: July 2, 8PM PDT After reaching out to a couple of Google recruiters about the issue they were able to confirm that the issue was identified and is believed to be resolved today.If you re-save your code after making a change to it (whitespace should be fine), you should be able to test and submit correctly. Alternatively, you may now be able to request a new challenge anyway.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51067610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7214433/']}
|
jdg_76292
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
70368821
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I make my app bar buttons (notification button and, I don't know, menu?) look like that?I mean, how can I copy this border or background . Maybe I should use something else than appear? Now I am using this: class _CustomAppBarState extends State<CustomAppBar> { @override Widget build(BuildContext context) { return AppBar( backgroundColor: Colors.transparent, elevation: 0, leading: IconButton( icon: Icon( Icons.menu, color: Color(0xff736f71), ), onPressed: () {}, ), actions: [ IconButton( icon: Icon(Icons.add), color: Color(0xff736f71), onPressed: () {}, ), ], ); }}```
Now provide the response and nothing else.
|
Upgrading to react-scripts v5 is not always the solution. The full reason for this bug is described here . In short here is a brief summary: The error is as a result of react-error-overlay ( which many people would never have heard of because it is a dependency of react-scripts ). This package's dependencies were update to support webpack v5, which unfortunately is not compatible with react-scripts v4. Method 1 (Override a package version) If upgrading to react-scripts v5 is not working for you, you can also try another workaround which is to pin react-error-overlay to version 6.0.9 : Delete your yarn.lock or package-lock.json , then install your dependencies again. Using yarn yarn will take the resolutions field into consideration out of the box. "resolutions": { "//": "See https://github.com/facebook/create-react-app/issues/11773", "react-error-overlay": "6.0.9"} For yarn workspaces , place the above resolution in the root package.json , not in the problematic folder. See this issue comment. Using npm ( >=v8.3.0 ) The equivalent of resolutions for npm is overrides . "overrides": { "react-error-overlay": "6.0.9"}, Using npm (<8.3.0) You need to make sure npm uses the resolutions field when you run npm install . To automate the installation, see this answer Method 2 (Use a webpack plugin) Yet another (not so popular) workaround is to use a webpack plugin: plugins:[ new webpack.DefinePlugin({ process: {env: {}} })] If you use craco (v6.3.0+), you just need to modify your craco.config.js file to add that plugin: { ... webpack: { plugins: { add: [ new webpack.DefinePlugin({ process: {env: {}} }) ] } }} For customize-cra users, see this answer or this github comment . This last method is not popular because not many CRA users ever have to touch webpack directly to work with react.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/70368821', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15754194/']}
|
jdg_76293
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
327755
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have installed Debian 8 with sysvinit and Xfce on a ThinkPad X220. The video card is Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) / Integrated Graphics Chipset: Intel(R) HD Graphics 3000 . Kernel versions available to me are 4.7 from backports and 3.16 (heavily patched by Canonical kernel team) from stable. When I hibernate using pure kernel ( echo disk > /sys/power/state done by pm-hibernate run by xfce4-pm-helper) and XScreenSaver turns the screen off, on resume the screen is still turned off (not just the backlight: I checked with a flashlight), and nothing seems to turn it on again (I tried Ctrl + Alt + F* suggested at ThinkWiki and Alt + SysRq + V ), but the remaining system seems to be working (at least, when I do an Alt + SysRq + E , I , S , U , B , I can find messages from a successful resume and syslogd being terminated by signal 15 in /var/log/syslog ). I'm able to connect via ssh, but both chvt 1; chvt 7 and various combinations of export DISPLAY=:0; xrandr --output LVDS1 --off; xrandr --output LVDS1 --auto do nothing. I disabled LVDS1 and tried to reenable it, I got: xrandr: Configure crtc 0 failed and the following lines in dmesg : [ 390.432051] [drm:drm_framebuffer_remove [drm]] *ERROR* failed to reset crtc ffff9ae6caa2f000 when fb was deleted[ 390.432066] [drm:drm_plane_force_disable [drm]] *ERROR* failed to disable plane with busy fb Some people suggested ddccontrol , but it doesn't detect DDC on my system. Is screen is on while hibernating, it stays working after resume. If I install uswsusp , hibernate works (as a bonus, I get some form of compression and ability to cancel hibernating at the last second), but only most of the time. The usual resume procedure looks like this: resume: Loading image data pages (in default video mode) Video mode is switched to native resolution, screen is filled with noise (full screen when EFI-booting, a small horizontal stripe on the top when BIOS-booting) Loud click from speakers, screen momentarily turns black with s2disk: returned to userspace , then I get my XScreenSaver lock dialog. Sooner or later, right after the Loading data pages screen turns black and the laptop resets (I see the BIOS boot logo). I installed grub-efi on a thumbdrive (to avoid repartitioning) for the sake of efi-backed pstore. For a week, I hibernated and resumed the laptop once or twice per day without problems and almost beleived that EFI has solved the problem, but then the failure occured again, and no logs were found in /sys/fs/pstore ( pstore: Registered efi as persistent store backend is visible in dmesg). I think that when resuming successfully for the last time before the next one failed the screen was not fully filled with noise and a black stripe was wisible on the bottom before I got returned to userspace and X screen back. As far as I know, suspend to RAM works flawlessly both with echo mem > /sys/power/state and s2ram . I do that more frequently than hibernating (several times per day) and so far, no glitches have occured. For now, I set up XScreenSaver to never turn the screen off, thus preventing the most common cause of the failure (XScreenSaver locking and turning screen off before hibernation), but I'm still able to encounter the glitch by closing the lid fast enough after choosing "hibernate". UPD : I've been using pure kernel hibernate for some time now, and I've just encountered the same reset after loading saved image which has been happening to me when using uswsusp. Apparently I've never used it on this X220 for long enough for it to happen before. So, as it turns out, I have no reliable ways to hibernate: both echo disk > /sys/power/state and s2disk fail after 5-7 hibernations, but pure kernel doesn't restore video card state properly, too. What else can I try to make some form of hibernate working?
Now provide the response and nothing else.
|
Here's a slightly more idiomatic AWK variant to process current.txt ( steve's second answer is even more idiomatic!): awk '/^To(|min|max) / { print $1, $2 + 1; next } 1' current.txt This looks for lines starting with To , followed by nothing, min , or max , followed by a space; for matching lines it prints the first field and the second field, incremented, separated by the default output field separator (space). Then it skips to the next line. All other lines are printed as-is ( 1 is a shortcut for this in AWK). Note that overwriting the file with the new values is probably not a good idea: you won't know whether the values have been corrected or not... If you retrieve the file from the device every time then that doesn't apply. The same reasoning applies to ws2308.log , so let's just process it in its entirety every time: $ awk 'NF >= 5 { $5 = $5 + 1 } 1' ws2308.log20161203150600 2016-Dec-03 15:06:00 11.8 -0.1 -3.2 65 87 0.0 157.5 SSE -1.1 569.80 1015.70020161203152100 2016-Dec-03 15:21:00 12.3 -0.1 -3.2 64 87 0.0 157.5 SSE -1.1 569.80 1015.60020161203153600 2016-Dec-03 15:36:00 12.2 -0.2 -3.3 64 87 0.0 135.0 SE -1.2 569.80 1015.700 If you want only the last line: $ awk 'NF >= 5 { $5 = $5 + 1; lastline = $0 } END { print lastline }' ws2308.log20161203153600 2016-Dec-03 15:36:00 12.2 -0.2 -3.3 64 87 0.0 135.0 SE -1.2 569.80 1015.700 or if you want the file with only the last line changed : $ awk 'length(prevline) > 0 { print prevline } NF >= 5 { prevline = $0; $5 = $5 + 1; lastline = $0 } END { print lastline }' ws2308.log20161203150600 2016-Dec-03 15:06:00 11.8 -1.1 -3.2 65 87 0.0 157.5 SSE -1.1 569.80 1015.700 20161203152100 2016-Dec-03 15:21:00 12.3 -1.1 -3.2 64 87 0.0 157.5 SSE -1.1 569.80 1015.600 20161203153600 2016-Dec-03 15:36:00 12.2 -0.2 -3.3 64 87 0.0 135.0 SE -1.2 569.80 1015.700
|
{}
|
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/327755', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/202461/']}
|
jdg_76294
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7323465
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My url looks like this: customer/login?ReturnUrl=home In the login view, I have used this pattern of code which works fine. @using(Html.BeginForm()){ ...} This magically generates following html <form action="customer/login?ReturnUrl=home" method="post"> But now, I need to add an attribute (e.g., data-id="something" ) in the form. How can I do that? If I don't have any query string, I know I can do something like this: @using(Html.BeginForm(action, controller, FormMethod.Post, new { data_id="something" })) But don't know how to add querystring which should be in html: <form action="customer/login?ReturnUrl=home" method="post" data-id="something"> I thought about using <form> directly but don't know how to specify querystring which is variable. And I have no idea how to achieve it with Html.BeginForm . Any tip would be appreciated. RESOLUTION: For now, I used <form> with following hint How to get current url value in View . The resulting view looks like <form action="@Request.Url.PathAndQuery" data-id="something" method="POST"> But it would be nice to have an overloaded method of BeginForm for this.
Now provide the response and nothing else.
|
I guess this doesn't directly answer the question, but why not just use a plain old form tag? <form action='customer/[email protected]["ReturnUrl"]' method="post" data-id="something"> Alternatively, you can create a custom HtmlHelperExtension that renders a form with path and querystring. In this HtmlHelperExtension you can iterate through your querystring values and populate the routeValueDictionary which you then pass to a Html.BeginForm constructor. If you don't want something so extensible you can just use the overloaded constructor of Html.BeginForm using @Html.BeginForm("login", "customer", new {ReturnUrl = @Request.QueryString["ReturnUrl"]},FormMethod.Post, new {data-id="something"});
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7323465', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/749973/']}
|
jdg_76295
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9479607
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have set every encoding set variable I can figure out to utf8 . In database.yml : development: &development adapter: mysql2 encoding: utf8 In my.cnf : [client]default-character-set = utf8[mysqld]default-character-set = utf8skip-character-set-client-handshakecharacter-set-server = utf8collation-server = utf8_general_ciinit-connect = SET NAMES utf8 And if I run mysql client in terminal: mysql> show variables like 'character%';+--------------------------+----------------------------+| Variable_name | Value |+--------------------------+----------------------------+| character_set_client | utf8 || character_set_connection | utf8 || character_set_database | utf8 || character_set_filesystem | binary || character_set_results | utf8 || character_set_server | utf8 || character_set_system | utf8 || character_sets_dir | /usr/share/mysql/charsets/ |+--------------------------+----------------------------+mysql> show variables like 'collation%';+----------------------+-----------------+| Variable_name | Value |+----------------------+-----------------+| collation_connection | utf8_general_ci || collation_database | utf8_general_ci || collation_server | utf8_general_ci |+----------------------+-----------------+ But it's to beat the air. When I insert utf8 data from Rails app, it finally becomes ????????????. What do I miss?
Now provide the response and nothing else.
|
Check not global settings but when you are connected to specific database for application. When you changed settings for mysql you have also change settings for your app database. Simple way to check it is to log to mysql into app db: mysql app_db_production -u db_user -p or rails command: rails dbconsole production For my app it looks like this: mysql> show variables like 'character%';+--------------------------+----------------------------+| Variable_name | Value |+--------------------------+----------------------------+| character_set_client | utf8 || character_set_connection | utf8 || character_set_database | latin1 || character_set_filesystem | binary || character_set_results | utf8 || character_set_server | utf8 || character_set_system | utf8 || character_sets_dir | /usr/share/mysql/charsets/ |+--------------------------+----------------------------+8 rows in set (0.00 sec)mysql> show variables like 'collation%';+----------------------+-------------------+| Variable_name | Value |+----------------------+-------------------+| collation_connection | utf8_general_ci || collation_database | latin1_swedish_ci || collation_server | utf8_general_ci |+----------------------+-------------------+3 rows in set (0.00 sec) Command for changing database collation and charset: mysql> alter database app_db_production CHARACTER SET utf8 COLLATE utf8_general_ci ;Query OK, 1 row affected (0.00 sec) And remeber to change charset and collation for all your tables: ALTER TABLE tablename CHARACTER SET utf8 COLLATE utf8_general_ci; # changes for new recordsALTER TABLE tablename CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci; # migrates old records Now it should work.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9479607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/590083/']}
|
jdg_76296
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3107514
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a function which removes html tags and attributes which are not in a white list.I have the following HTML: <b>first text </b><b>second text here <a>some text here</a> <a>some text here</a> </b><a>some twxt here</a> I am using HTML agility pack and the code I have so far is: static List<string> WhiteNodeList = new List<string> { "b" };static List<string> WhiteAttrList = new List<string> { };static HtmlNode htmlNode;public static void RemoveNotInWhiteList(out string _output, HtmlNode pNode, List<string> pWhiteList, List<string> attrWhiteList){ // remove all attributes not on white list foreach (var item in pNode.ChildNodes) { item.Attributes.Where(u => attrWhiteList.Contains(u.Name) == false).ToList().ForEach(u => RemoveAttribute(u)); } // remove all html and their innerText and attributes if not on whitelist. //pNode.ChildNodes.Where(u => pWhiteList.Contains(u.Name) == false).ToList().ForEach(u => u.Remove()); //pNode.ChildNodes.Where(u => pWhiteList.Contains(u.Name) == false).ToList().ForEach(u => u.ParentNode.ReplaceChild(ConvertHtmlToNode(u.InnerHtml),u)); //pNode.ChildNodes.Where(u => pWhiteList.Contains(u.Name) == false).ToList().ForEach(u => u.Remove()); for (int i = 0; i < pNode.ChildNodes.Count; i++) { if (!pWhiteList.Contains(pNode.ChildNodes[i].Name)) { HtmlNode _newNode = ConvertHtmlToNode(pNode.ChildNodes[i].InnerHtml); pNode.ChildNodes[i].ParentNode.ReplaceChild(_newNode, pNode.ChildNodes[i]); if (pNode.ChildNodes[i].HasChildNodes && !string.IsNullOrEmpty(pNode.ChildNodes[i].InnerText.Trim().Replace("\r\n", ""))) { HtmlNode outputNode1 = pNode.ChildNodes[i]; for (int j = 0; j < pNode.ChildNodes[i].ChildNodes.Count; j++) { string _childNodeOutput; RemoveNotInWhiteList(out _childNodeOutput, pNode.ChildNodes[i], WhiteNodeList, WhiteAttrList); pNode.ChildNodes[i].ReplaceChild(ConvertHtmlToNode(_childNodeOutput), pNode.ChildNodes[i].ChildNodes[j]); i++; } } } } // Console.WriteLine(pNode.OuterHtml); _output = pNode.OuterHtml;} private static void RemoveAttribute(HtmlAttribute u){ u.Value = u.Value.ToLower().Replace("javascript", ""); u.Remove();}public static HtmlNode ConvertHtmlToNode(string html){ HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument(); doc.LoadHtml(html); if (doc.DocumentNode.ChildNodes.Count == 1) return doc.DocumentNode.ChildNodes[0]; else return doc.DocumentNode;} The output I am tryig to achieve is <b>first text </b><b>second text here some text here some text here </b>some twxt here That means that I only want to keep the <b> tags. The reason i'm doing this is because Some of the users do cpoy-paste from MS WORD into ny WYSYWYG html editor. Thanks.!
Now provide the response and nothing else.
|
heh, apparently I ALMOST found an answer in a blog post someone made.... using System.Collections.Generic;using System.Linq;using HtmlAgilityPack;namespace Wayloop.Blog.Core.Markup{ public static class HtmlSanitizer { private static readonly IDictionary<string, string[]> Whitelist; static HtmlSanitizer() { Whitelist = new Dictionary<string, string[]> { { "a", new[] { "href" } }, { "strong", null }, { "em", null }, { "blockquote", null }, }; } public static string Sanitize(string input) { var htmlDocument = new HtmlDocument(); htmlDocument.LoadHtml(input); SanitizeNode(htmlDocument.DocumentNode); return htmlDocument.DocumentNode.WriteTo().Trim(); } private static void SanitizeChildren(HtmlNode parentNode) { for (int i = parentNode.ChildNodes.Count - 1; i >= 0; i--) { SanitizeNode(parentNode.ChildNodes[i]); } } private static void SanitizeNode(HtmlNode node) { if (node.NodeType == HtmlNodeType.Element) { if (!Whitelist.ContainsKey(node.Name)) { node.ParentNode.RemoveChild(node); return; } if (node.HasAttributes) { for (int i = node.Attributes.Count - 1; i >= 0; i--) { HtmlAttribute currentAttribute = node.Attributes[i]; string[] allowedAttributes = Whitelist[node.Name]; if (!allowedAttributes.Contains(currentAttribute.Name)) { node.Attributes.Remove(currentAttribute); } } } } if (node.HasChildNodes) { SanitizeChildren(node); } } }} I got HtmlSanitizer from here Apparently it does not strip th tags, but removes the element altoghether. OK, here is the solution for those who will need it later. public static class HtmlSanitizer { private static readonly IDictionary<string, string[]> Whitelist; private static List<string> DeletableNodesXpath = new List<string>(); static HtmlSanitizer() { Whitelist = new Dictionary<string, string[]> { { "a", new[] { "href" } }, { "strong", null }, { "em", null }, { "blockquote", null }, { "b", null}, { "p", null}, { "ul", null}, { "ol", null}, { "li", null}, { "div", new[] { "align" } }, { "strike", null}, { "u", null}, { "sub", null}, { "sup", null}, { "table", null }, { "tr", null }, { "td", null }, { "th", null } }; } public static string Sanitize(string input) { if (input.Trim().Length < 1) return string.Empty; var htmlDocument = new HtmlDocument(); htmlDocument.LoadHtml(input); SanitizeNode(htmlDocument.DocumentNode); string xPath = HtmlSanitizer.CreateXPath(); return StripHtml(htmlDocument.DocumentNode.WriteTo().Trim(), xPath); } private static void SanitizeChildren(HtmlNode parentNode) { for (int i = parentNode.ChildNodes.Count - 1; i >= 0; i--) { SanitizeNode(parentNode.ChildNodes[i]); } } private static void SanitizeNode(HtmlNode node) { if (node.NodeType == HtmlNodeType.Element) { if (!Whitelist.ContainsKey(node.Name)) { if (!DeletableNodesXpath.Contains(node.Name)) { //DeletableNodesXpath.Add(node.Name.Replace("?","")); node.Name = "removeableNode"; DeletableNodesXpath.Add(node.Name); } if (node.HasChildNodes) { SanitizeChildren(node); } return; } if (node.HasAttributes) { for (int i = node.Attributes.Count - 1; i >= 0; i--) { HtmlAttribute currentAttribute = node.Attributes[i]; string[] allowedAttributes = Whitelist[node.Name]; if (allowedAttributes != null) { if (!allowedAttributes.Contains(currentAttribute.Name)) { node.Attributes.Remove(currentAttribute); } } else { node.Attributes.Remove(currentAttribute); } } } } if (node.HasChildNodes) { SanitizeChildren(node); } } private static string StripHtml(string html, string xPath) { HtmlDocument htmlDoc = new HtmlDocument(); htmlDoc.LoadHtml(html); if (xPath.Length > 0) { HtmlNodeCollection invalidNodes = htmlDoc.DocumentNode.SelectNodes(@xPath); foreach (HtmlNode node in invalidNodes) { node.ParentNode.RemoveChild(node, true); } } return htmlDoc.DocumentNode.WriteContentTo(); ; } private static string CreateXPath() { string _xPath = string.Empty; for (int i = 0; i < DeletableNodesXpath.Count; i++) { if (i != DeletableNodesXpath.Count - 1) { _xPath += string.Format("//{0}|", DeletableNodesXpath[i].ToString()); } else _xPath += string.Format("//{0}", DeletableNodesXpath[i].ToString()); } return _xPath; } } I renamed the node because if I had to parse an XML namespace node it would crash on the xpath parsing.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3107514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/249895/']}
|
jdg_76297
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
61255807
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need a reference to coroutine scope on my android Application . i did the following class TodoApplication : Application() { private var job = Job() private val applicationScope = CoroutineScope(Dispatchers.Main + job) val tasksRepository: TasksRepository get() = ServiceLocator.provideTasksRepository(this, applicationScope)} Is this the way to do it. If so how can I cancel coroutines launched on this scope job.cancel() Application class don't have onDestroy method as Activities
Now provide the response and nothing else.
|
NO , GlobalScope will NOT be suitable for Application instance. As mention here in this article here : There are multiple reasons why you shouldn’t use GlobalScope: Promotes hard-coding values. It might be tempting to hardcode Dispatchers if you use GlobalScope straight-away. That’s a bad practice! It makes testing very hard. As your code is going to be executed in an uncontrolled scope, you won’t be able to manage execution of work started by it. You can’t have a common CoroutineContext for all coroutines built into the scope as we did with the applicationScope . Instead, you’d have to pass a common CoroutineContext to all coroutines started by GlobalScope . So, one solution for this is to create your own scope like this: val applicationScope = CoroutineScope(SupervisorJob() + Dispatchers.Main) But better yet , as pointed out by @Raman in the comments, use the equivalent that's already available to you: kotlinx.coroutines.MainScope() We don’t need to cancel this scope since we want it to remain active as long asthe application process is alive, so we don’t hold a reference to the SupervisorJob.We can use this scope to run coroutines that need a longer lifetime than the callingscope might offer in our app.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/61255807', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5942579/']}
|
jdg_76298
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7642855
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Im having troubles trying to get the autocomplete to work properly. It all looks ok to me but.... <script>$(function () { $("#customer-search").autocomplete({ source: 'Customer/GetCustomerByName', minLength: 3, select: function (event, ui) { $("#customer-search").val(ui.item.label); $("#selected-customer").val(ui.item.label); } });});</script><div><input id="customer-search" /> </div>@Html.Hidden("selected-customer") However when I select an item from the dropdown the value is been applied to the textbox instead of the label. What have I done wrong? If I look at the source using firebug I can see that my hidden field is being updated correctly.
Now provide the response and nothing else.
|
The default behavior of the select event is to update the input with ui.item.value . This code runs after your event handler. Simply return false or call event.preventDefault() to prevent this from occurring. I would also recommend doing something similar for the focus event to prevent ui.item.value from being placed in the input as the user hovers over choices: $("#customer-search").autocomplete({ /* snip */ select: function(event, ui) { event.preventDefault(); $("#customer-search").val(ui.item.label); $("#selected-customer").val(ui.item.label); }, focus: function(event, ui) { event.preventDefault(); $("#customer-search").val(ui.item.label); }}); Example: http://jsfiddle.net/andrewwhitaker/LCv8L/
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/7642855', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/293545/']}
|
jdg_76299
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46229568
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm successfully printing my response as String from my YouTube JSON url, but when I try to serialize through the "items" I get the following error Unhandled exception: type 'List' is not a subtype of type 'Map' of 'json' where List is from dart:core Map is from dart:core Here is my code... class CardInfo { //Constructor String id; String description; String role; //int score; CardInfo.fromJson(Map json) { this.id = json['vieoId']; this.description = json['description']; this.role = json['title']; //this.score = json['score']; }}Future getData() async { String url = 'YouTube url'; var httpClient = createHttpClient(); var response = await httpClient.get(url); Map data = JSON.decode(response.body); //String ip = data['items']; var ci = new CardInfo.fromJson(data['items']); //print(data['items']); print(ci.id); //print(ci.description); //print(ci.role); if (!mounted) return; setState(() {}); } print(data['items'] is printing, but print(ci.id) or any Card Info variables throws the above error. **** Log of print(data) ; {kind: youtube#searchListResponse, etag: "VPWTmrH7dFmi4s1RqrK4tLejnRI/P9wyOxsXEuXOCvj7znCun2-EykU", nextPageToken: CAMQAA, regionCode: US, pageInfo: {totalResults: 1000000, resultsPerPage: 3}, items: [{kind: youtube#searchResult, etag: "VPWTmrH7dFmi4s1RqrK4tLejnRI/Csl1kQhnOsbs0j4_336zJAN176k", id: {kind: youtube#video, videoId: e3pUxU_bE6w}, snippet: {publishedAt: 2017-09-14T09:43:17.000Z, channelId: UCbD8EppRX3ZwJSou-TVo90A, title: [PRISTIN - We Like] KPOP TV Show | M COUNTDOWN 170914 EP.541, description: KPOP Chart Show M COUNTDOWN | EP.541 - PRISTIN - We Like ▷Watch more video clips: http://MCOUNTDOWN-KPOP2017 [Kor Ver.] 프리티 ..., thumbnails: {default: {url: https://i.ytimg.com/vi/e3pUxU_bE6w/default.jpg, width: 120, height: 90}, medium: {url: https://i.ytimg.com/vi/e3pUxU_bE6w/mqdefault.jpg, width: 320, height: 180}, high: {url: https://i.ytimg.com/vi/e3pUxU_bE6w/hqdefault.jpg, width: 480, height: 360}}, channelTitle: Mnet K-POP, liveBroadcastContent: none}}, {kind: youtube#searchResult, etag: "VPWTmrH7dFmi4s1RqrK4tLejnRI/1JCCNBPNbFeusCp_9-pl4i8q5OU", id: {kind: youtube#video, videoId: Cc4hO9RLdl4}, snippet: {publishedAt: 2017-09-14T10:37:29.000Z, channelId: UCbD8EppRX3ZwJSou-TVo90A, title: [EXO - Power] KPOP TV Show | M COUNTDOWN 170914 EP.541, description: KPOP Chart Show M COUNTDOWN | EP.541 - EXO - Power ▷Watch more video clips: http://MCOUNTDOWN-KPOP2017 [Kor Ver.] Power Up! '#EXO' 여기 ..., thumbnails: {default: {url: https://i.ytimg.com/vi/Cc4hO9RLdl4/default.jpg, width: 120, height: 90}, medium: {url: https://i.ytimg.com/vi/Cc4hO9RLdl4/mqdefault.jpg, width: 320, height: 180}, high: {url: https://i.ytimg.com/vi/Cc4hO9RLdl4/hqdefault.jpg, width: 480, height: 360}}, channelTitle: Mnet K-POP, liveBroadcastContent: none}}, {kind: youtube#searchResult, etag: "VPWTmrH7dFmi4s1RqrK4tLejnRI/ZnYC4e5evyfldkM67HsDuV8Yh3E", id: {kind: youtube#video, videoId: BBcOM25wrVo}, snippet: {publishedAt: 2017-08-18T15:21:48.000Z, channelId: UCtFtO4By4czgkYGvEXvJu0A, title: Kpop Banned Dance: MV vs LIVE, description: Kpop Banned Dance: MV vs LIVE Koreas biggest broadcasting companies has strict rules and standards on what lyrics and dances moves can be performed., thumbnails: {default: {url: https://i.ytimg.com/vi/BBcOM25wrVo/default.jpg, width: 120, height: 90}, medium: {url: https://i.ytimg.com/vi/BBcOM25wrVo/mqdefault.jpg, width: 320, height: 180}, high: {url: https://i.ytimg.com/vi/BBcOM25wrVo/hqdefault.jpg, width: 480, height: 360}}, channelTitle: Kpop Corn, liveBroadcastContent: none}}]} *** UPDATE WITH FOR LOOP STATEMENT Here is code for my for loop that's returning a type 'String' is not a subtype of type 'int' of 'index' error... Map data = JSON.decode(response);var videos = data['items'];for (var items in videos['snippet']){ print(items); } Running a loop through items in videos gives me 3 separate entries for the 3 videos I'm looking for - including snippets. Trying to get the individual snippets is failing. Please point me in the right direction.
Now provide the response and nothing else.
|
It looks like data['items'] is a List (i.e. a JSON Array), not a Map . You can use list comprehension methods to help here: final items = (data['items'] as List).map((i) => new CardInfo.fromJson(i));for (final item in items) { print(item.id);}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46229568', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4139698/']}
|
jdg_76300
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
178747
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
How short can we state a principle which is equivalent with the Axiom of Choice under $ZF$? The principle should be a sentence in the language of set theory with only $\in$ and$=$ as extralogical relation signs; I thus disregard solutions that appeal to selectors as the epsilon operator. My motivation is to extend an interpretation of $ZF$ to one of $ZFC$, and a short sentence schema will make my work - simpler and shorter. Update: On the basis of comments I have developed an answer with a challenge as to whether we may improve.
Now provide the response and nothing else.
|
The following paper by Kurt Maes is focused on a version of the question at hand here, namely, finding an equivalent formulation of AC in the language of set theory using the fewest number of quantifiers, rather than merely the shortest length. In his main result, Maes finds a 5-quantifier assertion equivalentto the axiom of choice. The statement is built on the samestatement as in François's answer, but modified to use fewer quantifiers (Maes has five, in comparison with ten for François; but of course François wasn't trying to minimize that quantity). Kurt Maes, A 5-quantifier (\in,=)-expression ZF-equivalent to the Axiom of Choice . Maes's result refuted a conjecture of HarveyFriedman, which in the introduction the author mentions was stated on F.O.M., that itwould not be possible to state a formulation of the axiom ofchoice using only five quantifiers. Please see Maes's solution in his paper. When I first heard about the Maes result (August 2004, apparentlyan earlier draft of his paper—I haven't checked thedifferences), I naturally set myself the task of proving the main resultmyself, without looking at Maes's argument. I would encourage the same of all of you---before reading further, try to express AC in the language of set theory using only five quantifiers! Here is what I had come up with (retrieved after digging around in my old computer files): Theorem. AC is equivalent (in ZF) to the following assertion: $$\forall A\exists B\forall a\in A\, \exists x\forall z$$$$(x \in a \cap B) \wedge (z \in a \cap B \implies z=x) \wedge (a\neq B)$$ $$\text{or }\quad(B \in x) \wedge (x \in A) \wedge (a\neq x)$$ $$\text{or }\quad(B \in A) \wedge (z \notin B).$$ Proof. The point is that in order to get down to only five quantifiers, you have to essentially reuse the quantifiers to cover the various cases. The idea is that clause 1 expresses that $B$ is a selectionset for $A$, when $A$ is a family of disjoint nonempty sets (plus something extra useful when $A$ is not like that). Clause 2 expresses that $A$ has elements that are not disjoint (at least two contain $B$). Clause 3 expresses that $A$ contains the emptyset ($B=\emptyset$). AC easily implies the assertion. If $A$ is a family of disjoint nonempty sets, then we can let $B$ be a selection set for $A$, and verify clause 1. (note: in order to get $(a \neq B)$ in the case that $A$ is a singleton, we can freely add irrelevant elements to $B$ outside of $\bigcup A$.) If $A$ contains non-disjoint sets, we let $B$ be any element which is in at least two elements of $A$, and then we can always be in clause 2, since for any element of $A$ we can find another element of $A$ containing $B$. Finally, if $A$ contains the empty set, we can set $B=\emptyset$, and verify always clause 3. Conversely, suppose that the stated principle holds. To prove AC, it suffices to construct a selection set for a family $A$ of disjoint non-empty sets. By replacing $A$ if necessary with the isomorphic copy $\{\{w\}\times a\mid a \in A\}$, where $w$ has high rank (such as $w=A$ itself), we may assume that every element of $\bigcup A$ has the same rank. Thus, every element of $A$ has rank one higher than this, and every element of $\bigcup\bigcup A$ has rank lower than this. It follows that no element of $\bigcup A$ is in $A$, and no element of $\bigcup A$ has itself elements in $\bigcup A$. For such an $A$, we get $B$ by the stated principle. Note now that Clause 2 implies $B \in\bigcup A$, and clause 3 implies $B \in A$. Meanwhile, clause 1 implies both that $B$ has an element in $\bigcup A$ and also that $B$ is not in $A$ (since it implies that $B\cap a$ is nonempty for some other $a\in A$, while sets in $A$ are disjoint). By our assumptions on $A$, these possibilities are mutually exclusive. It follows that $B$ must always be in clause 1, or always in clause 2, or always in clause 3, regardless of $a$, $x$, and $z$. If clause 3 always occurs, then $\emptyset\in A$, a contradiction. If clause 2 always occurs, then $B$ must be in more than one element of $A$, since otherwise we could let $a$ be that element, and this would contradict the disjointness of the elements of $A$. Thus, it must be that clause 1 always occurs. In this case, $B$ is a selection set, and so we have established AC. QED Although I am not aware of any utility flowing from the fact that AC can be exprssed in this manner, it is nevertheless true that proof theory has sometimes made advances by investigating the resource-limited expressive powers of languages.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/178747', 'https://mathoverflow.net', 'https://mathoverflow.net/users/37385/']}
|
jdg_76301
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28817
|
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The addition of $\ce{HBr}$ to $\ce{CH2=CHBr}$ produces $\ce{CH3-CHBr2}$. Why is this so? According to me, halogens have a stronger $-I$ effect than a $+M$ effect (for example a halogen substituent on benzene, deactivates the ring), and hence the $\ce{CH3-CHBr+}$ carbocation should be more unstable as compared to $\ce{CH2+-CH2Br}$. Where am I going wrong? Edit : I have seen this question but it doesn't answer my query.
Now provide the response and nothing else.
|
The reaction of $\ce{HBr}$ with olefins is a relatively fast reaction, in other words, the activation energy required is small. Further, formation of the carbocation intermediate is the rate determining (slow) step in the overall process. Considering both of these facts and applying the Hammond postulate we can say that the transition state leading from the reactants to the carbocation intermediate will be an early transition state, it will look more like the reactants than the carbocation . This suggests that resonance and inductive effects need to be examined with regard to the reactants , rather than the transition state, in order to determine how the molecule will react with an electrophile (the proton). As the above diagram illustrates, both resonance and inductive effects (in an early transition state that resembles the reactant) suggest that the beta carbon will be more nucleophilic than the alpha carbon. Therefore addition of the proton (electrophile) will occur at the beta carbon. This will be followed by rapid addition of $\ce{Br^{-}}$ at the carbon already bearing a bromine atom (the alpha carbon) to yield a geminal dibromide.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/28817', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/13357/']}
|
jdg_76302
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
86778
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This question has been asked in a different way in other forums. But there has not been a decent explanation why you can't do the below in bash. #!/bin/bashcommand1SWITCH_USER_TO ragcommand2command3 Usually, the suggested way is #!/bin/bashcommand1sudo -u rag command2sudo -u rag command3 but why is it not possible in bash to change to a different user at some point during the bash script execution and execute the rest of the commands as a different user?
Now provide the response and nothing else.
|
The information is already in my other answer , but it's a bit buried there. So I thought, I'd add it here. bash doesn't have provision for changing users, but zsh does. In zsh , you change users by assigning values to those variables: $EUID : change the effective user id (and nothing else). Usually, you can change your euid between your real userid and the saved set user id (if called from a setuid executable) or change to anything if your euid is 0. $UID : change the effective user id, real user id and saved set user id to the new value. Unless that new value is 0, there's no coming back, as once all 3 have been set to the same value, there's no way to change it to anything else. $EGID and $GID : same thing but for group ids. $USERNAME . That is like using sudo or su . It sets your euid, ruid, ssuid to the uid of that user. It also sets the egid, rgid and ssgid and supplementary groups based on the group memberships as defined in the user database. Like for $UID , unless you set $USERNAME to root , there's no coming back, but like for $UID , you can change the user only for a subshell. If you run these scripts as "root": #! /bin/zsh -UID=0 # make sure all our uids are 0id -u # run a command as rootEUID=1000id -u # run a command as uid 1000 (but the real user id is still 0 # so that command would be able to change its euid to that. # As for the gids, we only have those we had initially, so # if started as "sudo the-script", only the groups root is a # member of.EUID=0 # we're allowed to do that because our ruid is 0. We need to do # that because as a non-priviledged user, we can't set our euid # to anything else.EUID=1001 # now we can change our euid since we're superuser again.id -u # same as above Now, to change user as in sudo or su , we can only do it by using subshells, otherwise we could only do it once: #! /bin/zsh -id -u # run as root( USERNAME=rag # that's a subshell running as "rag" id # see all relevant group memberships are applied)# now back to the parent shell process running as root( USERNAME=stephane # another subshell this time running as "stephane" id)
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/86778', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/23448/']}
|
jdg_76303
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
64082588
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm developing ASP Core Web API using dotnet core v3.1. I'm using JWT tokens for authentication. And for authorization I use the [Authorize] attribute. How can I create my own response if the user is not logged in (while trying to access the action marked with the [Authorize] attribute) or the user's token is not authenticated. I came across a solution using a custom authorization attribute inherited from the default one. And in this example, the HandleUnauthorizedRequest method is overridden. But I don't see such a method inside the AuthorizeAttribute class. Is there a way to create custom unauthorized responses with http body?
Now provide the response and nothing else.
|
Since you are using JWT bearer authentication, one way to override the default Challenge logic (which executes to handle 401 Unauthorized concerns) is to hook a handler to the JwtBearerEvents.OnChallenge callback in Startup.ConfigureServices : services.AddAuthentication().AddJwtBearer(options =>{ // Other configs... options.Events = new JwtBearerEvents { OnChallenge = async context => { // Call this to skip the default logic and avoid using the default response context.HandleResponse(); // Write to the response in any way you wish context.Response.StatusCode = 401; context.Response.Headers.Append("my-custom-header", "custom-value"); await context.Response.WriteAsync("You are not authorized! (or some other custom message)"); } };}); This will override the default challenge logic in JwtBearerHandler.HandleChallengeAsync , which you can find here for reference purposes. The default logic does not write any content to response (it only sets the status code and set some headers). So to keep using the default logic and add content on top of it, you can use something like this: options.Events = new JwtBearerEvents{ OnChallenge = context => { context.Response.OnStarting(async () => { // Write to the response in any way you wish await context.Response.WriteAsync("You are not authorized! (or some other custom message)"); }); return Task.CompletedTask; }};
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/64082588', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13851956/']}
|
jdg_76304
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32204184
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using the new android.support.design.widget.TabLayout of v7-appcompat library, and found a problem, there is no way to set the divider between the tabs, dont know if there is. I have successfully configured the pager adapter and the tabs are looking good but cant set the divider between the tabs. I want this type of tabs Tab1 | Tab2 | Tab3 but currently its showing Tab1 Tab2 Tab3 My xml is <android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent" > <android.support.design.widget.AppBarLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar" > <include layout="@layout/toolbar" /> <android.support.design.widget.TabLayout android:id="@+id/tablayout" android:layout_width="match_parent" android:layout_height="wrap_content" android:background="@drawable/shape_tabbar_background" app:tabIndicatorColor="@android:color/white" app:tabIndicatorHeight="4dp" /> </android.support.design.widget.AppBarLayout> <android.support.v4.view.ViewPager android:id="@+id/viewpager" android:layout_width="match_parent" android:layout_height="match_parent" app:layout_behavior="@string/appbar_scrolling_view_behavior" /></android.support.design.widget.CoordinatorLayout> I am adding tabs by this viewPager = (ViewPager) findViewById(R.id.viewpager); viewPager.setOffscreenPageLimit(2); adapter = new TabAdapterLoginActivity(getSupportFragmentManager(), titles); viewPager.setAdapter(adapter); tabLayout = (TabLayout) findViewById(R.id.tablayout); tabLayout.setupWithViewPager(viewPager);
Now provide the response and nothing else.
|
There is a way to add divider by using Tab setCustomView method: TabLayout tabLayout = (TabLayout) findViewById(R.id.tablayout);tabLayout.setupWithViewPager(viewPager);for (int i = 0; i < tabLayout.getTabCount(); i++) { TabLayout.Tab tab = tabLayout.getTabAt(i); RelativeLayout relativeLayout = (RelativeLayout) LayoutInflater.from(this).inflate(R.layout.tab_layout, tabLayout, false); TextView tabTextView = (TextView) relativeLayout.findViewById(R.id.tab_title); tabTextView.setText(tab.getText()); tab.setCustomView(relativeLayout); tab.select();} Tab custom layout with divider (tab_layout.xml): <?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" ><!-- Tab title --><TextView android:id="@+id/tab_title" android:layout_width="match_parent" android:layout_height="match_parent" android:gravity="center" android:textColor="@drawable/tab_item_selector"/><!-- Tab divider --><View android:layout_width="1dp" android:layout_height="match_parent" android:layout_alignParentLeft="true" android:background="@android:color/black" /></RelativeLayout> Set TabLayout tab horizontal padding to 0dp : <android.support.design.widget.TabLayout android:id="@+id/tablayout" android:layout_width="match_parent" android:layout_height="wrap_content" android:background="@drawable/shape_tabbar_background" app:tabIndicatorColor="@android:color/white" app:tabIndicatorHeight="4dp" app:tabPaddingStart="0dp" app:tabPaddingEnd="0dp" /> And a selector for tab title text color when it's selected (tab_item_selector.xml): <?xml version="1.0" encoding="utf-8"?><selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_selected="true" android:color="@color/abc_primary_text_material_dark" /> <item android:state_focused="true" android:color="@color/abc_primary_text_material_dark" /> <item android:state_pressed="true" android:color="@color/abc_primary_text_material_dark" /> <item android:color="@color/abc_secondary_text_material_dark" /></selector>
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/32204184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3789686/']}
|
jdg_76305
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
233571
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a commercial software developer and I want to make use of open-source in my proprietary software. As I understand the concept of copy-left, it uses the original authors copyright to ensure that their ultimate end users (which would include my customers) are not deprived of benefits of the original work that the original authors of the OS components intended for them to have. I also understand that there is an incentive to produce more free software and I often find that sites that clarify and recommend various licensing terms are mixing the legalities with the ideology. All I am asking for is a clear picture. The people who want to create OS exclusively for other OS projects should be allowed to do so. The people who want to put code out there for anybody to use should be allowed to do so. Also, there are many claims which have no legal precedence and are not as clear cut as authors of OS interest sites would like them to be. And we all know that code is not just code. There is a difference between: Copying somebodys source code into your source code. Consuming a library compiled as-is from github. Consuming a library compiled from a locally altered version from github. Let's pretend that I use an open source library for accessing Excel spreadsheets. It makes sense that I would contribute back bugfixes/additions etc that had anything to do with the general problem of accessing excel spreadsheets. This is to make an excel plugin for an application that already accepts XML, SQL db etc as alternative input mechanisms for the same data. ** Does this mean that the application is considered "derivative" even when it does not expand into the problem domain of importing excel sheets? **
Now provide the response and nothing else.
|
From a copyright standpoint, a program that is statically linked to a library is a derived work of that library. The reasoning to reach that conclusion is as follows: The (binary) code of the library is physically contained in the program when the library is statically linked. This makes the (binary) program a derived work of the (binary) library. The (binary) code of the library is derived from the library source code by way of translation. This makes the binary code a derived work of the source code. For dynamically linked libraries, it is less clear if a program that uses them is legally a derived work under copyright law. To my knowledge, even the law experts don't agree on that, but at least the copyleft proponents claim that it is. What all of this means for using programs using a library in a program depends on the copyright license that the library is distributed under and the amount of legal risk that you are willing to take. If the library is distributed with a copyleft license, then the license applies terms to the distribution of all derived works, including the binary programs that use the library and one of those terms is that source code must be provided. If the library is distributed with a non-copyleft license, then the license only applies terms to the distribution of the library itself (and machine-translated derived works, like the binary version). This means that you can typically use a non-copyleft library in a closed source project without problem. There is no need to disclose the source of the application using the library. Whether you need to distribute the source code for changes you made to the library depends on the exact license that the library is distributed under. If you do the same with a copyleft library, then you had better seek good legal council to determine if you are or are not violating the license terms.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/233571', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/30376/']}
|
jdg_76306
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
706094
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When a pulse on a rope gets reflected off a fixed boundary, it phase shifts 180° and inverts itselfs. If particles are also waves, will doing an equivalent thing create antiparticles? Is it possible for the quantum field to have non conductive properties in certain conditions and thus allow for the invertion of particles?
Now provide the response and nothing else.
|
Here is a simulation of the Schrödinger equation I made a while ago: In the gif you see the Schrödinger equation for a single, one-dimensional particle. The particle moves on the left-to-right axis and the other two axes show the real and imaginary parts of the wavefunction; the value of the wavefunction consists of complex numbers. If you have never heard of them I invite you to learn more about them because they are used extensively in quantum physics and also other areas of math and physics. The distance $^\dagger$ from the red line to the axis shows the probability to find a particle at that certain location: a large distance means a high probability. The boundary conditions is a fixed boundary just like in your example. What you see after the reflection is just the same particle but travelling in the other direction. No particles are created/destroyed. Regular quantum mechanics mostly works with a fixed number of particles; if you want to model particle creation/annihilation you have to resort to Quantum Field Theory, which is even more complicated than quantum mechanics. Also the creation of antiparticles is limited by strict rules. To name an example a photon can be used to create an electron/anti-electron pair . An anti-electron is also called a positron. They come in a pair because charge and something called "lepton number" has to be conserved. Also the photon has to have enough energy to ensure energy conservation. It also has to happen near a nucleus because otherwise momentum cannot be conserved: we can position ourselves such that the electron/positron together have zero momentum but a lone photon can never have zero momentum. $\dagger$ actually the distance squared but this is just a detail.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/706094', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/226419/']}
|
jdg_76307
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
498433
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following for creating a Global Object is resulting in compilation errors. #include "stdafx.h" #include <iostream> using namespace System; using namespace std; #pragma hdrstop class Tester;void input();class Tester{ static int number = 5;public: Tester(){}; ~Tester(){}; void setNumber(int newNumber) { number = newNumber; } int getNumber() { return number; }}Tester testerObject;void main(void){ cout << "Welcome!" << endl; while(1) { input(); }}void input(){ int newNumber = 0; cout << "The current number is " << testerObject.getNumber(); cout << "Change number to: "; cin >> newNumber; cout << endl; testerObject.setNumber(newNumber); cout << "The number has been changed to " << testerObject.getNumber() << endl;} Here are the compile errors: 1>------ Build started: Project: test, Configuration: Debug Win32 ------1>Compiling...1>test.cpp1>.\test.cpp(15) : error C2864: 'Tester::number' : only static const integral data members can be initialized within a class1>.\test.cpp(33) : error C2146: syntax error : missing ';' before identifier 'testerObject'1>.\test.cpp(33) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int1>.\test.cpp(49) : error C2039: 'getNumber' : is not a member of 'System::Int32'1> c:\windows\microsoft.net\framework\v2.0.50727\mscorlib.dll : see declaration of 'System::Int32'1>.\test.cpp(55) : error C2039: 'setNumber' : is not a member of 'System::Int32'1> c:\windows\microsoft.net\framework\v2.0.50727\mscorlib.dll : see declaration of 'System::Int32'1>.\test.cpp(57) : error C2039: 'getNumber' : is not a member of 'System::Int32'1> c:\windows\microsoft.net\framework\v2.0.50727\mscorlib.dll : see declaration of 'System::Int32'1>Build log was saved at "file://c:\Users\Owner\Documents\Visual Studio 2008\Projects\test\test\Debug\BuildLog.htm"1>test - 6 error(s), 0 warning(s)========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== How do I create a Global ClassObject correctly like I've attemptedhere. And how do I fix that "only staticconst integral data members can beinitialized within a class" And basically how do I fix the restof the errors so I can get this tocompile? I like declaring Global Class Objects at file scope (I like declaring all globals at file scope) because when I have to create separate source files and do "extern" and everything it becomes extremely complicated and never works for me. Although, I do want to figure out how to do that eventually... it seems every tutorial I look at won't compile though and unless it compiles I have no idea how to recreate it! If I can just get this to compile...then I can successfully learn how to do this. So if someone could rewrite the above to where it literally copies & pastes into Visual C++ Express 2008 and works I will finally be able to figure out how to recreate it. I'm extremely excited on seeing the fix for this! It is just I can't get Global Objects to work right! Any other information on declaring Global Class Objects...or anything for that matter is welcome!
Now provide the response and nothing else.
|
Just start addressing the errors one by one. A lot of the errors are just cascaded from the initial errors, so it looks like there are a lot of problems when there's only a couple. Just start from the top: 1>.\test.cpp(15) : error C2864: 'Tester::number' : only static const integral data members can be initialized within a class You can't initialize a member in the class definition unless it's static, const, and one of the integral types. Leave the " = 5 " off of the declaration of number . Then you'll need to have a definition of Tester::number outside of the class definition, like so: int Tester::number = 5; Problem #2: 1>.\test.cpp(33) : error C2146: syntax error : missing ';' before identifier 'testerObject' Almost exactly what it says (missing semi-colon errors can be a bit inexact in saying where the semicolon should be) - you need a semi-colon after the definition of the Tester class. Fix those and your compilation problems go away. The key thing is to try and take compiler errors one at a time from the top. If you get more than about 3 of them, you can probably just ignore everything after the 3rd or so because the initial error just cause the compile to into the weeds (and if they are real errors, they'll show up again in the next compile anyway).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/498433', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/60615/']}
|
jdg_76308
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8742454
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have created a .NET Windows Service which performs certain actions and generates reports. These reports are XPS documents which I save in a certain directory. Being familiar with WPF, the way I have chosen to create the reports is to instantiate a System.Windows.Documents.FixedDocument , adding FixedPage objects with content as required. My problem is that the Service memory usage goes up and up and up over time as it runs. At first, I went through my code rigorously, ensuring all disposable objects were disposed, etc, and other obvious memory leak candidates, but still had the problem. I then used the CLR Profiler to look at the memory usage of the Service in detail. I found that as the service generates these FixedDocument reports, and saves them as XPS files, all the various UI elements associated with FixedDocument objects ( Dispatcher , FixedPage , UIElementCollection , Visual , etc) are staying in memory. This doesn't seem to happen when I do the same thing in my WPF apps, and so my hunch is that it has something to do with the WPF UI Dispatcher model being used outside of a WPF app. How can I "dispose" my FixedDocument objects when using them in a service like this (or outside a WPF app in general)? ======== EDIT ========= OK, I've found that my memory leak is not specifically to do with creating/populating a FixedDocument. If I do so, but don't actually ever save it to disk as a XPS, the memory leak doesn't happen. So, my problem must be to do with the save as XPS file. Here's my code: var paginator = myFixedDocument.DocumentPaginator;var xpsDocument = new XpsDocument(filePath, FileAccess.Write);var documentWriter = XpsDocument.CreateXpsDocumentWriter(xpsDocument); documentWriter.Write(paginator);xpsDocument.Close(); What I've tried: Manual garbage collection Calling UpdateLayout() on each page of myFixedDocument before getting it's paginator (as suggested in the answer below) - I've also tried passing myFixedDocument directly into Write() i.e. not the paginator Putting those lines of code in their own thread and manually shutting down Dispatchers Still no luck. ========== WORKAROUND ========== By isolating the above code into its own AppDomain using the general method shown in the example at http://msdn.microsoft.com/en-us/library/system.appdomain.aspx , the memory leak no longer affects my service (I say "no longer affects" because it still happens, but when the AppDomain is unloaded, all leaked resources are unloaded with it). I would still be keen to see a real solution. (On a related note, for those interested, using a separate AppDomain caused a memory leak in the PDFSharp component I was using to turn certain XPS files into PDF files. Turns out PDFSharp uses a global font cache that in normal circumstances doesn't grow significantly. But the cache was growing and growing after using these AppDomains. I edited the PDFSharp source code to enable me to manually clear out the FontDescriptorStock and FontDataStock, solving the issue.) ========== SOLUTION ========== See my answer below for final solution.
Now provide the response and nothing else.
|
I eventually found an answer, which is two parts. Firstly, after saving my XPS document to disk and closing/disposing the XpsDocument , I run the following line of code: Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.SystemIdle, new DispatcherOperationCallback(delegate { return null; }), null); This gets rid of all the Dispatcher objects hanging around in memory. While the above sorts out most of the memory issues, I noticed there were still FixedPage objects along with other UI objects still in memory. Manually clearing out my FixedDcoument seems to get rid of them: foreach (var fixedPage in FixedDocument.Pages.Select(pageContent => pageContent.Child)) { fixedPage.Children.Clear();}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8742454', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/672827/']}
|
jdg_76309
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
315565
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say our data set contains 1 million examples, i.e., $x_1, \ldots, x_{10^6}$, and we wish to use gradient descent to perform a logistic or linear regression on these data set. What is it with the gradient descent method that makes it inefficient? Recall that the gradient descent step at time $t$ is given by: $$w_{t+1} = w_{t} + \eta_t \nabla f(x)$$ where $f$ is the loss function. I am not seeing anything out of the ordinary with the above step that causes the algorithm to be inefficient. Is it the computation of $\nabla f(x)$? Couldn't this operation be pre-computed, i.e., each $\frac{\partial f}{\partial x}$ already computed, and simply evaluate them at each data point $x_i?$
Now provide the response and nothing else.
|
First let me suggest an improvement to your notation. In particular, let's denote the loss function by $L(w)$ rather than $f(x)$. Using the letter $L$ is simply a personal preference of mine since it reminds me that we're dealing with the L oss. The more substantive change is making it clear that the loss is a function of the weights $w$ rather than the data $x$. Importantly, the gradient is with respect to $w$ not $x$. So$$\nabla L(w) = \left(\frac{\partial L}{\partial w_1}, \dots, \frac{\partial L}{\partial w_D} \right),$$where $D$ is the dimensionality of your data. Despite the fact that we should think of the loss as a function of the weights $w$, any reasonable loss function will still depend on the entire dataset $x$ (if it didn't, it wouldn't be possible to learn anything from the data!). In linear regression, for example, we typically use the sum-of-squares loss function$$L(w) = \sum_{i=1}^N (y_i - w^Tx_i)^2.$$So evaluating the gradient $\nabla L(w)$ for a particular set of weights $w$ will require a sum over all $N$ points in the dataset $x$. If $N = 10^6$, then every incremental step in the gradient descent optimization will require on the order of a million operations, which is quite expensive.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/315565', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/58450/']}
|
jdg_76310
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
550959
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given 2k pieces on a k by k chessboard, prove that there is always a sequence of pieces $K_1, K_2 \ldots K_{2n}$ such that $K_1$ and $K_2$ are in the same row, $K_2$ and $K_3$ are in the same column, $K_3$ and $K_4$ are in the same row ... $K_{2n-1}$ and $K_{2n}$ are in the same row, and $K_{2n}$ and $K_1$ are in the same column. This sequence does not necessarily contain all the pieces on the board.
Now provide the response and nothing else.
|
Another way to do this is motivated by graph theory. Let $R=\{r_1,r_2,\dots, r_k\}$ and $C=\{c_1,c_2,\dots,c_k\}$, and consider $R\cup C$ as a set of vertices. For each position $(i,j)$ on the chessboard with a checker, draw an edge between the vertices $r_i$ and $c_j$. This will create a graph with $2k$ vertices and $2k$ edges. A graph with at least as many edges as vertices must contain a cycle. Follow the cycle (which must alternate between vertices in $R$ and vertices in $C$ by construction, so must have even length) to get your sequence.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/550959', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/96217/']}
|
jdg_76311
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42417018
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to have multiple fixed-top navbars in Bootstrap 4 showing under each other? The 1 currently overlays the other which is not what I want.
Now provide the response and nothing else.
|
Yes, it's possible but you have to position the 2nd one accordingly. The height of the Navbar is ~56px. .fixed-top-2 { margin-top: 56px;}body { padding-top: 105px;}<nav class="navbar navbar-toggleable-sm bg-faded navbar-light fixed-top fixed-top-2"> <button class="navbar-toggler navbar-toggler-right" type="button" data-toggle="collapse" data-target="#navbar1"> <span class="navbar-toggler-icon"></span> </button> <a href="/" class="navbar-brand">One</a> <div class="navbar-collapse collapse" id="navbar1"> <ul class="navbar-nav"> .. </ul> </div></nav><nav class="navbar navbar-toggleable-sm bg-inverse navbar-inverse fixed-top"> <button class="navbar-toggler navbar-toggler-right" type="button" data-toggle="collapse" data-target="#navbar2"> <span class="navbar-toggler-icon"></span> </button> <a href="/" class="navbar-brand">Two</a> <div class="navbar-collapse collapse" id="navbar2"> <ul class="navbar-nav"> .. </ul> </div></nav> Demo: Bootstrap 4 Multiple fixed-top Navbars In some scenarios, it may be better to use a single fixed-top DIV to contain both . Also see: Bootstrap 4 Navbar sticky after header Bootstrap 4 collapsing two navbars into one toggle button
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42417018', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7611264/']}
|
jdg_76312
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13095488
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a learning exercise. I'm attempting to augment memcpy by notifying the user if the copy operation will pass or fail before it begins. My biggest question is the following. If I allocate two char arrays of 100 bytes each, and have two pointers that reference each array, how do I know which direction I am copying? If I copy everything from the first array to the second how do I ensure that the user will not be overwriting the original array? My current solution compares the distance of the pointers from the size of the destination array. If the size between is smaller than I say an overwrite will occur. But what if its copying in the other direction? I'm just kind of confused. int memcpy2(void *target, void *source, size_t nbytes) { char * ptr1 = (char *)target; char * ptr2 = (char *)source; int i, val; val = abs(ptr1 - ptr2); printf("%d, %d\n", val, nbytes + 0); if (val > nbytes) { for (i = 0; i < val; i++){ ptr1[i] = ptr2[i]; } return 0; /*success */ } return -1; /* error */}int main(int argc, char **argv){ char src [100] = "Copy this string to dst1"; char dst [20]; int p; p = memcpy2(dst, src, sizeof(dst)); if (p == 0) printf("The element\n'%s'\nwas copied to \n'%s'\nSuccesfully\n", src, dst); else printf("There was an error!!\n\nWhile attempting to copy the elements:\n '%s'\nto\n'%s', \n Memory was overlapping", src, dst); return 0;}
Now provide the response and nothing else.
|
The only portable way to determine if two memory ranges overlap is: int overlap_p(void *a, void *b, size_t n){ char *x = a, *y = b; for (i=0; i<n; i++) if (x+i==y || y+i==x) return 1; return 0;} This is because comparison of pointers with the relational operators is undefined unless they point into the same array. In reality, the comparison does work on most real-world implementations, so you could do something like: int overlap_p(void *a, void *b, size_t n){ char *x = a, *y = b; return (x<=y && x+n>y) || (y<=x && y+n>x);} I hope I got that logic right; you should check it. You can simplify it even more if you want to assume you can take differences of arbitrary pointers.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13095488', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/453261/']}
|
jdg_76313
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12783339
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to add a basic click handler to an element using ID. Normally I would use jQuery, but I have forgotten how to do it in vanilla js. What is the best way to do this, without using jQuery: $("#myBasic").click(function(){ alert(“testString”);});
Now provide the response and nothing else.
|
Very basic way is to just assign a function to the on[event] property of the element document.getElementById("myBasic").onclick = function() { alert("testString"); }; This has very broad browswer support, and is useful for most basic cases There are other event handling features as well, like .addEventListener(evt, fn, capture) , but IE only started supporting it in IE9. Before that, you'd need to use attachEvent(onevt, fn) , which is similar, but not identical. The main trouble with .attachEvent() is that it doesn't give you an automatic reference to the element that bound the handler via this . But there are workarounds for that. To create a fairly browser compatible solution, you could do something like this: function addListener(elem, type, fn) { if (elem.addEventListener) { elem.addEventListener(type, fn, false); } else if (elem.attachEvent) { elem.attachEvent("on" + type, function() { return fn.call(elem, window.event); }); } else { elem["on" + type] = fn; }} And you'd call it like this: var el = document.getElementById("myBasic");addListener(el, "click", function(e) { alert(this.id); }); This should cover most situations, though I'd be a little concerned about memory leaks for the .attachEvent version. All this only deals with binding the handler. To obtain consistent cross-browser behavior, you'd likely need to have a fix ready for the event object.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12783339', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/106261/']}
|
jdg_76314
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12598221
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following example template<class Type = void> class MyClass{ public: double getValue() { // if "Type == void" return _x, if "Type != void" return _y return (/* SOMETHING */) ? (_x) : (_y); } protected: double _x; static const double _y;}; What could be the /* SOMETHING */ condition ? I want to return _x if the template parameter is void, and return _y if not. How to do that ?
Now provide the response and nothing else.
|
First off, you can't return anything because the function return type is void . (fixed) Second, you can specialize that function to act differently when Type is void : template<class Type> class MyClass{ public: double getValue() { return _y; } protected: double _x; static const double _y;}; template<>inline double MyClass<void>::getValue(){ return _x;}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12598221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/882932/']}
|
jdg_76315
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
115492
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the best way to backup a SQL Server 2005 database nightly? I'm looking for a solution/strategy that would make the database backup file, and place it in an "outgoing" directory. We're wanting our FTP software to move it out to an offsite server. Any suggestions on how to make this work as desired?
Now provide the response and nothing else.
|
You can do this free. Create a maintenance plan to back up the DB, you can define the location you want to send the file, and add a clean up task. If it's express and you can't use maint plans, use this tool to run the backup job automatically: http://www.codeplex.com/ExpressMaint and use a simple VB script to clean up the folder as a windows scheduled task Then create a script to FTP the logs home nightly as windows scheduled task. We do pretty the exact same thing in one of our setups. If the offsite server is on a constant VPN with the DB Server, you could DFS the backup folder. EDIT: If you are wanting to this as "extra" full backup along side local bks, you'll need to use a TSQL statement for the backup job in the plan, and throw in the "COPY_ONLY" for the backup, so differentials aren't using that as their reference, but your local fulls as planned.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/115492', 'https://serverfault.com', 'https://serverfault.com/users/19511/']}
|
jdg_76316
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2251034
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Nhibernate but still i am confused about (lazy-loading and Eagerly loading) these two topics due to poor understanding of Nhibernate. Please define me lazy-loading and Eagerly loading in simple words.Also why we use castle.Dynamic Proxy ?
Now provide the response and nothing else.
|
Lazy loading and eager loading are concepts that must be present in every ORM.Let's take a parent-child relation: class Parent { String name; List<Child> childs;}class Child { String name;} When you load an element of Parent, the ORM must decide if it loads the childs collection also (through a join for example) or if it delays the query to load the childs until you actually access the collection parent.childs.get(0) When you load the childs collection ahead, ie, before accessing the collection, it is eagerly loading because you are expecting to access the childs. This can be done in a single query, with the disadvantage of bringing more data from the DB. When you delay the load until the actual collection is accessed, it is called lazy loading, because you are only getting the childs when they are strictly required. This has the benefit of getting only the data when its needed, at the cost of extra queries (for mor on this topic you can query google for "N+1 select hibernate" for example). If you want to trigger the query for retrieving the childs when the collection is being accessed, you need some sort of callback/interception on the childs collection. This is done through a proxy on the collection, so you can intercept every access to the collection to get data from the db. That's why you need a proxy/interception library such as Castle.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2251034', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217603/']}
|
jdg_76317
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
45707
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is part of problem 5.23 in A First Course in Linear Model Theory, Dey and Ravishanker. It was on a previous midterm and I didn't know how to do it, but now I am studying for the final and would like to figure it out. Suppose $\mathbf{x} \sim N_{m}(\mu, \Sigma)$ and $A$ is a symmetric idempotent matrix with rank $p<m$. Edit: Show that $Cov(\mathbf{x}, \mathbf{x}^\prime A \mathbf{x})=2\Sigma A \mu$ I can find $E(\mathbf{x}^\prime A \mathbf{x})$ and $Var(\mathbf{x}^\prime A \mathbf{x})$ using the formula for the cumulants of $\mathbf{x}^\prime A \mathbf{x}$, $$\kappa (\mathbf{x}^\prime A \mathbf{x})=2^{r-1}(r-1)![tr(A\Sigma)^r+r\mu^\prime A(\Sigma A)^{r-1}\mu] $$ but I don't see how I can get the covariance above using the cumulants without more information. Somehow I need to calculate $E(\mathbf{x} \mathbf{x}^\prime A \mathbf{x})$ but I can't figure out how to find this quantity. Hints at how to get started would be great.
Now provide the response and nothing else.
|
The standard normal distribution makes everything OK. If $\newcommand{\Cov}{\mathrm{Cov}}\mathbf{x}\sim N_m(\mu , \Sigma)$, there exists a matrix $V$ such that $VV^\prime=\Sigma$ and $\mathbf{x}=\mu+V\mathbf{z}$, where $\mathbf{z}\sim N_m(\mathbf{0}, \mathbf{I})$. Then my problem reduces to finding, $$\begin{equation}\begin{split}\Cov(\mu+V\mathbf{z},(\mu+V\mathbf{z})^\prime A (\mu+V\mathbf{z}))&=V\Cov(\mathbf{z},\mathbf{z}^\prime V^\prime AV\mathbf{z})+V\Cov(\mathbf{z},\mathbf{z}^\prime V^\prime A \mu)\\&\quad+V\Cov(\mathbf{z},\mu^\prime AV\mathbf{z})\end{split}\end{equation}$$ Note that $\mathbf{z}\mathbf{z}^\prime V^\prime AV\mathbf{z} $ is equal in distribution to $(-\mathbf{z})(-\mathbf{z})^\prime V^\prime AV(-\mathbf{z})=-\mathbf{z}\mathbf{z}^\prime V^\prime AV\mathbf{z} $, so $$\Cov(\mathbf{z},\mathbf{z}^\prime V^\prime AV\mathbf{z})=E(\mathbf{z}\mathbf{z}^\prime V^\prime AV\mathbf{z})=\mathbf{0}.$$ The term $\mathbf{z}^\prime V^\prime A \mu$ is a scalar and is equal to its transpose, giving $$\Cov(\mathbf{z},\mathbf{z}^\prime V^\prime A \mu)=\Cov(\mathbf{z},\mu^\prime AV\mathbf{z})=E(\mathbf{z}\mathbf{z}^\prime V^\prime A \mu)=E(\mathbf{z}\mathbf{z}^\prime)V^\prime A\mu=V^\prime A\mu.$$ The result follows since $VV^\prime A \mu=\Sigma A\mu$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/45707', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/12518/']}
|
jdg_76318
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32365692
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For help you help me i'm going to illustrate the problem with an example: var SuperClass = function() { this.propertieOfSuperClass = 'A'; this.init();};SuperClass.prototype.init = function() { console.log(this.propertieOfSuperClass); // ouput 'A';}; // Some code here to do the magic for SubClass1// Some code here to do the magic for SubClass2 SubClass1: var SubClass1 = function() { this.methodOfSubClass1();};SubClass1.prototype.methodOfSubClass1 = function() { console.log(this.propertieOfSuperClass); // output 'A';}; SubClass2: var SubClass2 = function() { this.methodOfSubClass2();};SubClass2.prototype.methodOfSubClass = function() { console.log(this.propertieOfSuperClass); // output 'A';}; I want be able to have this SuperClass which i set the properties and other two subclasses which i can access the properties of SuperClass, but without lose the scope. I was trying to use inside of my SuperClass init method: SubClass1.call(this);SubClass2.call(this); And this will make the properties of SuperClass accessible, but the SubClass's will lost their scope, so i cant call methodOfSubClass1 and methodOfSubClass2 , because they doesn't exist in the SuperClass . This could be solved? Thanks very much in advance.
Now provide the response and nothing else.
|
As already mentioned by Hot Licks you can use NSDictionary method isEqualToDictionary () to check if they are equal as follow: let dic1: [String: AnyObject] = ["key1": 100, "key2": 200]let dic2: [String: AnyObject] = ["key1": 100, "key2": 200]let dic3: [String: AnyObject] = ["key1": 100, "key2": 250]println( NSDictionary(dictionary: dic1).isEqualToDictionary(dic2) ) // trueprintln( NSDictionary(dictionary: dic1).isEqualToDictionary(dic3) ) // false you can also implement a custom operator "==" as follow: public func ==(lhs: [String: AnyObject], rhs: [String: AnyObject] ) -> Bool { return NSDictionary(dictionary: lhs).isEqualToDictionary(rhs)}println(dic1 == dic2) // trueprintln(dic1 == dic3) // false Xcode 9 • Swift 4 From the docs, dictionary is now defined as a struct: struct Dictionary<Key : Hashable, Value> : Collection, ExpressibleByDictionaryLiteral Description A collection whose elements are key-value pairs. A dictionary is a type of hash table, providing fast access to the entries it contains. Each entry in the table is identified using its key, which is a hashable type such as a string or number. You use that key to retrieve the corresponding value, which can be any object. In other languages, similar data types are known as hashes or associated arrays. Create a new dictionary by using a dictionary literal. A dictionary literal is a comma-separated list of key-value pairs, in which a colon separates each key from its associated value, surrounded by square brackets. You can assign a dictionary literal to a variable or constant or pass it to a function that expects a dictionary. Here’s how you would create a dictionary of HTTP response codes and their related messages: var responseMessages = [200: "OK", 403: "Access forbidden", 404: "File not found", 500: "Internal server error"] The responseMessages variable is inferred to have type [Int: String] . The Key type of the dictionary is Int , and the Value type of the dictionary is String . To create a dictionary with no key-value pairs, use an empty dictionary literal ([:]). var emptyDict: [String: String] = [:] Any type that conforms to the Hashable protocol can be used as a dictionary’s Key type, including all of Swift’s basic types. You can use your own custom types as dictionary keys by making them conform to the Hashable protocol. We don't need to define a custom operator anymore: From the docs: static func ==(lhs: [Key : Value], rhs: [Key : Value]) -> Bool Testing: let dic1 = ["key1": 100, "key2": 200]let dic2 = ["key1": 100, "key2": 200]let dic3 = ["key1": 100, "key2": 250]print(dic1 == dic2) // trueprint(dic1 == dic3) // false In the example above all dictionary keys and values are the same type.If we try to compare two dictionaries of type [String: Any] Xcode will complain that Binary operator == cannot be applied to two [String: Any] operands. let dic4: [String: Any] = ["key1": 100, "key2": "200"]let dic5: [String: Any] = ["key1": 100, "key2": "200"]let dic6: [String: Any] = ["key1": 100, "key2": Date()]print(dic4 == dic5) // Binary operator == cannot be applied to two `[String: Any]` operands But we can extend the == operator functionality implementing an infix operator, casting Swift Dictionary to NSDictionary and constraining the dictionary values to Hashable Protocol: Xcode 11 • Swift 5.1 public func ==<K, L: Hashable, R: Hashable>(lhs: [K: L], rhs: [K: R] ) -> Bool { (lhs as NSDictionary).isEqual(to: rhs)} Testing: let dic4: [String: AnyHashable] = ["key1": 100, "key2": "200"]let dic5: [String: AnyHashable] = ["key1": 100, "key2": "200"]let dic6: [String: AnyHashable] = ["key1": 100, "key2": Date()] print(dic4 == dic5) // trueprint(dic4 == dic6) // false let dic7: [String: String] = [ "key2": "200"]let dic8: [String: Date] = [ "key2": Date()]print(dic7 == dic8) // false
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/32365692', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_76319
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28860069
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple data frame that looks like this: x <- c("aa", "aa", "aa", "bb", "cc", "cc", "cc")y <- c(101, 102, 113, 201, 202, 344, 407)df = data.frame(x, y) x y1 aa 1012 aa 1023 aa 1134 bb 2015 cc 2026 cc 3447 cc 407 I would like to use a dplyr::filter() and a RegEx to filter out all the y observations that start with the number 1 I'm imagining that the code will look something like this: df %>% filter(y != grep("^1")) But I am getting an Error in grep("^1") : argument "x" is missing, with no default
Now provide the response and nothing else.
|
You need to double check the documentations for grepl and filter . For grep / grepl you have to also supply the vector that you want to check in (y in this case) and filter takes a logical vector (i.e. you need to use grepl ). If you want to supply an index vector (from grep ) you can use slice instead. df %>% filter(!grepl("^1", y)) Or with an index derived from grep : df %>% slice(grep("^1", y, invert = TRUE)) But you can also just use substr because you are only interested in the first character: df %>% filter(substr(y, 1, 1) != 1)
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28860069', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3731467/']}
|
jdg_76320
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
491863
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Please explain what the circled components are for. What is the difference between the \$V_{cc}\$ and \$V_{in}\$ , can they be the same?
Now provide the response and nothing else.
|
Before answering, we should know what the function of that circuit is. The function of the above circuit is when \$V_{in}\$ is high, the relay will work closing the switch so the circuit of the output load will work. But in case \$V_{in}\$ is low,the switch will open and the value of \$I_C\$ will suddenly change to be \$0\$ , making the inductor produce high current and the current find no path except the transistor causing damage for the transistor. So we treat that problem using a diode and the diode will work only when \$I_C=0\$ . So in the schematic below we can consider the transistor as a switch, so when the \$V_{in}\$ is high the switch will be closed and when \$V_{in}\$ is low, the switch will be open and the inductor(relay) will produce high current making very large voltage drop across the switch(transistor) damaging it. For \$R_s\$ and \$R\$ , they are used as current limiting resistors, and their values depend on the components used in the circuit and the gain needed to produce \$I_C\$ . For \$V_{in}\$ and \$V_{CC}\$ , them difference between the is that \$V_{in}\$ is square wave and \$V_{CC}\$ is constant dc voltage source.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/491863', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/248211/']}
|
jdg_76321
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1582662
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Does there exist a classification of finite topologies? I define a finite topology as a finite Set $T$ of Sets which respects the following properties: $\forall a,b \in T: a \cap b \in T$, $\forall a,b \in T: a \cup b \in T$, $ \emptyset \in T$, $\exists S\in T\ |\ \forall a \in T , a \subseteq S$. This seems like a natural thing to do in the vein of classifying finite groups, so i'm curious what current research in this area looks like.
Now provide the response and nothing else.
|
Finite topologies and finite preorders (reflexive & transitive relations) are equivalent: Let $T$ be a topological space with finite topology $\mathcal{O}$. Define $\leq$ on $T$ by: $$x\leq y \Leftrightarrow \forall U\in \mathcal{O} : x\in U \Rightarrow y\in U$$ Then $\leq$ is clearly a preorder, called the specialization order of $T$. Given a preorder $\leq$ on $T$, define the set $\mathcal{O}$ to be set of all upwards-closed sets in $(T,\leq)$, that is all sets $U$ with: $$\forall x,y\in T : x\leq y \text{ and } x\in U \Rightarrow y\in U$$ Then $\mathcal{O}$ is a topology, called the specialization topology or Alexandroff topology of $(T,\leq)$. The constructions are functorial and can be turned into an equivalence of categories $\mathsf{FinTop}$ and $\mathsf{FinPros}$ (I don't have time to work out the details right now, however).
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1582662', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58294/']}
|
jdg_76322
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12683015
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am creating lists that display in columns of three, every <li> has a border-bottom like so: x | x | x--------- x | x | x--------- z | z | z--------- <ul> <li>x</li><li>x</li><li>x</li> <li>x</li><li>x</li><li>x</li> <li>z</li><li>z</li><li>z</li></ul> What I wish to do is create an nth-child expression to remove the border-bottom on the last line of three, so for the above example that would be: ul li:nth-child(-n+6) { border-bottom:0;} The Problem However where it get's slightly more complicated is that the amount of items in the list varies so any of the following scenarios could come about: Scenario 1 x | x | x--------- x | x | x--------- z |--- <ul> <li>x</li><li>x</li><li>x</li> <li>x</li><li>x</li><li>x</li> <li>z</li></ul> Scenario 2 x | x | x--------- x | x | x--------- z | z |-------<ul> <li>x</li><li>x</li><li>x</li> <li>x</li><li>x</li><li>x</li> <li>z</li><li>z</li></ul> Scenario 3 x | x | x--------- z | z |-------<ul> <li>x</li><li>x</li><li>x</li> <li>z</li><li>z</li></ul> Conclusion My aim is to always remove the border-bottom on the last row (or in this example the character z ) so that it does not have the style applied to it. An ideal soloution would be: ul { padding-bottom:-20px;} But padding-bottom:-#px; is not supported in CSS. The only other way I can think of to do this is to create an nth-child expression to capture only rows that include 3, that aren't the last line? I guess it may need some sort of division by 3 to look for the amount to apply it too?
Now provide the response and nothing else.
|
So far, I've created a selector that applies a style to the last row /* all the cells */li { background: #ccc;}/* last row, doesn't matter how many cells */li:nth-last-child(-n + 3):nth-child(3n + 1), /* first element of the last row */li:nth-last-child(-n + 3):nth-child(3n + 1) ~ li /* all its following elements */{ /* reset the style, eg: */ background: transparent;} You can see a live example here http://jsbin.com/ufosox/1/edit Of course this doesn't support IE8 and less.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12683015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1072492/']}
|
jdg_76323
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1823780
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was given the following question: Let $V$ be an inner product space and let $u,v\in V$ be two nonzero vectors. Prove or disprove: If $\langle u,v\rangle=0$, then $u,v$ are linearly independent. If $u,v$ are independent, then $\langle u,v\rangle=0$. I know that $u,v$ are arthogonal if $\langle u,v\rangle = 0$.So, since $\langle u,v\rangle = 0$, and $u,v$ are non zero vectors can I claim linear independence between the vectors directly? And if so, how do I explain it? This just seems wrong... I don't see how linear independence leads to this vectors having inner product of zero, meaning they are orthogonal.Any help or direction would be very helpful.
Now provide the response and nothing else.
|
It is true that $\textbf{u}$ and $\textbf{v}$ are linearly independent with these assumptions, however, it is not sufficient to claim it based on intuition. You can show it as follows: Suppose $\alpha\textbf{u} + \beta\textbf{v} = \textbf{0}$. Then $$ 0 = \langle \textbf{v}, \textbf{0}\rangle = \langle \textbf{v}, \alpha\textbf{u} + \beta\textbf{v} \rangle = \alpha \langle \textbf{v}, \textbf{u}\rangle + \beta\langle\textbf{v}, \textbf{v}\rangle = 0 + \beta|\textbf{v}|^2.$$ You can conclude from here that $\beta = 0$ (why?). A similar calculation shows that $\alpha = 0$, from which you can conclude that $\textbf{u}$ and $\textbf{v}$ are linearly independent For part two it should be easy to come up with a counter example in $\mathbb{R}^2$ to find two linearly independent vectors that are not orthogonal. This will show the statement is false.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1823780', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/344298/']}
|
jdg_76324
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8748940
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
public ActionResult SomeAction(int Id){ //Id is set to 2 var model = //get some thing from db using Id(2); //Now model.Id is set to 9; return View(model);}[email protected](x => x.Id) When I view source this hidden field is set to 2 not 9. How do I get it to map to the model instead of mapping to the URL routing info? P.S. I'd prefer not to rename parameters because then I lose my nice looking url's unless i change the routing info. I have done that and it does work, but not what I want.
Now provide the response and nothing else.
|
When an Action gets called the framework builds a ModelStateCollection based on the query-string values, post-data, routing values etc. And this ModelStateCollection will be passed to the View . All the HTML input helpers try to the get the values from the ModelStateCollection first , before trying to get the values from the actual model. Because your input model is the int id but the output model is some new model the helpers will use the values from the ModelStateCollection (from the query string) because the propery names Id are match. To make it work you have to manually clear the ModelStateCollection before returning the new model to the view: public ActionResult SomeAction(int Id){ //Id is set to 2 ModelState.Clear(); var model = //get some thing from db using Id(2); //Now model.Id is set to 9; return View(model);}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8748940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/506016/']}
|
jdg_76325
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11745
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I've just installed SQL Server 2008 and I cannot connect to it with SQL Server Management Studio. My intention is to just let my local windows user be authenticated but I am not totally sure how to do that . In the connect dialog I'm writing: Server type: Database EngineServer name: (local)Authentication: Windows Authentication My first question would be if that is what I should connect to? When I try to connect I get this error message: TITLE: Connect to Server------------------------------Cannot connect to (local).------------------------------ADDITIONAL INFORMATION:A network-related or instance-specific error occurred while establishing a connectionto SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)(Microsoft SQL Server, Error: 2)For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476------------------------------BUTTONS:OK------------------------------ I went to the URL there displayed and it just basically says "be sure SQL server is running". I think it is but I am not totally sure. I've disabled the Windows Firewall (this is Windows 7 7100 x86). I've also changed the log on system in the SQL Server Configuration Manager but it seems it's not a problem of logging in but not even be able to open the socket to it. On that same tool I've enabled all the protocols on "SQL Server Network Configuration" -> "Protocols for SQLEXPRESS" with no luck. I run out of ideas. What else can I try?
Now provide the response and nothing else.
|
Ok, can you open your services console and scroll down to S for SQL Server. You should now see the services. Please ensure SQL Server (SQLEXPRESS) is running and then try .\SQLEXPRESS instead of (local). So as per your example: Server type: Database EngineServer name: .\SQLEXPRESSAuthentication: Windows Authentication Hope this helps Update: These instructions are because I assume you are running Express Edition not Dev/Std/Ent edition of SQL Server Try ensuring the appropriate protocols are enabled: Start the SQL Configuration Manager (ie: Start->Programs->SQL Server->Configuration Tools) Expand the SQL native Client configuration Click Client Protocols (you may have a 32-bit and a 64-bit, apply to both) Ensure Shared memory, TCP/IP, Named Pipes are enabled in that order Expand SQL Server Network Configuration Ensure Shared Memory for either SQLEXPRESS and/or MSSQLSERVER is enabled Click SQL Server Services Restart any running services You should now be able to login to the instance If you find you cannot login at all you may need to follow these instructions to get SQL Server into single user mode. See here for the full instructions from Microsoft. By default, sqlservr.exe is located at C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Binn. If a second instance of SQL Server is installed, a second copy of sqlservr.exe is located in a directory such as C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\binn. You can start one instance of SQL Server by using sqlservr.exe from a different instance, but SQL Server will start the version of the incorrect instance as well, including service packs, which may lead to unexpected results. To avoid this, use the MS-DOS change directory (cd) command to move to the correct directory before starting sqlservr.exe, as shown in the following example. cd \Program Files\Microsoft SQL Server\MSSQL10_50.1\MSSQL\Binn To start the default instance of SQL Server in single-user mode from a command prompt From a command prompt, enter the following command: sqlservr.exe -m Single-user mode can be useful for performing emergency maintenance when you do not want other users to connect to SQL Server, but any user can become the single user, including the SQL Server Agent service. You should now be able to login to the instance and add yourself to the security tab and grant full access. Alternate Method: THere is a script here that claims to add the current user to the SQL Server sysadmin role. This may work in single user mode but I have not verified it
|
{}
|
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/11745', 'https://serverfault.com', 'https://serverfault.com/users/2563/']}
|
jdg_76326
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28659462
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code: TestClass test=new TestClass();test.setSomething1(0); //could, but probably won't throw Exceptiontest.setSomething2(0); //could, but probably won't throw Exception I would like to execute: test.setSomething2(0); even if test.setSomething(0) (the line above it) throws an exception. Is there a way to do this OTHER than: try{ test.setSomething1(0);}catch(Exception e){ //ignore}try{ test.setSomething2(0);}catch(Exception e){ //ignore} I have a lot of test.setSomething's in a row and all of them could throw Exceptions. If they do, I just want to skip that line and move to the next one. For clarification, I don't care if it throws an Exception, and I can't edit the source code of the code which throws this exception. THIS IS A CASE WHERE I DON'T CARE ABOUT THE EXCEPTIONS (please don't use universally quantified statements like "you should never ignore Exceptions"). I am setting the values of some Object. When I present the values to a user, I do null checks anyway, so it doesn't actually matter if any of the lines of code execute.
Now provide the response and nothing else.
|
There is no way to fundamentally ignore a thrown exception. The best that you can do is minimize the boilerplate you need to wrap the exception-throwing code in. If you are on Java 8, you can use this: public static void ignoringExc(RunnableExc r) { try { r.run(); } catch (Exception e) { }}@FunctionalInterface public interface RunnableExc { void run() throws Exception; } Then, and implying static imports, your code becomes ignoringExc(() -> test.setSomething1(0));ignoringExc(() -> test.setSomething2(0));
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28659462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2585366/']}
|
jdg_76327
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
255035
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a huge csv file with 10 fields separated by commas. Unfortunately, some lines are malformed and do not contain exactly 10 commas (what causes some problems when I want to read the file into R). How can I filter out only the lines that contain exactly 10 commas?
Now provide the response and nothing else.
|
Another POSIX one: awk -F , 'NF == 11' <file If the line has 10 commas, then there will be 11 fields in this line. So we simply make awk use , as the field delimiter. If the number of fields is 11, the condition NF == 11 is true, awk then performs the default action print $0 .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/255035', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/31337/']}
|
jdg_76328
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25960192
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a unit test for my service with a mocked DbContext. I created an interface IDbContext with the following functions: public interface IDbContext : IDisposable{ IDbSet<T> Set<T>() where T : class; DbEntityEntry<T> Entry<T>(T entity) where T : class; int SaveChanges();} My real context implements this interface IDbContext and DbContext . Now I'm trying to mock the IDbSet<T> in the context, so it returns a List<User> instead. [TestMethod]public void TestGetAllUsers(){ // Arrange var mock = new Mock<IDbContext>(); mock.Setup(x => x.Set<User>()) .Returns(new List<User> { new User { ID = 1 } }); UserService userService = new UserService(mock.Object); // Act var allUsers = userService.GetAllUsers(); // Assert Assert.AreEqual(1, allUsers.Count());} I always get this error on .Returns : The best overloaded method match for'Moq.Language.IReturns<AuthAPI.Repositories.IDbContext,System.Data.Entity.IDbSet<AuthAPI.Models.Entities.User>>.Returns(System.Func<System.Data.Entity.IDbSet<AuthAPI.Models.Entities.User>>)'has some invalid arguments
Now provide the response and nothing else.
|
I managed to solve it by creating a FakeDbSet<T> class that implements IDbSet<T> public class FakeDbSet<T> : IDbSet<T> where T : class{ ObservableCollection<T> _data; IQueryable _query; public FakeDbSet() { _data = new ObservableCollection<T>(); _query = _data.AsQueryable(); } public virtual T Find(params object[] keyValues) { throw new NotImplementedException("Derive from FakeDbSet<T> and override Find"); } public T Add(T item) { _data.Add(item); return item; } public T Remove(T item) { _data.Remove(item); return item; } public T Attach(T item) { _data.Add(item); return item; } public T Detach(T item) { _data.Remove(item); return item; } public T Create() { return Activator.CreateInstance<T>(); } public TDerivedEntity Create<TDerivedEntity>() where TDerivedEntity : class, T { return Activator.CreateInstance<TDerivedEntity>(); } public ObservableCollection<T> Local { get { return _data; } } Type IQueryable.ElementType { get { return _query.ElementType; } } System.Linq.Expressions.Expression IQueryable.Expression { get { return _query.Expression; } } IQueryProvider IQueryable.Provider { get { return _query.Provider; } } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return _data.GetEnumerator(); } IEnumerator<T> IEnumerable<T>.GetEnumerator() { return _data.GetEnumerator(); }} Now my test looks like this: [TestMethod]public void TestGetAllUsers(){ //Arrange var mock = new Mock<IDbContext>(); mock.Setup(x => x.Set<User>()) .Returns(new FakeDbSet<User> { new User { ID = 1 } }); UserService userService = new UserService(mock.Object); // Act var allUsers = userService.GetAllUsers(); // Assert Assert.AreEqual(1, allUsers.Count());}
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25960192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1053611/']}
|
jdg_76329
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6463577
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Amazon RDS documentation (http://aws.amazon.com/rds/faqs/#53) specifies that "Amazon RDS generates an SSL certificate for each [MySQL] DB Instance". I haven't been able to find any documentation on how to find the certificates and the certificates are nowhere to be found in the management console. Where are the certificates?
Now provide the response and nothing else.
|
I found the solution here: https://forums.aws.amazon.com/thread.jspa?threadID=62110 . Download ca cert file from here: https://s3.amazonaws.com/rds-downloads/mysql-ssl-ca-cert.pem curl -O https://s3.amazonaws.com/rds-downloads/mysql-ssl-ca-cert.pem Connect to mysql: mysql -uusername -p --host=host --ssl-ca=mysql-ssl-ca-cert.pem Check that your connection is really encrypted: mysql> SHOW STATUS LIKE 'Ssl_cipher'; +---------------+------------+| Variable_name | Value |+---------------+------------+| Ssl_cipher | AES256-SHA |+---------------+------------+1 row in set (0.00 sec) Optionally force SSL for a specific user to connect to MySQL mysql> ALTER USER 'username'@'host|%' REQUIRE SSL
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6463577', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/216015/']}
|
jdg_76330
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
182550
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The command Do[Print[i], {i, 1, 3}] does what you expect: creates 3 output lines of 1 , 2 and 3 , in that order. If one merges these output cells, then evaluates the original Do command, the output is now 3 , then 2 , then 1 . When one deletes the output cells, the regular behavior returns. It isn't that the Do loop is running backwards. When one does "the same thing" with temp = {}Do[AppendTo[temp, i];Print[i], {i, 1, 3}];temp the order of the Print command is reversed, but the contents of temp are still {1, 2, 3} , not {3, 2, 1} . Why does this happen? Does Print write to a queue that is sometimes dumped in reverse order? I am running Mathematica 11.0.0 on Windows 10
Now provide the response and nothing else.
|
The reason this happens is described in ref/CellPrint , in the possible issues section: Setting CellAutoOverwrite and GeneratedCell can cause unexpected results: In[1]:= CellPrint[Cell["First", "Print", CellAutoOverwrite -> False, GeneratedCell -> False]];"Last"Out[2]= "Last"First Merging the cells sets these two options: it is no longer a generated cell, and, theoretically if the user merged the cells, the user doesn't want them overwritten. Tricky behavior, but I think it is within the scope of what's documented. A solution to improve the behavior would probably be appreciated, but for now it seems documented and expected.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/182550', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/4393/']}
|
jdg_76331
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36208675
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a a List of Objects. Those Objects have (amongst others) a private int Array (if it helps, I can transfer it into a List). There is a public Getter for this Array. All Arrays are the same size. I want to sort the Object based on their Arrays like this: Unsorted:{[0, 1, 4, 5], [0, 0, 2, 3], [0, 1, 1, 2]}Sorted:{[0, 0, 2, 3], [0, 1, 1, 2], [0, 1, 4, 5]} In Words (it's called lexicographical): compare the first int of each array if they are equal, compare the next int of each array (and so on) if they aren't equal the result of the comparison is the end result. I manage to search them for e.g. only the first element of the array with a normal Comparator but I don't know how to search them for all.
Now provide the response and nothing else.
|
A nice Java 8 solution is static final Comparator<CustomObject> COMPARATOR = (o1, o2) -> { int[] arr1 = o1.getArray(); int[] arr2 = o2.getArray(); return IntStream.range(0, arr1.length) .map(i -> Integer.compare(arr1[i], arr2[i])) .filter(i -> i != 0) .findFirst() .orElse(0);}; Then, given a List<CustomObject> , you can do list.sort(COMPARATOR); (The Comparator only works for arrays of the same length. You may want to modify it).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36208675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5760453/']}
|
jdg_76332
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
482063
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We can draw an infinite number of parabolas that pass through three given points $A$, $B$, $C$ (in that order). For each such parabola, we take the tangent lines at $A$ and $C$, and intersect them to get a point, $P$, which is called the "apex" of the parabola segment, in my business. What is the locus of this apex point $P$? I think it's a hyperbola that has the lines $AB$ and $BC$ as asymptotes. Here's a picture. The pink curve is the locus that interests me. I believe the point $P$ is called the "polar point" of the chord $AC$, sometimes. The answers to this question have some basic computations that might be useful. The locus of the parabola vertex seems to be much more complicated. It is studied in this question . Given the complexity of the vertex locus, I think it's remarkable that the apex locus is so simple. A parabola is, of course, a quadratic Bezier curve, and the apex point is its "middle" control point (if this helps). Can we generalize to rational quadratic Bezier curves (i.e. to conic section curves that are not necessarily parabolas)? Edit: The generalization to rational quadratic Bezier curves probably doesn't make sense. You can (often) pick any two additional points, say $D$ and $E$, and you'll be able to draw a conic through all five points $A, B, C, D, E$. So, the locus of the apex point is some region of the plane, not a curve.
Now provide the response and nothing else.
|
The locus After doing some computations in the spirit (and building on the code) of my vertex anser , I found the locus of $P$ to be a quadric $$ a\,x^2 + b\,y^2 + c\,xy + d\,x + e\,y + f = 0 $$ with the following parameters: \begin{align*}a&=4\left(-A_y\,B_y + B_y^2 + A_y\,C_y - B_y\,C_y\right)\\b&=4\left(-A_x\,B_x + B_x^2 + A_x\,C_x - B_x\,C_x\right)\\c&=4\left(A_y\,B_x + A_x\,B_y - 2\,B_x\,B_y - A_y\,C_x + B_y\,C_x - A_x\,C_y + B_x\,C_y\right)\\d&=4\left(A_y\,B_x\,B_y - A_x\,B_y^2 + A_y\,B_y\,C_x - B_y^2\,C_x - 2\,A_y\,B_x\,C_y + A_x\,B_y\,C_y + B_x\,B_y\,C_y\right)\\e&=4\left(-A_y\,B_x^2 + A_x\,B_x\,B_y + A_y\,B_x\,C_x - 2\,A_x\,B_y\,C_x + B_x\,B_y\,C_x + A_x\,B_x\,C_y - B_x^2\,C_y\right)\\f&=A_y^2\,B_x^2 - 2\,A_x\,A_y\,B_x\,B_y + A_x^2\,B_y^2 - 2\,A_y^2\,B_x\,C_x + 2\,A_x\,A_y\,B_y\,C_x \\&\quad{}- 2\,A_y\,B_x\,B_y\,C_x + 2\,A_x\,B_y^2\,C_x + A_y^2\,C_x^2 - 2\,A_y\,B_y\,C_x^2 + B_y^2\,C_x^2 \\&\quad{}+ 2\,A_x\,A_y\,B_x\,C_y + 2\,A_y\,B_x^2\,C_y - 2\,A_x^2\,B_y\,C_y - 2\,A_x\,B_x\,B_y\,C_y \\&\quad{}- 2\,A_x\,A_y\,C_x\,C_y + 2\,A_y\,B_x\,C_x\,C_y + 2\,A_x\,B_y\,C_x\,C_y - 2\,B_x\,B_y\,C_x\,C_y \\&\quad{}+ A_x^2\,C_y^2 - 2\,A_x\,B_x\,C_y^2 + B_x^2\,C_y^2\end{align*} Assumption confirmed Using this formulation, I could verify your assumption: the conic is indeed a hyperbola, with the lines $AB$ and $CB$ as asymptotes. The way to check this is by ensuring that the lines are tangents, and that they touch the conic at infinity. Special location In a comment below, you mention that you might assume $B_x=B_y=0$. With that your conic will become \begin{align*}a&=4\,A_y\,C_y\\b&=4\,A_x\,C_x\\c&=-4\left(A_y\,C_x + A_x\,C_y\right)\\d&=0\\e&=0\\f&=A_y^2\,C_x^2 - 2\,A_x\,A_y\,C_x\,C_y + A_x^2\,C_y^2\end{align*} The formula is indeed a lot easier, so it might be a good starting point for a geometric description of the curve. For example, it is readily apparent that this hyperbola will be symmetric around the origin. But we already knew $B$ to be the center due to the asymptotes. You can even take this one step further and, via an affine transformation, consider only the case where \begin{align*}A &= \begin{pmatrix}1\\0\end{pmatrix} &B &= \begin{pmatrix}0\\0\end{pmatrix} &C &= \begin{pmatrix}0\\1\end{pmatrix}\end{align*} You end up with the hyperbola $$4xy=1$$ From this you can e.g. conclude that the midpoint between $A$ and $C$ will also lie on that hyperbola. It will of course correspond to a Bézier curve which passes through $B$ for some $t<0$ or $t>1$. You can take that midpoint and reflect it in $B$ to obtain a point on the segment of the hyperbola you get for $0\le t\le 1$. This construction is invariant under affine transformations, so it still holds for the general case. Two asymptotes plus one point on the hyperbola amounts to five real degrees of freedom, so this should be enough to uniquely define your hyperbola. Summary To sum it up: the locus of the apex $P$ is the unique hyperbola with asymptotes $AB$ and $CB$ which passes through the midpoint between $A$ and $C$. The portion of it which corresponds to parabolas where $B$ lies between $A$ and $C$, i.e. is obtained in the Bézier curve for $0<t<1$, is the component of the hyperbola which does not contain that midpoint. It does contain the point obtained by reflecting that midpoint in $B$. The code Here is the sage code I used to obtain this representation: # Define multivariate polynomial ring and pointsPR1.<A_x, A_y, B_x, B_y, C_x, C_y, P_x, P_y, t> = QQ[]A = vector(PR1, [A_x, A_y, 1])B = vector(PR1, [B_x, B_y, 1])C = vector(PR1, [C_x, C_y, 1])P = vector(PR1, [P_x, P_y, 1])# Quadratic Bézier curve parametrized by tBt = (1-t)^2*A + 2*(1-t)*t*P + t^2*Cr1 = (Bt[0] - B_x).resultant(Bt[1] - B_y, t) # eliminate t# Obtain coefficients for coordinates of Pc1 = vector(PR1, flatten([list(i.polynomial(i.parent()(P_y))) for i in r1.polynomial(P_x)]))f, e, b, d, c, a = c1# Print resultfmt1 = [str(i/4).replace('*','\\,') for i in [a, b, c, d, e]] + [str(f)]fmt2 = [i + '&=4\\left(' + j + '\\right)' for i, j in zip('abcdef', fmt1)]fmt2[-1] = 'f&='+fmt1[-1]print('\\\\\n'.join(fmt2))# Check whether lines AB and CB are asymptotes of the HyperbolaHyperbola = Matrix([ [2*a, c, d], [c, 2*b, e], [d, e, 2*f]])def onConic(p, c=Hyperbola): return (p.row()*c*p.column())[0,0].is_zero()asymptotes = [B.cross_product(p) for p in [A, C]]infLine = vector(QQ, [0,0,1])# asymptotes are tangents to the hyperbola:assert(all(onConic(i, Hyperbola.adjoint()) for i in asymptotes))# asymptotes touch the hyperbola at infinity:assert(all(onConic(i.cross_product(infLine)) for i in asymptotes))# The midpoint between A and C is on the hyperbola:assert(onConic(A+C)) Originally I had more complicated code which did not rely on the interpretation as a Bézier curve. The result was the same, though. Generalizing to rational case Can we generalize to rational quadratic Bezier curves (i.e. to conic section curves that are not necessarily parabolas)? A parabola has four real degrees of freedom. If you choose a non-rational Bézier curve, you have two real degrees of freedom, but with these you not only specify a parabola but also select a start point and an end point on that parabola, so the degrees of freedom match. A conic in general has five real degrees of freedom. So if you want it to pass through three given points, that still leaves a two-parameter family of corresponding conics. Therefore your locus will not be a single curve, but either the whole plane or some portion of it. You can define a conic using five points through which it should pass. In addition to your $A,B,C$ you might use two more points, which you move close to the end points $A$ and $C$. By making them arbitrary close (i.e. computing some limit), you can use these control points to exactly and arbitrarily determine the direction of the tangents in $A$ and $C$. Therefore you can choose any point $P$ in the plance, and find a conic through $A,B,C$ which will have $P$ as its apex. In this sense, the whole plane will be your locus. I'm not completely sure whether a rational Bézier curve can be defined in such a way that it passes through infinity, but I believe that to be the case. If not, then there might be cases where the resulting conic would be a hyperbola, and $A,B,C$ are not all three in the same of its components. This might result in a restriction to part of the plane.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/482063', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/31744/']}
|
jdg_76333
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4583285
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a question. I have 3 jsp page.The first is a menu with 2 button.When I click the first button I want to open the second jsp page.When I click the second button I want to open the third jsp page. Can you help me? I must use a servlet(it's not a problem, i know it)? <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%><!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>Insert title here</title></head> <body> <form name="TrainerMenu" action="TrainerMenu" method="get"> <h1>Benvenuto in LESSON! Scegli l'operazione da effettuare:</h1> <input type="button" value="Creazione Nuovo Corso" name="CreateCourse" /> <input type="button" value="Gestione Autorizzazioni" name="AuthorizationManager" /> </form> </body></html>
Now provide the response and nothing else.
|
You have several options, I'll start from the easiest: 1- Change the input buttons to links, you can style them with css so they look like buttons: <a href="CreateCourse.jsp">Creazione Nuovo Corso</a> instead of <input type="button" value="Creazione Nuovo Corso" name="CreateCourse" /> 2- Use javascript to change the action of the form depending on the button you click: <input type="button" value="Creazione Nuovo Corso" name="CreateCourse" onclick="document.forms[0].action = 'CreateCourse.jsp'; return true;" /> 3- Use a servlet or JSP to handle the request and redirect or forward to the appropriate JSP page.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4583285', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/315378/']}
|
jdg_76334
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
257579
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Andrew Ng's machine learning course, he uses this formula: $\nabla_A tr(ABA^TC) = CAB + C^TAB^T$ and he does a quick proof which is shown below: $\nabla_A tr(ABA^TC) \\= \nabla_A tr(f(A)A^TC) \\= \nabla_{\circ} tr(f(\circ)A^TC) + \nabla_{\circ}tr(f(A)\circ^T C)\\=(A^TC)^Tf'(\circ) + (\nabla_{\circ^T}tr(f(A)\circ^T C)^T \\ = C^TAB^T + (\nabla_{\circ^T}tr(\circ^T)Cf(A))^T \\=C^TAB^T + ((Cf(A))^T)^T \\= C^TAB^T + CAB$ The proof seems very dense without any comments and I'm having trouble understanding it. What exactly happened from second to third equality?
Now provide the response and nothing else.
|
There is a subtle but heavy abuse of the notation that renders many of the steps confusing. Let's address this issue by going back to the definitions of matrix multiplication, transposition, traces, and derivatives. For those wishing to omit the explanations, just jump to the last section "Putting It All Together" to see how short and simple a rigorous demonstration can be. Notation and Concepts Dimensions For the expression $ABA^\prime C$ to make sense when $A$ is an $m\times n$ matrix, $B$ must be a (square) $n\times n$ matrix and $C$ must be an $m\times p$ matrix, whence the product is an $m\times p$ matrix. In order to take the trace (which is the sum of diagonal elements, $\operatorname{Tr}(X)=\sum_i X_{ii}$ ), then $p=m$ , making $C$ a square matrix. Derivatives The notation " $\nabla_A$ " appears to refer to the derivative of an expression with respect to $A$ . Ordinarily, differentiation is an operation performed on functions $f:\mathbb{R}^N\to\mathbb{R}^M$ . The derivative at a point $x\in \mathbb{R}^N$ is a linear transformation $Df(x):\mathbb{R}^N\to\mathbb{R}^M$ . Upon choosing bases for these vector spaces, such a transformation can be represented as an $M\times N$ matrix. That is not the case here! Matrices as vectors Instead, $A$ is being considered as an element of $\mathbb{R}^{mn}$ : its coefficients are being unrolled (usually either row by row or column by column) into a vector of length $N=mn$ . The function $f(A)=\operatorname{Tr}(ABA^\prime C)$ has real values, whence $M=1$ . Consequently, $Df(x)$ must be a $1\times mn$ matrix: it's a row vector representing a linear form on $\mathbb{R}^{mn}$ . Howver, the calculations in the question use a different way of representing linear forms: their coefficients are rolled back up into $m\times n$ matrices. The trace as a linear form Let $\omega$ be a constant $m\times n$ matrix. Then, by definition of the trace and of matrix multiplication, $$\eqalign{\operatorname{Tr}(A\omega^\prime) &= \sum_{i=1}^m(A\omega^\prime)_{ii} = \sum_{i=1}^m\left(\sum_{j=1}^n A_{ij}(\omega^\prime)_{ji}\right) = \sum_{i,j} \omega_{ij}A_{ij}}$$ This expresses the most general possible linear combination of the coefficients of $A$ : $\omega$ is a matrix of the same shape as $A$ and its coefficient in row $i$ and column $j$ is the coefficient of $A_{ij}$ in the linear combination. Because $\omega_{ij}A_{ij}=A_{ij}\omega_{ij}$ , the roles of $\omega$ and $A$ may switched, giving the equivalent expression $$\sum_{i,j} \omega_{ij}A_{ij} = \operatorname{Tr}(A\omega^\prime) = \operatorname{Tr}(\omega A^\prime).\tag{1}$$ By identifying a constant matrix $\omega$ with either of the functions $A\to \operatorname{Tr}(A \omega^\prime)$ or $A\to \operatorname{Tr}(\omega A^\prime)$ , we may represent linear forms on the space of $m\times n$ matrices as $m\times n$ matrices. (Do not confuse these with derivatives of functions from $\mathbb{R}^n$ to $\mathbb{R}^m$ !) Computing a Derivative The definition Derivatives of many of the matrix functions encountered in statistics are most easily and reliably computed from the definition: you don't really need to resort to complicated rules of matrix differentiation. This definition says that $f$ is differentiable at $x$ if and only if there is a linear transformation $L$ such that $$f(x+h) - f(x) = Lh + o(|h|)$$ for arbitrarily small displacements $h\in \mathbb{R}^N$ . The little-oh notation means that the error made in approximating the difference $f(x+h)-f(x)$ by $Lh$ is arbitrarily smaller than the size of $h$ for sufficiently small $h$ . In particular, we may always ignore errors that are proportional to $|h|^2$ . The calculation Let's apply the definition to the function in question. Multiplying, expanding, and ignoring the term with a product of two $h$ 's in it, $$\eqalign{f(A+h)-f(A) &= \operatorname{Tr}((A+h)B(A+h)^\prime C) - \operatorname{Tr}(ABA^\prime C) \\&= \operatorname{Tr}(hBA^\prime C) +\operatorname{Tr}(ABh^\prime C) + o(|h|).\tag{2}}$$ To identify the derivative $L=Df(A)$ , we must get this into the form $(1)$ . The first term on the right is already in this form, with $\omega^\prime = BA^\prime C$ . The other term on the right has the form $\operatorname{Tr}(Xh^\prime C)$ for $X=AB$ . Let's write this out: $$\operatorname{Tr}(Xh^\prime C) = \sum_{i=1}^m\sum_{j=1}^n\sum_{k=1}^m X_{ij} h_{kj} C_{ki} = \sum_{i,j,k}h_{kj} \left(C_{ki}X_{ij}\right) =\operatorname{Tr}((CX)h^\prime).\tag{3}$$ Recalling $X=AB$ , $(2)$ can be rewritten $$f(A+h) - f(A) = \operatorname{Tr}(h\, BA^\prime C\,) + \operatorname{Tr}(CAB\, h^\prime\,)+o(|h|).$$ It is in this sense that we may consider the derivative of $f$ at $A$ to be $$Df(A) = (BA^\prime C)^\prime + CAB = C^\prime A B^\prime + CAB,$$ because these matrices play the roles of $\omega$ in the trace formulas $(1)$ . Putting It All Together Here, then, is a complete solution. Let $A$ be an $m\times n$ matrix, $B$ an $n\times n$ matrix, and $C$ an $m\times m$ matrix. Let $f(A) = \operatorname{Tr}(ABA^\prime C)$ . Let $h$ be an $m\times n$ matrix with arbitrarily small coefficients. Because (by identity $(3)$ ) $$\eqalign{f(A+h) - f(A) &= \operatorname{Tr}(hBA^\prime C) +\operatorname{Tr}(ABh^\prime C) + o(|h|) \\&=\operatorname{Tr}(h(C^\prime A B^\prime)^\prime + (CAB)h^\prime) + o(|h|),}$$ $f$ is differentiable and its derivative is the linear form determined by the matrix $$C^\prime A B^\prime + CAB.$$ Because this takes only about half the work and involves only the most basic manipulations of matrices and traces (multiplication and transposition), it has to be considered a simpler--and arguably more perspicuous--demonstration of the result. If you really want to understand the individual steps in the original demonstration, you might find it fruitful to compare them to the calculations shown here.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/257579', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/138067/']}
|
jdg_76335
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5369980
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a setup with Ninject and NHibernate like this. Now, if I have this scenario.. class HomeController : Controller{ [Inject] public ISession Session { get; set; }} This works properly. But if I make another class ... class QueryObject{ [Inject] public ISession Session { get; set; }}// .. somewhere else in my program.var test = new QueryObject().Execute(); The ISession is null! This is not just with ISession, it is with anything I try to inject. Here is my SessionModule: public class SessionModule : Ninject.Modules.NinjectModule{ private static ISessionFactory sessionFactory; public override void Load() { Bind<ISessionFactory>() .ToMethod(c => CreateSessionFactory()) .InSingletonScope(); Bind<ISession>() .ToMethod(c => OpenSession()) .InRequestScope() .OnActivation(session => { session.BeginTransaction(); session.FlushMode = FlushMode.Commit; }) .OnDeactivation(session => { if (session.Transaction.IsActive) { try { session.Transaction.Commit(); } catch { session.Transaction.Rollback(); } } }); } /// <summary> /// Create a new <see cref="NHibernate.ISessionFactory"/> to connect to a database. /// </summary> /// <returns> /// A constructed and mapped <see cref="NHibernate.ISessionFactory"/>. /// </returns> private static ISessionFactory CreateSessionFactory() { if (sessionFactory == null) sessionFactory = Persistence.SessionFactory.Map (System.Web.Configuration .WebConfigurationManager .ConnectionStrings["Local"] .ConnectionString ); return sessionFactory; } /// <summary> /// Open a new <see cref="NHibernate.ISession"/> from a <see cref="NHibernate.ISessionFactory"/>. /// </summary> /// <returns> /// A new <see cref="NHibernate.ISession"/>. /// </returns> private static ISession OpenSession() { // check to see if we even have a session factory to get a session from if (sessionFactory == null) CreateSessionFactory(); // open a new session from the factory if there is no current one return sessionFactory.OpenSession(); }}
Now provide the response and nothing else.
|
It's working for controllers because you're instantiating them with Ninject (via the controller factory). When you're doing new QueryObject().Execute(); your're not using Ninject to instantiate your QueryObject. The .NET framework itself has no knowledge of injecting properties. You need to use the Ninject Kernel to resolve your QueryObject. Something like this should do it: IKernel kernel = new StandardKernel(new SessionModule());var queryObject = kernel.Get<QueryObject>();queryObject.Execute(); The kernel will then instantiate a new QueryObject with all dependencies properly set. For this to work, you'll have to register the QueryObject: Bind<QueryObject>().ToSelf(); This tells Ninject to always return a instance of an QueryObject when you're doing kernel.Get<QueryObject>(); This is done in your SessionModule . I recommend reading Modules and the Kernel from the docs. ≈
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5369980', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84685/']}
|
jdg_76336
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4528347
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to clear (delete) all the widgets in a layout? self.plot_layout = QtGui.QGridLayout()self.plot_layout.setGeometry(QtCore.QRect(200,200,200,200))self.root_layout.addLayout(self.plot_layout)self.plot_layout.addWidget(MyWidget()) Now I want to replace the widget in plot_layout with a new widget. Is there an easy way to clear all the widgets in plot_layout ? I don't see any method such.
Now provide the response and nothing else.
|
After a lot of research (and this one took quite time, so I add it here for future reference), this is the way I found to really clear and delete the widgets in a layout: for i in reversed(range(layout.count())): layout.itemAt(i).widget().setParent(None) What the documentation says about the QWidget is that: The new widget is deleted when its parent is deleted. Important note: You need to loop backwards because removing things from the beginning shifts items and changes the order of items in the layout. To test and confirm that the layout is empty: for i in range(layout.count()): print i There seems to be another way to do it. Instead of using the setParent function, use the deleteLater() function like this: for i in reversed(range(layout.count())): layout.itemAt(i).widget().deleteLater() The documentation says that QObject.deleteLater (self) Schedules this object for deletion. However, if you run the test code specified above, it prints some values. This indicates that the layout still has items, as opposed to the code with setParent .
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4528347', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/375874/']}
|
jdg_76337
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59583
|
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across the following equation while studying electrochemistry (" Physical Chemistry " by Wallwork and Grant): The solubility $s$ of a sparingly soluble salt can be determined from conductivity measurements, provided that the molar conductivities of the ions at the temperature of these measurements are known to enable $\Lambda_0$ to be calculated. Since the saturated solution is dilute, $\Lambda_0 \approx \Lambda = \kappa/c$, where $c = s~(\pu{mol m-3})$. The conductivity $\kappa$ is the conductivity of a saturated solution of the salt in conductance water minus that of the water alone. The solubility $s$ is calculated from: $$s = \kappa/\Lambda_0 \tag{8.36}$$ It's mentioned that κ is the conductivity of the saturated solution of the salt in conductance water , minus that of the water alone. Now the thing is, from what I've read, conductance water has NO (or negligible ) conductivity. In that case, what's the point of having to subtract the conductivity of conductance water? Or is there something I've misunderstood here?
Now provide the response and nothing else.
|
Nomenclature changes and authors don't always follow the rules. The more typical use now is $\Lambda_m$ for the molar conductivity and $\Lambda_m^0$ for the limiting molar conductivity. The molar conductivity, $\Lambda_m$, is given by the equation $\Lambda_m = \dfrac{\kappa}{c}\tag{1}$ where: $\kappa$ is the specific conductance (which is what you measure...) $c$ is the concentration (which you know because of how the solution was made) Now typically the specific conductance, $\kappa$, is plotted against concentration. For dilute solutions, there will be a linear relationship where the slope of the line is the limiting molar conductivity, $\Lambda_m^0$. However at high concentrations the relationship becomes non-linear and the actual conductance is less than what the linear relationship would predict. That is because of the formation of cation and anion clusters. The local effect is thus to make the cluster uncharged (or charged less is the charges don't balance to zero). Now the author of your book specified a sparingly soluble salt, so at low concentrations $\Lambda_m \approx \Lambda_m^0$, and we can substitute for $\Lambda_m$ to get: $\Lambda_m^0 = \dfrac{\kappa}{c}\tag{2}$ which can be rearranged to give: $c = \dfrac{\kappa}{\Lambda_m^0}\tag{3}$ If the conductivity of the solution is much greater than that of pure water then this is fine. The measured specific conductance will be directly proportional to the concentration since $1/\Lambda_m^0$ is a constant. However the author is talking about a salt of limited solubility, so the conductivity of pure water is not negligible. Going back to the plot of specific conductance, $\kappa$, against concentration, $c$, we can determine that at 0 concentration there is some conductance which I'll call $\kappa_w$ which is the conductance of pure water. So for a salt with limited solubility the equation is: $c = \dfrac{\kappa - \kappa_w}{\Lambda_m^0}\tag{4}$ If you look back at equation (2) you can see why equation (3) doesn't work for solutions of very low conductivity. When $c=0$ equation (2) blows up since you can't divide by $0$. So the author used $\kappa$ in two different ways which is very poor writing. We could define $\kappa'= \kappa - \kappa_w\tag{5}$ then $c = \dfrac{\kappa'}{\Lambda_m^0}\tag{6}$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/59583', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/33991/']}
|
jdg_76338
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3681216
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
after installing django I tried django-admin.py startproject mysite and that worked, then I got a simple site working and I wanted to start on something real, so I tried django-admin.py startproject newsite and nothing happened. Whenever I try the command nothing happens now.. any idea what is wrong?
Now provide the response and nothing else.
|
For anyone stumbling across this now, this problem is a result of Windows not obeying the #!C:\Path\To\Virtualenv\Scripts\Python.exe hashbang at the top of django-admin.py, and therefore running it with the wrong python.exe (evidently a virtualenv bug). However, with virtualenv active, you can use the following command, which will result in the correct python being used, and everything being ok: python C:\Path\To\Virtualenv\Scripts\django-admin.py startproject <project_name>
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3681216', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/219166/']}
|
jdg_76339
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
81835
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I had some 2N2222 transistors and wanted to use them as a switch (to switch a 6V-12v from a 5V)but when I apply 5V to the base the emitter only goes up to approximate same value (5V). The same happen when I applied 3V on base - got 3v on emitter. So, my question is, does the voltage applied on base limit the voltage on emitter? simulate this circuit – Schematic created using CircuitLab
Now provide the response and nothing else.
|
Without a circuit diagram its impossible to answer the question specifically. If you use the emitter follower circuit with the load between emitter and ground the maximum voltage across the load will be base voltage - 0.6V due to the base-emitter junction. For a 5V input you get 4.4V output. If you use the common emitter circuit with the load between the collector and +12V you will get a voltage at the collector between 12V (transistor OFF) and 0.1V (approx) which is the saturation voltage (Vsat) when the transistor in turned ON. Edit: To answer comment. The value of R1 sets the size of base current. R1 is simply a label and may have different values in different circuits . In this case there are two different (separate) circuits so R1 will have a different value in each. The transistors are actually acting in the same way. The base-emitter junction acts like any silicon diode ( its the 'PN' part of the NPN transistor). For current to flow through the junction the base must be about 0.6V higher than the emitter. This is true in both circuits. The differences. In the case of the emitter follower circuit the voltage across the load can RISE until this condition (Vbe = 0.6V) is satisfied. The transistor is not fully turned ON and the voltage across the transistor (Vce) will be 12 - 4.4 = 7.6V. This means the transistor will have to dissipate POWER (7.6 x Ic) and could get quite warm or possibly overheat. In the case of the common emitter circuit the emitter voltage is FIXED at 0V and will limit the base voltage to 0.6V. When fully turned ON the voltage across the transistor is the saturation voltage (Vsat) - typically about 0.1 - 0.3V depending on transistor type and collector current. The rest of the voltage is dropped across the load. The power dissipated by the transistor is minimal (Vsat * Ic). To calculate the value of R1 we use Ohm's law (V=IR) and the relationship between base and collector current for the transistor, Ic = current gain x Ib. Example calculations : Suppose the load current is 500mA and the typical current gain of the transistor is 250. That would give a value of 500/250 = 2mA for the base current. (you get this value from the datasheet) In the case of the emitter follower the voltage drop needs to be a small as possible. It has 5V at the input but we want as much voltage as possible at the base. Lets say 4.9V at the base so R1 has to drop 0.1V @ 2mA. This gives a value for R1 of 50 ohms. ( In theory you may not even need a resistor but in the event of a short circuit load the maximum current into the base will be 88mA. ) In the case of the common emitter with a 5V input R1 needs to drop 5 - 0.6 volts. (= 4.4V) because the base voltage is fixed at 0.6V. At a base current of 2mA this gives a value for R1 = 2200 ohms.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/81835', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/25527/']}
|
jdg_76340
|
Subsets and Splits
SQL Console for PrimeIntellect/stackexchange-question-answering
Retrieves samples with 'mathoverflow.net' in the prompt, providing a basic subset for inspection but with limited analytical value.
SQL Console for PrimeIntellect/stackexchange-question-answering
Retrieves 10 samples where the prompt mentions Mathematica Stack Exchange, offering basic filtering to locate specific content.
SQL Console for PrimeIntellect/stackexchange-question-answering
This query retrieves a limited number of rows where the prompt contains a specific URL, providing basic filtering that offers minimal insight into the broader dataset patterns.
SQL Console for PrimeIntellect/stackexchange-question-answering
This query retrieves 10 samples from the dataset where the prompt contains 'dsp.stackexchange.com', offering a basic filtering of data points related to that specific domain.
SQL Console for PrimeIntellect/stackexchange-question-answering
Retrieves a sample of rows where the prompt contains 'cs.stackexchange.com', providing a basic filter without extensive insight.
Electronics Q&A Subset
Retrieves a limited number of samples that contain a specific URL, providing only raw data filtered by that URL.
StackExchange Stats Questions
Retrieves 100 entries containing 'stats.stackexchange.com' in the prompt, which is useful for finding specific content but lacks broader analytical value.
Math StackExchange Questions
Retrieves 100 records from the train dataset where the prompt contains 'math.stackexchange.com', providing a basic filtered overview of specific entries.
Physics SE Questions
Retrieves 100 samples containing the word 'physics.stackexchange.com' in the prompt, offering basic filtering without deep analysis.