source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
165462
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Take a set of points in the plane and draw a circle of a given radius at each point. The resulting pattern of circles may look something like this: Graphics[Circle[#, 9/10] & /@ Tuples[Range[7], 2], PlotRange -> {{1, 6}, {1, 6}}] Notice all of the small regions that are formed by these intersecting circles. How can I get each of these regions as say, a list of Region objects? EDIT : I've adapted the code by kglr to compute what I want. Here is the resulting code: tuples = Tuples[Range[4], 2];disks = Disk[#, 0.9] & /@ tuples;intersecting[d_] := DeleteCases[Select[disks, ! RegionDisjoint[d, #] &], d]funcs[a_, b_] := If[a == b, {BooleanCountingFunction[{a}, b]}, Thread[BooleanConvert[BooleanCountingFunction[{a}, b]] /. Or -> List]]regs[n_] := DeleteDuplicatesBy[ Flatten[ Function[x, Select[ RegionIntersection[ x, BooleanRegion[#, intersecting[x]] ] & /@ funcs[n, Length[intersecting[x]]] , Quiet[RegionDimension[#]] == 2 & ] ] /@ disks ], RegionCentroid ] The resulting regions look like this: r = regs /@ Range[3];colours = {Red, Green, Blue};Show[Flatten[(Function[x, Region[x, BaseStyle -> FaceForm[colours[[#]]]]] /@ r[[#]]) & /@ Range[3]]]; Now provide the response and nothing else.
Update: Disjoint regions corresponding to the intersection of exactly k disks for k = 1, 2, 3, 4 . Computation of 7-by-7 example is too large for free Wolfram Cloud, so I use a smaller example with 16 disks. Using Carl's method for identifying the neighbors of each disk tuples = Tuples[Range @ 4, 2];disks = Disk[#, 9/10] & /@ tuples;circles = Circle[#, 9/10] & /@ tuples;ClearAll[nF, boolReg]nF[x_] := Module[{d = DeleteCases[disks, x]}, Pick[d, RegionDisjoint[#, x] & /@ d, False]]boolReg[n_] := Module[{bCF = BooleanCountingFunction[{n}, Length @ nF @ #]}, DeleteCases[RegionIntersection[#, BooleanRegion[bCF, nF @ #]], _EmptyRegion]] &r1 = Show[Region[#, BaseStyle -> Yellow]&/@boolReg[0] /@ disks, Graphics[{Gray, circles}]] r2 = Show[Graphics[circles], Region[#, BaseStyle -> Blue] & /@ boolReg[1] /@ disks] r3 = Show[Graphics[circles], Region[#, BaseStyle -> Red] & /@ boolReg[2] /@ disks] r4 = Show[Graphics[circles], Region[#, BaseStyle -> Green] & /@ boolReg[3] /@ disks] Show[r1, r2, r3, r4] Original answer: intersections = DeleteCases[RegionIntersection @@@ Subsets[(Disk[#, 9/10] & /@ Tuples[Range[7], 2]), {2,4}], _EmptyRegion];Show[Graphics[{Opacity[.5, Yellow], EdgeForm[{Gray,Thick}], Disk[#, 9/10] & /@ Tuples[Range[7], 2]}], RegionPlot[#, PlotStyle -> RandomColor[]]&/@intersections] Colorcoding points by the number of disks a point lies in: ints = DeleteCases[RegionIntersection @@@ (Subsets[(Disk[#, 9/10] & /@ Tuples[Range[7], 2]), {#}]), _EmptyRegion]& /@ {2, 3, 4};colors = {Red, Green, Blue};ints2=Join @@ (Thread /@ Transpose[{ints, colors}]);Legended[Show[Graphics[{Opacity[.5, Yellow], EdgeForm[{Gray, Thick}], Disk[#, 9/10] & /@ Tuples[Range[7], 2]}], RegionPlot[#, PlotStyle -> #2]& @@@ ints2, PlotRange -> {{1, 6}, {1, 6}}], SwatchLegend[{Yellow, Red, Green, Blue}, {"1", "2","3","4"}]]
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/165462', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/16130/']}
jdg_75341
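For readers who want to experiment with the same idea outside Mathematica, here is a rough Python sketch using the shapely library. It is not a port of the answer above; it only illustrates the underlying operations (building disks as buffered points and collecting their pairwise two-dimensional intersections). The grid size, radius, and variable names are choices made for this sketch, not taken from the answer.

# A rough Python/shapely sketch of the core idea from the answer above:
# build disks as buffered points and collect their pairwise intersections.
from itertools import combinations
from shapely.geometry import Point

radius = 0.9
centers = [(x, y) for x in range(1, 5) for y in range(1, 5)]   # 4x4 grid, as in the edited example
disks = [Point(c).buffer(radius) for c in centers]             # polygonal approximations of the disks

# Pairwise intersections, keeping only two-dimensional (non-empty) pieces
pieces = []
for a, b in combinations(disks, 2):
    inter = a.intersection(b)
    if not inter.is_empty and inter.area > 0:
        pieces.append(inter)

print(len(pieces), "pairwise lens-shaped regions found")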
stackexchange
llm_judgeable_groundtruth_similarity
3114901
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove that:Let $f\in C[-1,1]$ $$\lim_{n\to\infty}\frac{\int_{-1}^{1}{f(x)(1-x^2)^n}dx}{\int_{-1}^{1}{(1-x^2)^n}dx}=f(0)$$ My attempt: $$\begin{eqnarray}\lim_{n\to\infty}\frac{\int_{-1}^{1}{f(x)(1-x^2)^n}dx}{\int_{-1}^{1}{(1-x^2)^n}dx}=&\lim_{n\to\infty}\frac{\int_{0}^{1}{f(x)+f(-x)(1-x^2)^n}dx}{2\int_{0}^{1}{(1-x^2)^n}dx}\\&=\lim_{n\to\infty}\frac{(f(\xi)+f(-\xi))(\int_{0}^{1}{1-x^2)^n}dx}{2\int_{0}^{1}{(1-x^2)^n}dx}\end{eqnarray}$$ But I can't complete this. Any help would be greatly appreciated :-) Now provide the response and nothing else.
Here is a standard argument dealing with this type of problem, often known as approximation-to-the-identity . Here is a slightly general claim, which does not harm the essence of the argument. Proposition. Let $K_n \in C([-1,1])$ and assume that $(K_n)$ satisfy the following assumtions: Nonnegativity. $K_n(x) \geq 0$ for all $x \in [-1, 1]$ and $n \geq 1$ . Normalization. $\int_{-1}^{1} K_n(x) \, \mathrm{d}x = 1$ for all $n \geq 1$ . Concentration. For any $\delta > 0$ , $\lim_{n\to\infty} \int_{|x|>\delta} K_n(x) \, \mathrm{d}x = 0$ . Then for any $f \in C([-1, 1])$ , we have $$ \lim_{n\to\infty} \int_{-1}^{1} f(x)K_n(x) \, \mathrm{d}x = f(0). $$ In other words, $K_n(x)$ approximates the Dirac delta $\delta(x)$ , which is "the identity" of the convolution (hence the name approximation-to-the-identity). Before the proof, let us check that this can be applicable to OP's problem. Indeed, set $$K_n(x) = \frac{(1-x^2)^n}{\int_{-1}^{1} (1-t^2)^n \, \mathrm{d}t}. $$ Then the conditions 1 and 2 are obvious. For the concentration, notice that $(1 - t^2)^n \geq (1 - |t|)^n$ , and so, $$ \int_{|x|>\delta} K_n(x) \, \mathrm{d}t\leq \frac{(1 - \delta^2)^n}{\int_{0}^{1}(1 - t)^n \, \mathrm{d}t} = (n+1)(1 - \delta^2)^n \xrightarrow[n\to\infty]{} 0. $$ Therefore $\int_{-1}^{1} f(x)K_n(x) \, \mathrm{d}x \to f(0)$ by the conclusion of the proposition. Proof of Proposition. For any $\epsilon > 0$ , find $\delta > 0$ so that $|x| < \delta$ implies $|f(x) - f(0)| < \epsilon$ . Also, choose a bound $M > 0$ of $f$ . Then \begin{align*}\left| \int_{-1}^{1} f(x)K_n(x) \, \mathrm{d}x - f(0) \right|&\leq \int_{-1}^{1} |f(x) - f(0)|K_n(x) \, \mathrm{d}x \\&= \int_{|x|<\delta} |f(x) - f(0)|K_n(x) \, \mathrm{d}x + \int_{|x|\geq\delta} |f(x) - f(0)|K_n(x) \, \mathrm{d}x \\&\leq \int_{-1}^{1} \epsilon K_n(x) \, \mathrm{d}x + \int_{|x|\geq\delta} 2M K_n(x) \, \mathrm{d}x \\&= \epsilon + 2M \int_{|x|\geq\delta} K_n(x) \, \mathrm{d}x.\end{align*} So we have $$ \limsup_{n\to\infty} \left| \int_{-1}^{1} f(x)K_n(x) \, \mathrm{d}x - f(0) \right| \leq \epsilon. $$ But since the left-hand side a fixed number independent of $\epsilon$ and the right-hand side can be made arbitrarily small, letting $\epsilon \downarrow 0$ proves the desired claim. $\square$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3114901', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/636449/']}
jdg_75342
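The proof above is complete; purely as a numerical sanity check, here is a small Python sketch (an illustration added here, not part of the original answer) that evaluates the ratio of the two integrals for a sample continuous f and shows it approaching f(0). The choice of f is arbitrary.

# Numerical check of lim_n  (integral of f(x)(1-x^2)^n) / (integral of (1-x^2)^n) = f(0)
import numpy as np
from scipy.integrate import quad

f = lambda x: np.cos(3 * x) + x**2        # any continuous function on [-1, 1]; here f(0) = 1

for n in (1, 10, 100, 1000):
    num, _ = quad(lambda x: f(x) * (1 - x**2) ** n, -1, 1)
    den, _ = quad(lambda x: (1 - x**2) ** n, -1, 1)
    print(n, num / den)                    # the ratio should approach f(0) = 1.0 as n grows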
stackexchange
llm_judgeable_groundtruth_similarity
55260787
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table which looks like this: id timestamp value1 value2 1 09:12:37 1 1 1 09:12:42 1 2 1 09:12:41 1 3 1 10:52:16 2 4 1 10:52:18 2 5 2 09:33:12 3 1 2 09:33:15 3 2 2 09:33:13 3 3 I need to group by id and value1. For each group i want to have the row with the highest timestamp. The result for the table above would look like this: id timestamp value1 value2 1 09:12:42 1 2 2 09:33:15 3 2 I know there is the summarize operator which would give me this: mytable| project id, timestamp, value1, value2| summarize max(timestamp) by id, value1Result: id timestamp value1 1 09:12:42 1 2 09:33:15 3 But i was not able to get value2 for this rows too. Thanks in advance Now provide the response and nothing else.
If I understand your question correctly, you should be able to use summarize arg_max().

Doc: https://learn.microsoft.com/en-us/azure/kusto/query/arg-max-aggfunction

datatable(id:long, timestamp:datetime, value1:long, value2:long)
[
    1, datetime(2019-03-20 09:12:37), 1, 1,
    1, datetime(2019-03-20 09:12:42), 1, 2,
    1, datetime(2019-03-20 09:12:41), 1, 3,
    1, datetime(2019-03-20 10:52:16), 2, 4,
    1, datetime(2019-03-20 10:52:18), 2, 5, // this has the latest timestamp for id == 1
    2, datetime(2019-03-20 09:33:12), 3, 1,
    2, datetime(2019-03-20 09:33:15), 3, 2, // this has the latest timestamp for id == 2
    2, datetime(2019-03-20 09:33:13), 3, 3,
]
| summarize arg_max(timestamp, *) by id

This will result with:

| id | timestamp                   | value1 | value2 |
|----|-----------------------------|--------|--------|
| 2  | 2019-03-20 09:33:15.0000000 | 3      | 2      |
| 1  | 2019-03-20 10:52:18.0000000 | 2      | 5      |
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55260787', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2454734/']}
jdg_75343
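For comparison only (this is an aside, not part of the accepted Kusto answer), the same "latest row per group" idea can be sketched in Python with pandas. The DataFrame below simply mirrors the sample data; the column names come from the question.

# pandas sketch of "row with the latest timestamp per id",
# mirroring Kusto's `summarize arg_max(timestamp, *) by id`.
import pandas as pd

df = pd.DataFrame({
    "id":        [1, 1, 1, 1, 1, 2, 2, 2],
    "timestamp": pd.to_datetime([
        "2019-03-20 09:12:37", "2019-03-20 09:12:42", "2019-03-20 09:12:41",
        "2019-03-20 10:52:16", "2019-03-20 10:52:18",
        "2019-03-20 09:33:12", "2019-03-20 09:33:15", "2019-03-20 09:33:13",
    ]),
    "value1": [1, 1, 1, 2, 2, 3, 3, 3],
    "value2": [1, 2, 3, 4, 5, 1, 2, 3],
})

# idxmax gives the index of the max timestamp within each id group;
# .loc then pulls the full rows, keeping value1 and value2 intact.
latest = df.loc[df.groupby("id")["timestamp"].idxmax()]
print(latest)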
stackexchange
llm_judgeable_groundtruth_similarity
8323599
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:orientation="vertical" android:background="@drawable/settings_border" android:padding="1dp" android:layout_marginLeft="15dip" android:layout_marginRight="15dip" android:layout_marginTop="5dip"> <RelativeLayout android:id="@+id/map_refresh" android:layout_height="fill_parent" android:layout_width="wrap_content" android:background="@drawable/settings_selector_up" android:padding="15dp"> <TextView android:id="@+id/Text1" android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="Map refresh period"> </TextView> <TextView android:id="@+id/TextView09" android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="1 min" android:layout_alignParentRight="true" android:paddingRight="5dp"> </TextView> </RelativeLayout> <RelativeLayout android:id="@+id/own_location" android:layout_height="fill_parent" android:layout_width="wrap_content" android:padding="15dp" android:background="@drawable/settings_selector_mid"> <TextView android:id="@+id/Text1" android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="Own location update period"> </TextView> <TextView android:id="@+id/TextView09" android:layout_height="wrap_content" android:layout_width="wrap_content" android:text="1 min" android:layout_alignParentRight="true" android:paddingRight="5dp"> </TextView> </RelativeLayout> I want set only bottom border in relativelayout. I want to dispaly listview style but not using listview. Lets say each listitem is relativlayout. I want set only bottom border so its look like a listview's divider. Now provide the response and nothing else.
I hope I understood what you said. in the res folder create a new folder (if you don't already have it) named drawable there create an xml named "borders.xml" <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_pressed="true"><shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle"> <gradient android:angle="90" android:centerColor="#6da23f" android:endColor="#8bc45d" android:startColor="#2f481c" /> <stroke android:width="2dp" android:color="#999999" /> <padding android:bottom="4dp" android:left="3dp" android:right="3dp" android:top="6dp" /> <corners android:radius="10px"/> </shape> </item> <item> <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle"> <gradient android:angle="90" android:centerColor="#6da23f" android:endColor="#8bc45d" android:startColor="#4c8a39" /> <stroke android:width="1dp" android:color="#FFFFFF" /> <padding android:bottom="4dp" android:left="3dp" android:right="3dp" android:top="6dp" /> <corners android:radius="10px"/> </shape> </item></selector> You can further edit it as you like.Then select the layout from the Outline and click Background properties, and select the borders xml that you created. This will create borders for all 4. Alternatively, you can add a simple <View android:layout_width="1dip" android:layout_height="fill_parent" android:background="#FFFFFF" /> line and add it to the bottom of your layout and change the color/size to your liking.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8323599', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1059954/']}
jdg_75344
stackexchange
llm_judgeable_groundtruth_similarity
61017
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I keep on saving various org files in their respective folders using Filename as YYYY-MM-DD.org files . So I ended up typing C-x C-w 2020-10-06.org at least thirty times today. Can this process be automated ? Now provide the response and nothing else.
I have the following in my .emacs:

;; date
(global-set-key "\C-\M-d"
  (lambda (arg)
    (interactive "P")
    (let ((separator (pcase arg
                       ('(4) "-")
                       (0 "/")
                       (_ ""))))
      (insert (format-time-string
               (concat "%Y" separator "%m" separator "%d"))))))

If I press \C-\M-d, it inserts 20201006 into the active buffer, mini or not. If I prefix it with C-u, it inserts 2020-10-06; prefixing it with M-0 inserts 2020/10/06.
{}
{'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/61017', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/14627/']}
jdg_75345
stackexchange
llm_judgeable_groundtruth_similarity
18351198
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can anyone explain what are the uses of the exec command in shell scripting with simple examples? Now provide the response and nothing else.
The exec built-in command mirrors functions in the kernel. There are a family of them based on execve, which is usually called from C. exec replaces the current program in the current process, without forking a new process. It is not something you would use in every script you write, but it comes in handy on occasion. Here are some scenarios where I have used it:

1. We want the user to run a specific application program without access to the shell. We could change the sign-in program in /etc/passwd, but maybe we want environment settings to be used from start-up files. So, in (say) .profile, the last statement says something like:

   exec appln-program

   so now there is no shell to go back to. Even if appln-program crashes, the end-user cannot get to a shell, because it is not there - the exec replaced it.

2. We want to use a different shell to the one in /etc/passwd. Stupid as it may seem, some sites do not allow users to alter their sign-in shell. One site I know had everyone start with csh, and everyone just put into their .login (csh start-up file) a call to ksh. While that worked, it left a stray csh process running, and the logout was two stage, which could get confusing. So we changed it to exec ksh, which just replaced the c-shell program with the korn shell, and made everything simpler (there are other issues with this, such as the fact that the ksh is not a login-shell).

3. Just to save processes. If we call prog1 -> prog2 -> prog3 -> prog4 etc. and never go back, then make each call an exec. It saves resources (not much, admittedly, unless repeated) and makes shutdown simpler.

You have obviously seen exec used somewhere; perhaps if you showed the code that's bugging you we could justify its use.

Edit: I realised that my answer above is incomplete. There are two uses of exec in shells like ksh and bash - the second is for opening file descriptors. Here are some examples:

exec 3< thisfile    # open "thisfile" for reading on file descriptor 3
exec 4> thatfile    # open "thatfile" for writing on file descriptor 4
exec 8<> tother     # open "tother" for reading and writing on fd 8
exec 6>> other      # open "other" for appending on file descriptor 6
exec 5<&0           # copy read file descriptor 0 onto file descriptor 5
exec 7>&4           # copy write file descriptor 4 onto 7
exec 3<&-           # close the read file descriptor 3
exec 6>&-           # close the write file descriptor 6

Note that spacing is very important here. If you place a space between the fd number and the redirection symbol, then exec reverts to the original meaning:

exec 3 < thisfile   # oops, overwrite the current program with command "3"

There are several ways you can use these. On ksh, use read -u or print -u; on bash, for example:

read <&3
echo stuff >&4
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/18351198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2400564/']}
jdg_75346
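The "replace the current process" behaviour described above also exists outside the shell. As a small illustration (an analogy added here, not part of the original answer), Python exposes the same execve family, so a script can replace itself with another program just like exec in bash does.

# Illustration of the exec semantics described above, using Python's os.execvp,
# which wraps the same execve-family kernel calls: the current process image is
# replaced, so the final print statement is never reached.
import os
import sys

print("about to exec; this process will become 'ls'")
sys.stdout.flush()            # flush now, because nothing after execvp runs
os.execvp("ls", ["ls", "-l"]) # replaces this Python process with ls -l
print("never printed")        # unreachable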
stackexchange
llm_judgeable_groundtruth_similarity
293763
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a issue: I need to change the permission of the symlink from 777 to 755 and I do not have any idea how should I do it. I have tried using the chmod command but it's not working.I want lrwxrwxrwx 1 frosu 2016_cluj 5 Jul 4 13:53 test6 -> test0 to lrwxr-xr-x 1 frosu 2016_cluj 5 Jul 4 13:53 test6 -> test0 Now provide the response and nothing else.
Some systems support changing the permission of a symbolic link, others do not. chmod -- change file modes or Access Control Lists (OSX and FreeBSD , using -h ) -h If the file is a symbolic link, change the mode of the link itself rather than the file that the link points to. chmod - change file mode bits (Linux) chmod never changes the permissions of symbolic links; the chmod system call cannot change their permissions. This is not a problem since the permissions of symbolic links are never used. However, for each symbolic link listed on the command line, chmod changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered during recursive directory traversals. Since the feature differs, POSIX does not mention the possibility. From comments, someone suggests that a recent change to GNU coreutils provides the -h option. At the moment, that does not appear in the source-code for chmod : while ((c = getopt_long (argc, argv, ("Rcfvr::w::x::X::s::t::u::g::o::a::,::+::=::" "0::1::2::3::4::5::6::7::"), long_options, NULL)) and long_options has this: static struct option const long_options[] ={ {"changes", no_argument, NULL, 'c'}, {"recursive", no_argument, NULL, 'R'}, {"no-preserve-root", no_argument, NULL, NO_PRESERVE_ROOT}, {"preserve-root", no_argument, NULL, PRESERVE_ROOT}, {"quiet", no_argument, NULL, 'f'}, {"reference", required_argument, NULL, REFERENCE_FILE_OPTION}, {"silent", no_argument, NULL, 'f'}, {"verbose", no_argument, NULL, 'v'}, {GETOPT_HELP_OPTION_DECL}, {GETOPT_VERSION_OPTION_DECL}, {NULL, 0, NULL, 0}}; Permissions are set with chmod . Ownership is set with chown . GNU coreutils (like BSD) supports the ability to change a symbolic link's ownership. This is a different feature, since the ownership of a symbolic link is related to whether one can modify the contents of the link (and point it to a different target). Again, this started as a BSD feature (OSX, FreeBSD , etc), which is also supported with Linux (and Solaris , etc). POSIX says of this feature : -h For each file operand that names a file of type symbolic link, chown shall attempt to set the user ID of the symbolic link. If a group ID was specified, for each file operand that names a file of type symbolic link, chown shall attempt to set the group ID of the symbolic link. So much for the command-line tools (and shell scripts). However, you could write your own utility, using a feature of POSIX which is not mentioned in the discussion of the chmod utility: int chmod(const char *path, mode_t mode); int fchmodat(int fd, const char *path, mode_t mode, int flag); The latter function adds a flag parameter, which is described thus: Values for flag are constructed by a bitwise-inclusive OR of flags from the following list, defined in <fcntl.h> : AT_SYMLINK_NOFOLLOW If path names a symbolic link, then the mode of the symbolic link is changed. That is, the purpose of fchmodat is to provide the feature you asked about. But the command-line chmod utility is documented (so far) only in terms of chmod (without this feature). 
fchmodat , by the way, appears to have started as a poorly-documented feature of Solaris which was adopted by the Red Hat and GNU developers ten years ago, and suggested by them for standardization: one more openat-style function required: fchmodat Austin Group Minutes of the 17 May 2007 Teleconference [Fwd: The Austin Group announces Revision Draft 2 now available] According to The Linux Programming Interface , since 2.6.16, Linux supports AT_SYMLINK_NOFOLLOW in these calls: faccessat , fchownat , fstatat , utimensat , and linkat was implemented in 2.6.18 (both rather "old": 2006, according to OSNews ). Whether the feature is useful to you, or not, depends on the systems that you are using.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/293763', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/178062/']}
jdg_75347
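As a side note (not from the answer above), the fchmodat/AT_SYMLINK_NOFOLLOW behaviour it describes is exposed in Python as the follow_symlinks argument of os.chmod, so a short sketch can show whether a given platform supports changing a symlink's own mode. The link name here is an assumption of this sketch and must not already exist.

# Sketch: try to change the mode of the link itself rather than its target.
# os.chmod(..., follow_symlinks=False) maps to fchmodat with AT_SYMLINK_NOFOLLOW;
# on platforms that do not support it (typical Linux), it raises NotImplementedError.
import os

os.symlink("target-file", "test-link")          # a dangling link is fine for this demo
try:
    os.chmod("test-link", 0o755, follow_symlinks=False)
    print("this platform can chmod the symlink itself")
except NotImplementedError:
    print("symlink permissions cannot be changed on this platform")
finally:
    os.unlink("test-link")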
stackexchange
llm_judgeable_groundtruth_similarity
4221862
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I believe it's illegitimate otherwise there will be a contradiction (an object is neither inside nor outside $A$ ). But I don't know which principle prohibits this(sorry about being ignorant in math), maybe because we can't self-reffering when defining a set. Can anyone helps? Thanks. Now provide the response and nothing else.
The axiom schema of separation states that for all formula $\psi$ such that $y$ does not appear free in $\psi$ , the universal closure of $$\exists y \forall x (x\in y \leftrightarrow (x\in z \land \psi))$$ holds in every model. You're trying to form $$\exists A \forall x (x\in A\leftrightarrow (x\in\mathbb{N}\land x\not\in A))$$ this violates the restriction that $A$ may not appear free in $x\not\in A$ . There's a similar restriction with the axiom schema of replacement so it can't be done via that route either.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4221862', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/855462/']}
jdg_75348
stackexchange
llm_judgeable_groundtruth_similarity
17470691
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I use bash autocompletion in Git, it keeps showing me branches of old remotes that I don't have anymore. When I do a git branch -la it shows those old remotes and branches while a git branch -l won't. A ls .git/refs/remotes/ also shows them. However, they are not present in my .git/config and neither are they shown when I run git remote show . So how do I get rid of them because my autocomplete list is too long right now. I have already tried: git reflog expire --expire=now --allgit gc --prune=nowrm .git/refs/remotes/theoldremotegit remote prune theoldremote I'm also aware of the fact that I can just re-clone the repo but that's just cheating ;-) Now provide the response and nothing else.
Git does not delete the (local) remote-tracking branches automatically if the branch was deleted in the remote repository. Additionally, before v2.0.1 remote-tracking branches were in some cases not deleted when you removed the remote from your git config (see VonC's answer).

To delete stale remote-tracking branches (branches that were deleted in the remote repository) for one of your remote repositories, run

git remote prune <remote>

To cite the man page of git remote:

prune
    Deletes all stale tracking branches under <name>. These stale branches have already been removed from the remote repository referenced by <name>, but are still locally available in "remotes/<name>". With --dry-run option, report what branches will be pruned, but do not actually prune them.

However, from your question it seems you manually removed .git/refs/remotes/theoldremote, so Git no longer knows about the remote repository that the remote-tracking branches belonged to. That's not how you're supposed to do it. The normal way to remove a remote repository is to run

git remote rm <remote>

This will remove the remote from your .git/config, and will delete the remote-tracking branches. If you just delete the directory under .git/refs/remotes/, the branches will remain behind. Then you will need to remove them manually:

git branch -rd <remote>/<branchname>

You need option -r to delete a remote branch.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/17470691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/679913/']}
jdg_75349
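If this cleanup has to be repeated across many remotes, it can be scripted. The helper below is a hypothetical illustration (the function name and structure are mine, not from the answer); it simply shells out to the same git commands the answer recommends.

# Hypothetical helper: prune stale remote-tracking branches for every configured
# remote by calling the same commands the answer uses (git remote / git remote prune).
import subprocess

def prune_all_remotes(repo_path="."):
    remotes = subprocess.run(
        ["git", "remote"], cwd=repo_path,
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for remote in remotes:
        # equivalent to running `git remote prune <remote>` by hand
        subprocess.run(["git", "remote", "prune", remote], cwd=repo_path, check=True)

prune_all_remotes()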
stackexchange
llm_judgeable_groundtruth_similarity
2023239
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have previously had a repository on my computer for local use and removed it. Now I am trying to set another one up. But keep getting "Authorization failed" even when entering a correct password, when I enter a wrong password it tells me so. This is exactly how I set it up the first time but now every time it fails. What am I doing wrong? It is the only repository on my computer. I have already tried a reinstall of subversion and removing the cache in my AppData folder but nothing has helped. I am using this guide to set it up. https://blog.codinghorror.com/setting-up-subversion-on-windows/ This is what I am doing C:\>svnadmin create "H:\SVN\Repository"C:\>sc create svnserver binpath= "C:\Program Files (x86)\Subversion\svnserve.exe --service -r H:\SVN\Repository" displayname= "SubVersion" depend= Tcpip start=auto[SC] CreateService SUCCESSC:\>set SVN_EDITOR=c:\windows\system32\notepad.exeC:\>net start svnserverThe SubVersion service is starting.The SubVersion service was started successfully.C:\>svn mkdir svn://localhost/myProjectLog message unchanged or not specified(a)bort, (c)ontinue, (e)dit:cAuthentication realm: <svn://localhost:3690> myProjectcPassword for 'Admin':Authentication realm: <svn://localhost:3690> myProjectcUsername: userPassword for 'user': ********svn: Authorization failedC:\> My svnserve.conf file [general]anon-access = readauth-access = writepassword-db = passwdauthz-db = authzrealm = myProject And my passwd file [users]user = password Now provide the response and nothing else.
The error message says "Authorization failed", not "Authentication failed". Which means you successfully authenticated (i.e., your username and password is ok), but the user as whom you authenticated doesn't have the rights to execute the command (i.e., you're not authorized to create the directory). That either means that you're not connecting to the correct svnserve instance (you said you already have one set up and this is the second one you're trying to set up), or the svnservice doesn't use the correct svnserve.conf file, or the 'authz' file is not the correct one (maybe specify a full path to the auth files in your svnserve.conf file).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2023239', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/213519/']}
jdg_75350
stackexchange
llm_judgeable_groundtruth_similarity
10062420
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Inside a bash script function, I need to work with the command-line arguments of the script, and also with another list of arguments. So I'm trying to pass two argument lists to a function, the problem is that multi-word arguments get split. function params(){ for PARAM in $1; do echo "$PARAM" done echo . for ITEM in $2; do echo "$ITEM" done}PARAMS="$@"ITEMS="x y 'z t'"params "$PARAMS" "$ITEMS" calling the script gives me myscript.sh a b 'c d'abcd.xy'zt' Since there are two lists they must be passed as a whole to the function, the question is, how to iterate the elements while respecting multi-word items enclosed in single quotes 'c d' and 'z t'? The workaround that I have (see below) makes use of BASH_ARGV so I need to pass just a single list to the function. However I would like to get a better understanding of what's going on and what's needed to make the above work. function params(){ for PARAM in "${BASH_ARGV[@]}"; do echo "$PARAM" done echo . for ITEM in "$@"; do echo "$ITEM" done}params x y 'z t' calling the script gives me myscript.sh a b 'c d'c dba.xyz t ... Which is how I need it (except that first list is reversed, but that would be tolerable I guess) Now provide the response and nothing else.
function params()
{
    arg=("$@")
    for ((i=1;i<=$1;i++)) ;do
        echo "${arg[i]}"
    done
    echo .
    for ((;i<=$#-$1+2;i++)) ;do
        echo "${arg[i]}"
    done
}

items=(w x 'z t')
params $# "$@" "${items[@]}"

Assuming you call your script with args a b 'c d', the output is:

a
b
c d
.
x
y
z t
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10062420', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1088790/']}
jdg_75351
stackexchange
llm_judgeable_groundtruth_similarity
24577
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $\{ f_n \}$ be a sequence in a Hilbert space $L^2(\mathbb{R}^d)$. We say that this sequence converges weakly to an element $f \in L^2$ if $\langle f_n, g \rangle \to \langle f,g \rangle$ for every $g \in L^2$ (where $\langle \cdot,\cdot \rangle$ denotes the inner product on $L^2$). By definition, we are given that the weak limit $f$ is in $L^2$. However, suppose we know that a sequence "formally" converges weakly to a limit $f$ (i.e. $\langle f_n, g \rangle \to \langle f,g \rangle$ for every $g \in L^2$ for some $f$ which we don't necessarily know yet to be in $L^2$) . Does this, purely by the characteristics of weak convergence, directly imply that $f \in L^2$? I think you could also generalize this question to any Hilbert space, provided that taking the inner product of an element possibly not in the Hilbert space makes sense. Now provide the response and nothing else.
Let me elaborate on user3148's answer and comment. There are two facts: A weak Cauchy sequence $(f_{n})$ is bounded. Every bounded sequence has a weakly convergent subsequence. Combining these two facts it is easy to see that every weak Cauchy sequence converges. Recall that a weak Cauchy sequence is a sequence $(f_{n})$ such that $\langle f_{n}, g\rangle$ is Cauchy in $\mathbb{R}$ for all $g$. The condition you impose on the sequence $(f_{n})$ means in particular that it is a weak Cauchy sequence, so it necessarily converges to some $f \in L^2$. Proof of 1. This follows immediately from the Banach-Steinhaus theorem applied to the operators $\langle f_{n}, \cdot \rangle: X^{\ast} \to \mathbb{R}$, see Sokal's recent paper for a neat proof of that theorem (without Baire!). Proof of 2. This is immediate from the version of the Banach-Alaoğlu theorem saying that the unit ball in a separable reflexive space is compact metrizable in the weak topology (= weak$^{\ast}$-topology by reflexivity).
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/24577', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1736/']}
jdg_75352
stackexchange
llm_judgeable_groundtruth_similarity
39380923
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: According to the Spark RDD docs : All transformations in Spark are lazy, in that they do not compute their results right away...This design enables Spark to run more efficiently. There are times when I need to do certain operations on my dataframes right then and now . But because dataframe ops are " lazily evaluated " (per above), when I write these operations in the code, there's very little guarantee that Spark will actually execute those operations inline with the rest of the code. For example: val someDataFrame : DataFrame = getSomehow()val someOtherDataFrame : DataFrame = getSomehowAlso()// Do some stuff with 'someDataFrame' and 'someOtherDataFrame'// Now we need to do a union RIGHT HERE AND NOW, because// the next few lines of code require the union to have// already taken place!val unionDataFrame : DataFrame = someDataFrame.unionAll(someOtherDataFrame)// Now do some stuff with 'unionDataFrame'... So my workaround for this (so far) has been to run .show() or .count() immediately following my time-sensitive dataframe op, like so: val someDataFrame : DataFrame = getSomehow()val someOtherDataFrame : DataFrame = getSomehowAlso()// Do some stuff with 'someDataFrame' and 'someOtherDataFrame'val unionDataFrame : DataFrame = someDataFrame.unionAll(someOtherDataFrame)unionDataFrame.count() // Forces the union to execute/compute// Now do some stuff with 'unionDataFrame'... ...which forces Spark to execute the dataframe op right then in there, inline. This feels awfully hacky/kludgy to me. So I ask: is there a more generally-accepted and/or efficient way to force dataframe ops to happen on-demand (and not be lazily evaluated)? Now provide the response and nothing else.
No. You have to call an action to force Spark to do actual work. Transformations won't trigger that effect, and that's one of the reasons to love Spark. By the way, I am pretty sure that Spark knows very well when something must be done "right here and now", so probably you are focusing on the wrong point.

Can you just confirm that count() and show() are considered "actions"?

You can see some of the action functions of Spark in the documentation, where count() is listed. show() is not, and I haven't used it before, but it feels like it is an action: how can you show the result without doing actual work? :)

Are you insinuating that Spark would automatically pick up on that, and do the union (just in time)?

Yes! :) Spark remembers the transformations you have called, and when an action appears, it will do them at just the right time!

Something to remember: because of this policy of doing actual work only when an action appears, you will not see a logical error you have in your transformation(s) until the action takes place!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39380923', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4009451/']}
jdg_75353
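A minimal PySpark sketch of the distinction the answer draws (assuming a local Spark session; the DataFrames and names here are illustrative only): the union is merely recorded until an action such as count() forces evaluation.

# Minimal PySpark illustration of lazy transformations vs. actions.
# union only records the operation; count() is the action that runs it.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("lazy-demo").getOrCreate()

df_a = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
df_b = spark.createDataFrame([(3, "c"), (4, "d")], ["id", "val"])

combined = df_a.union(df_b)      # transformation: nothing is computed yet
print("union recorded, not executed")

n = combined.count()             # action: Spark now actually runs the plan
print("rows after union:", n)    # -> 4

spark.stop()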
stackexchange
llm_judgeable_groundtruth_similarity
14546253
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm curious about the following. I have a simple C array declared in a header file like this: static int userCardsIndexes[INITIAL_CARDS_NUMBER] = {0, 1, 8, 9, 16, 17}; it gives me a bunch of the warnings: : 'userCardsIndexes' defined but not used despite i include this file into my cpp files and use this variable. The second thing that i don't understand about it is when i add const specifier like this: static const int userCardsIndexes[INITIAL_CARDS_NUMBER] = {0, 1, 8, 9, 16, 17}; the warnings disappear! Can anyone give me an explanation why i get these warnings and why const removes them? Now provide the response and nothing else.
The short answer is: you're defining an array in a header, not just declaring it. This is not good. If you need the array accessible whenever you include the header, there should be a declaration in the header as such:

extern int userCardsIndexes[INITIAL_CARDS_NUMBER];

And then, in only one source file, define the array as such:

int userCardsIndexes[INITIAL_CARDS_NUMBER] = {0, 1, 8, 9, 16, 17};

As to the long answer: there's nothing "magical" about a header file; the #include directive just basically copies the entire contents of the header file into your source file. So essentially, what you're getting is a new static array userCardsIndexes defined in every source file; if this array isn't used, you get the "unused variable" warning.

Prepending the const is likely suppressing the warning just because the compiler isn't configured to warn on const unused variables. For example, using GCC, look at the documentation for "-Wunused-variable": http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14546253', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/994107/']}
jdg_75354
stackexchange
llm_judgeable_groundtruth_similarity
38751327
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using Angular 2 RC 4 with the updated @angular/router, I got the route URLs to display in the browser using the answer in this question However, when I refresh or directly request a page with a route other than the default, it takes me to the default page (as if I requested index.html) rather than the routed page I want. How can I make the Angular 2 routes work correctly on a page refresh with Apache 2.4? Routes and bootstrap: const routes: RouterConfig = [{ path: '', redirectTo: 'dashboardTab', terminal: true },{ path: 'dashboardTab', component: DashboardComponent },{ path: 'missionTab', component: MissionComponent, children: childRoutes }];bootstrap(NavbarComponent, [disableDeprecatedForms(), provideForms(), provideRouter(routes), {provide: Window, useValue: window}, HTTP_PROVIDERS, ResponsiveState, {provide: PLATFORM_DIRECTIVES, useValue: RESPONSIVE_DIRECTIVES, multi: true}]).catch((err: any) => console.error(err)); base href in index.html: <base href="/"> Now provide the response and nothing else.
Basically Apache does not know anything about your Angular application routes, so it can't do much here. The trick is to get your Apache server to serve the index.html file even in page-not-found situations, so that Angular can then render the routes. Here are the steps:

1. Create a file called .htaccess inside your Angular application's src folder, and copy the following code to it:

   RewriteEngine On
   RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f [OR]
   RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
   RewriteRule ^ - [L]
   RewriteRule ^ /index.html

2. You need to add this .htaccess file to the assets array in your angular.json so that Angular will copy it to your dist folder when you do a production build.

3. Finally, after creating a production build, if you copy your application to the Apache server everything should work fine; but in case it does not, you may want to do the last step: go to /etc/apache2/apache2.conf inside your server and modify

   <Directory /var/www/>
       Options Indexes FollowSymLinks
       AllowOverride None
       Require all granted
   </Directory>

   to look like this:

   <Directory /var/www/>
       Options Indexes FollowSymLinks
       AllowOverride All
       Require all granted
   </Directory>

If that does not work, you probably have not enabled mod rewrite:

sudo a2enmod rewrite

See the deployment guide from the Angular team: https://angular.io/guide/deployment
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38751327', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4421535/']}
jdg_75355
stackexchange
llm_judgeable_groundtruth_similarity
52088
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I understand that: Chimpanzees are the closest species to humans genetically . Only 1%-6% of their genes are different. Within any species there is genetic diversity , i.e. no two individuals have the same exact DNA sequence. This variability applies to humans and chimps . Thus, there exists a pair consisting of a human and a chimp that will have the smallest number of different (edit: genes ) DNA base-pairs within the two populations. One can say that the pair forms an "inter-species genetic gap". Question: What is the smallest estimated inter-species genetic gap between humans and chimpanzees? Edit: I changed the last point to base-pairs instead of genes. Most of the comments seem to suggest the population genetic variabilities are much, much smaller than the genetic distance between the populations. Visually, that looks something like this: Is this a fairly accurate picture of the human-chimp genetic distance? Now provide the response and nothing else.
You may be interested in this 2005 Nature paper from the Chimpanzee Sequencing and Analysis Consortium: Initial sequence of the chimpanzee genome and comparison with the human genome . It breaks down the most common categories of genetic variation: Single-nucleotide substitutions occur at a mean rate of 1.23% between copies of the human and chimpanzee genome, with 1.06% or less corresponding to fixed divergence between the species. So if we accept their analysis, ~1% (it says "or less", but elsewhere in the paper they estimate a lower bound of ~0.96%) of the genome difference consists of fixed single-nucleotide differences between species. Small insertions and deletions: On the basis of this analysis, we estimate that the human and chimpanzee genomes each contain 40–45 Mb of species-specific euchromatic sequence, and the indel differences between the genomes thus total ~90 Mb. This difference corresponds to ~3% of both genomes and dwarfs the 1.23% difference resulting from nucleotide substitutions; this confirms and extends several recent studies. So another ~3% comes from small insertions and deletions, which gives around a ~4% fixed divergence between populations . This rough estimate of the fixed genetic distance between humans and chimpanzees is probably a reasonable estimate of the distance between the nearest human-chimpanzee pair. If you also want to know how far apart the two furthest humans are, as @Remi.b mentioned, the populations we've sequenced don't span the whole diversity of our species, but it is likely much smaller. You could look at some of the HapMap or Human Genome Diversity Project papers. For reference, when Watson's genome was sequenced, they reported around 0.1% of the sequence to be different from the reference genome, but that isn't a particularly diverse comparison. You may also be interested in this paper comparing the human, chimpanzee, and bonobo genomes: http://www.nature.com/nature/journal/v486/n7404/full/nature11128.html .
{}
{'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/52088', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/5355/']}
jdg_75356
stackexchange
llm_judgeable_groundtruth_similarity
2922701
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f\in C^2(\Bbb R)$ and $f''(x) \geq 0 $ for all $x\in \Bbb R$. How do I prove that $$f(x) \geq f(x_{0}) + f'(x_{0}) (x-x_{0})$$ for all $x, x_{0}$ $\in \Bbb R$ ? This was in our exam and I didn't know how to deal with it. Now provide the response and nothing else.
According to Taylor you have $$f(x) = f(x_0) + f'(x_0)(x-x_0) + \frac{f''(\xi)}{2}(x-x_0)^2$$ $$ \stackrel{f''(\xi)\geq 0}{\geq} f(x_0) + f'(x_0)(x-x_0)$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2922701', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/492496/']}
jdg_75357
stackexchange
llm_judgeable_groundtruth_similarity
2853325
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my layout I have <% @current_user.popups.each do |p| %> <% content_for :script do %> <%= "$(document).ready ( function() { $.jGrowl(\"#{p.message}\", { sticky: true }) });" %> <% end %><% end %> And then in the script section I have <%= yield :script %> The problem is that this renders the escaped quotes as \&quot; and javascript doesn't like this. How can I stop this from happening? Or is there another approach to this? I can't use single quotes because I'd like to have some html in the message. I'd appreciate any help. Now provide the response and nothing else.
Are you using Rails 3? In Rails 3 HTML escaping is done by default and you must bypass it each time, like the following. jdl had the right idea above, but it sounds like you still have issues.

<% content_for :script do %>
  $(document).ready ( function() { $.jGrowl("<%= raw(p.message) %>", { sticky: true }) });
<% end %>

But even better would be to run it through the helper provided by Rails, escape_javascript, to escape carriage returns and quotes like this:

$(document).ready ( function() { $.jGrowl("<%= raw(escape_javascript(p.message)) %>", { sticky: true }) });
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2853325', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/224428/']}
jdg_75358
stackexchange
llm_judgeable_groundtruth_similarity
2562249
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a rails application that has multiple models with paperclip attachments that are all uploaded to S3. This app also has a large test suite that is run quite often. The downside with this is that a ton of files are uploaded to our S3 account on every test run, making the test suite run slowly. It also slows down development a bit, and requires you to have an internet connection in order to work on the code. Is there a reasonable way to set the paperclip storage mechanism based on the Rails environment? Ideally, our test and development environments would use the local filesystem storage, and the production environment would use S3 storage. I'd also like to extract this logic into a shared module of some kind, since we have several models that will need this behavior. I'd like to avoid a solution like this inside of every model: ### We don't want to do this in our models...if Rails.env.production? has_attached_file :image, :styles => {...}, :path => "images/:uuid_partition/:uuid/:style.:extension", :storage => :s3, :url => ':s3_authenticated_url', # generates an expiring url :s3_credentials => File.join(Rails.root, 'config', 's3.yml'), :s3_permissions => 'private', :s3_protocol => 'https'else has_attached_file :image, :styles => {...}, :storage => :filesystem # Default :path and :url should be used for dev/test envs.end Update: The sticky part is that the attachment's :path and :url options need to differ depending on which storage system is being used. Any advice or suggestions would be greatly appreciated! :-) Now provide the response and nothing else.
After playing around with it for a while, I came up with a module that does what I want. Inside app/models/shared/attachment_helper.rb : module Shared module AttachmentHelper def self.included(base) base.extend ClassMethods end module ClassMethods def has_attachment(name, options = {}) # generates a string containing the singular model name and the pluralized attachment name. # Examples: "user_avatars" or "asset_uploads" or "message_previews" attachment_owner = self.table_name.singularize attachment_folder = "#{attachment_owner}_#{name.to_s.pluralize}" # we want to create a path for the upload that looks like: # message_previews/00/11/22/001122deadbeef/thumbnail.png attachment_path = "#{attachment_folder}/:uuid_partition/:uuid/:style.:extension" if Rails.env.production? options[:path] ||= attachment_path options[:storage] ||= :s3 options[:url] ||= ':s3_authenticated_url' options[:s3_credentials] ||= File.join(Rails.root, 'config', 's3.yml') options[:s3_permissions] ||= 'private' options[:s3_protocol] ||= 'https' else # For local Dev/Test envs, use the default filesystem, but separate the environments # into different folders, so you can delete test files without breaking dev files. options[:path] ||= ":rails_root/public/system/attachments/#{Rails.env}/#{attachment_path}" options[:url] ||= "/system/attachments/#{Rails.env}/#{attachment_path}" end # pass things off to paperclip. has_attached_file name, options end end endend (Note: I'm using some custom paperclip interpolations above, like :uuid_partition , :uuid and :s3_authenticated_url . You'll need to modify things as needed for your particular application) Now, for every model that has paperclip attachments, you just have to include this shared module, and call the has_attachment method (instead of paperclip's has_attached_file ) An example model file: app/models/user.rb : class User < ActiveRecord::Base include Shared::AttachmentHelper has_attachment :avatar, :styles => { :thumbnail => "100x100>" }end With this in place, you'll have files saved to the following locations, depending on your environment: Development: RAILS_ROOT + public/attachments/development/user_avatars/aa/bb/cc/aabbccddeeff/thumbnail.jpg Test: RAILS_ROOT + public/attachments/test/user_avatars/aa/bb/cc/aabbccddeeff/thumbnail.jpg Production: https://s3.amazonaws.com/your-bucket-name/user_avatars/aa/bb/cc/aabbccddeeff/thumbnail.jpg This does exactly what I'm looking for, hopefully it'll prove useful to someone else too. :) -John
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2562249', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10571/']}
jdg_75359
stackexchange
llm_judgeable_groundtruth_similarity
42519557
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given I have this hash: h = { a: 'a', b: 'b', c: { d: 'd', e: 'e'} } And I convert to OpenStruct: o = OpenStruct.new(h) => #<OpenStruct a="a", b="b", c={:d=>"d", :e=>"e"}> o.a => "a" o.b => "b" o.c => {:d=>"d", :e=>"e"} 2.1.2 :006 > o.c.dNoMethodError: undefined method `d' for {:d=>"d", :e=>"e"}:Hash I want all the nested keys to be methods as well. So I can access d as such: o.c.d=> "d" How can I achieve this? Now provide the response and nothing else.
Personally I use the recursive-open-struct gem - it's then as simple as RecursiveOpenStruct.new(<nested_hash>).

But for the sake of recursion practice, I'll show you a fresh solution:

require 'ostruct'

def to_recursive_ostruct(hash)
  result = hash.each_with_object({}) do |(key, val), memo|
    memo[key] = val.is_a?(Hash) ? to_recursive_ostruct(val) : val
  end
  OpenStruct.new(result)
end

puts to_recursive_ostruct(a: { b: 1}).a.b
# => 1

edit: Weihang Jian showed a slight improvement to this here https://stackoverflow.com/a/69311716/2981429

def to_recursive_ostruct(hash)
  hash.each_with_object(OpenStruct.new) do |(key, val), memo|
    memo[key] = val.is_a?(Hash) ? to_recursive_ostruct(val) : val
  end
end

Also see https://stackoverflow.com/a/63264908/2981429 which shows how to handle arrays.

Note: the reason this is better than the JSON-based solutions is that you can lose some data when you convert to JSON. For example, if you convert a Time object to JSON and then parse it, it will be a string. There are many other examples of this:

class Foo; end
JSON.parse({obj: Foo.new}.to_json)["obj"]
# => "#<Foo:0x00007fc8720198b0>"

yeah ... not super useful. You've completely lost your reference to the actual instance.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42519557', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4501354/']}
jdg_75360
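For comparison only (this is an aside, not part of the Ruby answer), the same recursion pattern can be written in Python with types.SimpleNamespace, which plays a role similar to OpenStruct.

# Same recursive idea as the Ruby to_recursive_ostruct, but in Python:
# wrap nested dicts in SimpleNamespace objects so keys become attributes.
from types import SimpleNamespace

def to_namespace(value):
    if isinstance(value, dict):
        return SimpleNamespace(**{k: to_namespace(v) for k, v in value.items()})
    if isinstance(value, list):
        return [to_namespace(v) for v in value]   # handle arrays, as the linked answer does
    return value

h = {"a": "a", "b": "b", "c": {"d": "d", "e": "e"}}
o = to_namespace(h)
print(o.c.d)   # -> d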
stackexchange
llm_judgeable_groundtruth_similarity
18882760
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am calling into a statically linked .dll, and I see this error: I wrote both the .dll and the calling code. This error should not be occurring. I am wondering if anyone else has encountered it before? The .dll only contains about 10 lines of code, its just a test .dll to see how dlls work in general. It blows up when I pass a std::string back out of the .dll. I am using Visual Studio 2012 and C++. What I will try next From Debug assertion... _pFirstBlock == pHead : This problem can occur if one uses the single-threading libraries in a multithreaded module. Tomorrow, I'll try recompiling the Boost static libraries in multi-threaded mode (my .dll is set to multi-threaded static mode). What I will try next See Using strings in an object exported from a DLL causes runtime error : You need to do one of two things Make both the DLL and the client that use it both link to the DLL version of the CRT (e.g. not statically). OR You need to make sure you don't pass dynamically allocated memory (such as is contained in string objects) across DLL boundaries. In other words, don't have DLL-exported functions that return string objects. Joe This seems to match whats going on, it blows up at the precise point where I pass a string back across a .dll boundary. The problem only occurs when everything is linked in static mode. Now that's fixable. See Passing reference to STL vector over dll boundary . What I will try next See Unable to pass std::wstring across DLL . Solution I have a nice solution, see the answer below. Now provide the response and nothing else.
In this case, the problem is that I was passing a std::string back across a .dll boundary.

Runtime Library config

If the MSVC Runtime library is set to Multi-threaded Debug DLL (/MDd), then this is no problem (it works fine). If the MSVC Runtime library is set to Multi-threaded Debug (/MTd), then it will throw this error, which can be fixed with the following instructions.

Memory allocated in Memory Manager A and freed in Memory Manager B ...

The problem is that memory is allocated on the .dll side, then that same memory is freed on the application side. This means that memory manager A is allocating memory, and memory manager B is releasing that same memory, which generates errors. The solution is to make sure that all memory passed back is not allocated in the DLL. In other words, the memory is always allocated on the application side, and freed on the application side. Of course, the DLL can allocate/free memory internally - but it can't allocate memory that is later freed by the application.

Examples

This will not work:

// Memory is allocated on the .dll side, and freed on the app side, which throws error.
DLL std::string GetString();

This will work:

// Memory is allocated/freed on the application side, and never allocated in the .dll.
DLL int GetString(std::string& text);

However, this is not quite enough. On the application side, the string has to be pre-allocated:

std::string text("");
text.reserve(1024); // Reserves 1024 bytes in the string "text".

On the .dll side, the text must be copied into the original buffer (rather than overwritten with memory that is allocated on the .dll side):

text.assign("hello");

Sometimes, C++ will insist on allocating memory anyway. Double check that the pre-allocation is still the same as it was:

if (text.capacity() < 1024)
{
    cout << "Memory was allocated on the .dll side. This will eventually throw an error.";
}

Another way that works is to use std::shared_ptr<std::string>, so even though memory is allocated in the .dll, it is released by the .dll (rather than the application side). Yet another way is to accept a char * and a length which indicates the amount of pre-allocated memory. If the text that we want to pass back is longer than the length of pre-allocated memory, return an error.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18882760', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/107409/']}
jdg_75361
stackexchange
llm_judgeable_groundtruth_similarity
12512652
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a little trouble with a shorthanded if statement I can't figure out ($product == "vindo") ? $this->getNextVindoInList($id) : $this->getNextGandrupInList($id), This works fine but I want to have another check in that statement. Like this: if($product == "vindo") { if($number != 14) { $this->getNextVindoInList($id) }} else { if($number != 22) { $this->getNextGandrupInList($id) }} Now provide the response and nothing else.
For educational purposes, I will leave this answer intact. But it should be known that this is NOT RECOMMENDED. Nesting ternaries is a bad idea. It provides no performance benefit over explicit if-else statements and makes the code much more difficult to read. That said, see below for how it can, but should not, be done.

Two ways:

($product == "vindo" && $number != 14 ? $this->getNextVindoInList($id) : ($number != 22 ? $this->getNextGandrupInList($id) : ''))

// Equivalent of:
if ($product == "vindo" && $number != 14) $this->getNextVindoInList($id);
else if ($number != 22) $this->getNextGandrupInList($id);

// OR

// Equivalent of your example:
($product == "vindo" ? ($number != 14 ? $this->getNextVindoInList($id) : '') : ($number != 22 ? $this->getNextGandrupInList($id) : ''))
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12512652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/880276/']}
jdg_75362
stackexchange
llm_judgeable_groundtruth_similarity
43510
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When working on high voltage power-lines via helicopter the technician will "bond-on" with a steel wand called a hot-stick. During this process an arc is conducted from the line to the wand. Now, since the helicopter is floating both literally and electrically, why is there any potential between the line and the wand? My guess is that because of the alternating electric current in the power-line there is therefore an alternating expanding and collapsing magnetic field present in close proximity to the helicopter. This sets up a charge polarization in the frame of the helicopter 180 degrees out of phase with the current and therefore an electrical attraction occurs which produces an electric potential. If this is true, even though the electrons are not free to move in a bird (as they are in the metal helicopter frame) they are able to rotate and still set up a charge polarization (much like a charged balloon sticks to a wall) and therefore when a bird "bonds-on" to a wire don't you think they feel a little tingle at first? One other thing. Is it true that by virtue of the fact the the helicopter circuitry is an isolated one, they can bond-on to either line and that line will be the ground reference for the ground of the helicopter circuitry? Now provide the response and nothing else.
This question is somewhat related to yours. The phenomenon has nothing to do with magnetic fields. There is an intense electric field around any high-voltage wire (AC or DC), and the intensity is inversely proportional to the distance from the wire. At very close distances, the intensity exceeds the breakdown threshold for the air itself. The presence of the conductive helicopter (and the repairman with his metal mesh suit) distorts this field, concentrating it even further in the space between the wire and the helicopter. When the bonding strap gets close enough, an arc forms. The current in this arc is limited by the overall capacitance of the helicopter. Once the bonding connection has been made, the helicopter and the mesh suit are all at the same potential as the line itself. The electric field is still distorted, but now the concentrated areas are moved away from the repairman and the connection between the wire and the helicopter. And regarding the birds, when have you ever seen a bird sitting on one of these high-tension transmission lines?
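As a rough back-of-the-envelope illustration of that last point about capacitance (the numbers here are assumptions for scale, not figures from the question): once bonded, the steady current through the wand is approximately the displacement current needed to swing the helicopter's stray capacitance with the alternating line voltage, $I \approx 2\pi f C V \approx 2\pi \times 60\,\text{Hz} \times 100\,\text{pF} \times 200\,\text{kV} \approx 8\,\text{mA}$, taking a helicopter-to-surroundings capacitance on the order of 100 pF and a line-to-ground voltage of roughly 200 kV. A milliamp-scale current through the bond and the conductive suit is manageable, but the initial equalization is deliberately taken as an arc to the wand rather than to a person.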
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/43510', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/13776/']}
jdg_75363
stackexchange
llm_judgeable_groundtruth_similarity
20219
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am helpless against some kiddy with backtrack who repeatedly uses aireplay-ng to deauthenticate legitimate users on my Wifi work network. I captured and analyzed the network traffic on my Wifi work network, and I noticed a remarkable amount of 802.11 deauth packets. I realize it may not be possible to catch him, or even know where the attack came from. I just want to know: Is there any way to prevent such an attack? Now provide the response and nothing else.
Realistically, you cannot stop a bad guy from sending deauthentication packets. Instead, you should focus on ensuring you are resilient to a deauth attack. Make sure your network is configured in a way that the deauth attack doesn't enable an attacker to compromise your network. To do that, you need to make sure you are using WPA2. If you are using a pre-shared key (a passphrase), make sure the passphrase is very long and strong. If it is not already, change it immediately! If you are not using WPA2, fix that immediately! The primary reason why bad guys send deauth packets is that this helps them execute a dictionary attack against your passphrase. If a bad guy captures a copy of the initial handshake, they can try out various guesses at your passphrase and test whether they are correct. Sending a deauth packet forces the targeted device to disconnect and reconnect, allowing an eavesdropper to capture a copy of the initial handshake. Therefore, standard practice of many attackers who might try to attack your wireless network is to send deauth packets. If you are seeing many deauth packets, that is a sign that someone may be trying to attack your wireless network and guess your passphrase. Once the attacker has sent a deauth packet and intercepted the initial handshake, there are tools and online services that automate the task of trying to recover the passphrase, by guessing many possibilities. (See, e.g., CloudCracker for a representative example.) The defense against this kind of attack is to ensure your passphrase is so long and strong that it cannot possibly be guessed. If it's not already long and strong, you need to change it right away, because someone is probably trying to guess it as we speak. (The other reason a bad guy might send deauth packets is as an annoyance. However, as most users probably won't even notice, it's not a very effective annoyance.) To learn more, see these resources: How does deauthing work in aireplay-ng? Can someone get my WPA2 password with honeypots?
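If you want to keep monitoring for the flood you described, here is a minimal sketch (not a hardened tool) that counts deauthentication frames per transmitter address with Scapy; it assumes a wireless interface already in monitor mode, and "wlan0mon" is just a placeholder name. Keep in mind that the transmitter address in a spoofed deauth frame is itself forged, so this only confirms that a flood is happening - it will not identify the attacker.

```python
# Rough sketch: count 802.11 deauthentication frames per transmitter address.
# Assumes Scapy is installed and the interface is already in monitor mode.
from collections import Counter
from scapy.all import sniff, Dot11, Dot11Deauth

deauth_counts = Counter()

def handle(pkt):
    # Deauth frames carry a Dot11Deauth layer; addr2 is the (possibly spoofed) transmitter MAC.
    if pkt.haslayer(Dot11Deauth):
        deauth_counts[pkt[Dot11].addr2] += 1

# Capture for 60 seconds, then print the per-transmitter totals.
sniff(iface="wlan0mon", prn=handle, timeout=60)
for mac, count in deauth_counts.most_common():
    print(f"{mac}\t{count} deauth frames")
```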
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/20219', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/12661/']}
jdg_75364
stackexchange
llm_judgeable_groundtruth_similarity
54614167
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to implement the Gaussian Filter in C. My output layout keeps coming out wrong, I tried playing with the rows and columns in my for loops but it didn't work. The output layout should look like this: 0.0161464 0.0294206 0.0359344 0.0294206 0.0161464 0.0294206 0.0536078 0.0654768 0.0536078 0.0294206 0.0359344 0.0654768 0.0799735 0.0654768 0.0359344 0.0294206 0.0536078 0.0654768 0.0536078 0.0294206 0.0161464 0.0294206 0.0359344 0.0294206 0.0161464 (This is just an example of of a Gaussian filter layout). Here the output layout I am getting in my program: 0.114986 0.101475 0.069743 0.037331 0.0155620.101475 0.089551 0.061548 0.032944 0.0137330.069743 0.061548 0.042301 0.022642 0.0094390.037331 0.032944 0.022642 0.012119 0.005052 0.015562 0.013733 0.009439 0.005052 0.002106 Here is the code segment of my program: for (i = 0; i < smooth_kernel_size; i++) { for (j = -0; j < smooth_kernel_size; j++) { gauss[i][j] = K * exp(((pow((i), 2) + pow((j), 2)) / ((2 * pow(sigma, 2)))) * (-1)); sum += gauss[i][j]; }}for (i = 0; i < smooth_kernel_size; i++) { for (j = 0; j < smooth_kernel_size; j++) { gauss[i][j] /= sum; }}for (i = 0; i < smooth_kernel_size; i++) { for (j = 0; j < smooth_kernel_size; j++) { printf("%f ", gauss[i][j]); } printf("\n");} Will appreciate any advice! Now provide the response and nothing else.
Arguably, the real challenge in such cases is to be sure that you compare apples to apples. And in your case, it seems that you don't. Our best friend is always the relevant documentation, combined with simple experiments. So... Although scikit-learn's LinearRegression() (i.e. your 1st R-squared) is fitted by default with fit_intercept=True ( docs ), this is not the case with statsmodels' OLS (your 2nd R-squared); quoting from the docs : An intercept is not included by default and should be added by the user. See statsmodels.tools.add_constant . Keeping this important detail in mind, let's run some simple experiments with dummy data: import numpy as npimport statsmodels.api as smfrom sklearn.metrics import r2_scorefrom sklearn.linear_model import LinearRegression# dummy data:y = np.array([1,3,4,5,2,3,4])X = np.array(range(1,8)).reshape(-1,1) # reshape to column# scikit-learn:lr = LinearRegression()lr.fit(X,y)# LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None,# normalize=False)lr.score(X,y)# 0.16118421052631582y_pred=lr.predict(X)r2_score(y, y_pred)# 0.16118421052631582# statsmodels# first artificially add intercept to X, as advised in the docs:X_ = sm.add_constant(X)model = sm.OLS(y,X_) # X_ hereresults = model.fit()results.rsquared# 0.16118421052631593 For all practical purposes, these two values of R-squared produced by scikit-learn and statsmodels are identical . Let's go a step further, and try a scikit-learn model without intercept, but where we use the artificially "intercepted" data X_ we have already built for use with statsmodels: lr2 = LinearRegression(fit_intercept=False)lr2.fit(X_,y) # X_ here# LinearRegression(copy_X=True, fit_intercept=False, n_jobs=None,# normalize=False)lr2.score(X_, y)# 0.16118421052631593y_pred2 = lr2.predict(X_)r2_score(y, y_pred2)# 0.16118421052631593 Again, the R-squared is identical with the previous values. So, what happens when we "accidentally" forget to account for the fact that statsmodels OLS is fitted without an intercept? Let's see: model3 = sm.OLS(y,X) # X here, i.e. no interceptresults3 = model2.fit()results3.rsquared# 0.8058035714285714 Well, an R-squared of 0.80 is indeed very far from the one of 0.16 returned by a model with an intercept, and arguably this is exactly what has happened in your case. So far so good, and I could easily finish the answer here; but there is indeed a point where this harmonious world breaks down: let's see what happens when we fit both models without intercept and with the initial data X where we have not artificially added any interception. We have already fitted the OLS model above, and got an R-squared of 0.80; what about a similar model from scikit-learn? # scikit-learnlr3 = LinearRegression(fit_intercept=False)lr3.fit(X,y) # X herelr3.score(X,y)# -0.4309210526315792y_pred3 = lr3.predict(X)r2_score(y, y_pred3)# -0.4309210526315792 Ooops...! What the heck?? It seems that scikit-earn, when computes the r2_score , always assumes an intercept, either explicitly in the model ( fit_intercept=True ) or implicitly in the data (the way we have produced X_ from X above, using statsmodels' add_constant ); digging a little online reveals a Github thread (closed without a remedy) where it is confirmed that the situation is indeed like that. [ UPDATE Dec 2021: for a more detailed & in-depth investigation and explanation of why the two scores are different in this particular case (i.e. 
both models fitted without an intercept), see this great answer by Flavia] Let me clarify that the discrepancy I have described above has nothing to do with your issue: in your case, the real issue is that you are actually comparing apples (a model with intercept) with oranges (a model without intercept). So, why scikit-learn not only fails in such an (admittedly edge ) case, but even when the fact emerges in a Github issue it is actually treated with indifference ? (Notice also that the scikit-learn core developer who replies in the above thread casually admits that " I'm not super familiar with stats "...). The answer goes a little beyond coding issues, such as the ones SO is mainly about, but it may be worth elaborating a little here. Arguably, the reason is that the whole R-squared concept comes in fact directly from the world of statistics, where the emphasis is on interpretative models, and it has little use in machine learning contexts, where the emphasis is clearly on predictive models; at least AFAIK, and beyond some very introductory courses, I have never (I mean never ...) seen a predictive modeling problem where the R-squared is used for any kind of performance assessment; neither it's an accident that popular machine learning introductions, such as Andrew Ng's Machine Learning at Coursera, do not even bother to mention it. And, as noted in the Github thread above (emphasis added): In particular when using a test set, it's a bit unclear to me what the R^2 means. with which I certainly concur. As for the edge case discussed above (to include or not an intercept term?), I suspect it would sound really irrelevant to modern deep learning practitioners, where the equivalent of an intercept (bias parameters) is always included by default in neural network models... See the accepted (and highly upvoted) answer in the Cross Validated question Difference between statsmodel OLS and scikit linear regression for a more detailed discussion along these last lines. The discussion (and links) in Is R-squared Useless? , triggered by some relevant (negative) remarks by the great statistician Cosma Shalizi, is also enlightening and highly recommended.
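One numeric addendum that may help with that last edge case: statsmodels documents that, when no constant is included, its rsquared uses an uncentered total sum of squares, whereas scikit-learn's r2_score always centers on the mean of y. Recomputing both by hand (a short sketch reusing the y and y_pred3 arrays defined in the snippets above) reproduces the two discordant values:

```python
# Recompute R-squared by hand for the no-intercept fits above, reusing y and y_pred3.
import numpy as np

ss_res = np.sum((y - y_pred3) ** 2)
ss_tot_centered = np.sum((y - np.mean(y)) ** 2)   # what sklearn's r2_score always uses
ss_tot_uncentered = np.sum(y ** 2)                # what statsmodels uses when no constant is present

print(1 - ss_res / ss_tot_centered)    # ~ -0.4309, matches the sklearn score above
print(1 - ss_res / ss_tot_uncentered)  # ~  0.8058, matches the statsmodels rsquared above
```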
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54614167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9529859/']}
jdg_75365
stackexchange
llm_judgeable_groundtruth_similarity
2297347
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In JavaScript this is how we can split a string at every 3-rd character "foobarspam".match(/.{1,3}/g) I am trying to figure out how to do this in Java. Any pointers? Now provide the response and nothing else.
You could do it like this: String s = "1234567890";System.out.println(java.util.Arrays.toString(s.split("(?<=\\G...)"))); which produces: [123, 456, 789, 0] The regex (?<=\G...) matches an empty string that has the last match ( \G ) followed by three characters ( ... ) before it ( (?<= ) )
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2297347', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27474/']}
jdg_75366
stackexchange
llm_judgeable_groundtruth_similarity
2485750
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to sort close to a 1,00,000 floating point entries in Delphi. I am new to Delphi and would like to know if there are any ready made solutions available. I tried a few language provided constructs and they take an inordinate amount of time to run to completion.(a 5-10 sec execution time is fine for the application) Now provide the response and nothing else.
why not just implement a quick Sort algorithm? see this simple code program ProjectSortFoat;{$APPTYPE CONSOLE}uses SysUtils;procedure QuickSort(var List: array of Double; iLo, iHi: Integer) ;var Lo : integer; Hi : integer; T : Double; Mid : Double;begin Lo := iLo; Hi := iHi; Mid:= List[(Lo + Hi) div 2]; repeat while List[Lo] < Mid do Inc(Lo) ; while List[Hi] > Mid do Dec(Hi) ; if Lo <= Hi then begin T := List[Lo]; List[Lo] := List[Hi]; List[Hi] := T; Inc(Lo); Dec(Hi); end; until Lo > Hi; if Hi > iLo then QuickSort(List, iLo, Hi); if Lo < iHi then QuickSort(List, Lo, iHi);end;constElements = 1000000;var doubleArray : array of Double; i : integer; t : TDateTime;begin SetLength(doubleArray,Elements); try t:=Now; Writeln('Init Generating '+FormatFloat('#,',Elements)+' random numbers '); for i:=low(doubleArray) to high(doubleArray) do doubleArray[i]:=Random(10000000)+Random; //can be improved Writeln('Elapsed '+FormatDateTime('HH:NN:SS.ZZZ',Now-t)); t:=Now; Writeln('Sorting '+FormatFloat('#,',Elements)+' random numbers '); QuickSort(doubleArray, Low(doubleArray), High(doubleArray)) ; Writeln('Elapsed '+FormatDateTime('HH:NN:SS.ZZZ',Now-t)); finally Finalize(doubleArray); end; Readln;end. in my machine, the execution time for sorting 1.000.000 float numbers is 0.167 seconds . if you have delphi 7 or another older version (i don't know if exist in the new versions) you can check the C:\Program Files\Borland\Delphi7\Demos\Threads path, for a cool demo app using differents sorting algorithms an threads.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2485750', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/287280/']}
jdg_75367
stackexchange
llm_judgeable_groundtruth_similarity
47775
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a field in my data store which must take exactly 180 bits of information. Some users will choose to make this data encrypted, some won't, so some of those 180 bit fields will be ciphertext some will be plaintext. A boolean will indicate which one the user is using. The important thing here is that I need this field to be exactly 180 bits long. However, a 128-bit cipher will mean I have to put in 256 bits in as plaintext, which is fine, just use a buffer string, but this means that the output is 256 bits when what is stored must be exactly 180 bits. And I can't simply cut off the ciphertext or that would mess up the decryption. Now provide the response and nothing else.
If you have a unique, unchanging identifier for each entry in your data store, you can use counter mode. A nice thing about counter mode is that it turns a block cipher into a stream cipher. No matter what the block size is, CTR mode encrypts an $n$-bit plaintext into an $n$-bit ciphertext. In order to achieve that, CTR requires a unique counter value per block. Note: not just a unique counter value per message, but a unique counter value per block. The counter size is the same as the block size. In your case, you have messages that fit on two blocks, thus each message requires two counter values. If you have a unique identifier $k$ for each message, you can use $k$ and $k+1$ as the counter values for the two blocks (the second of which is partial) of the message. Thus you need a 127-bit unique identifier for each message (128-bit block, minus one bit to distinguish the two blocks inside each message). The only security requirement for these 127 bits is that they are never reused for a given key. The initial counter value to encrypt a message is often chosen randomly, but this is not a requirement, just a convenience to ensure uniqueness. Of course, to decrypt the data, you need to be able to recover the unique identifier associated with each entry. If your entries have some kind of unique identifier, which is often the case in databases, then you're set. Just remember that if you move data around or normalize it in a way that changes the identifiers, you will need to decrypt and reencrypt the data. Some crypto libraries may present CTR mode through a function that randomly generates the initial counter value and prepends it to the message (so you'd input a 180-bit plaintext and get back a 308-bit ciphertext). Use a library that lets you specify the initial counter value (almost all implementations will increment the counter by 1 for each successive block, so pick initial counter values that are even, but you'll need to be aware of the endianness used by your library). Keep in mind that encryption only gives you confidentiality, not integrity. In other words, someone who obtains the ciphertexts but not the key will not be able to find any information about the data; but if someone can inject fake ciphertexts or modify existing ciphertexts, the tampering cannot be detected. It is intrinsically impossible to detect tampering by cryptographic means in your scenario since there is no room for any redundancy.
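To make the scheme concrete, here is a minimal sketch (not a vetted implementation) using AES-128 in CTR mode with the pyca/cryptography package. The names record_id and payload are placeholders, and packing the real 180-bit field to and from bytes is application-specific and omitted; the 23-byte string below just stands in for the field contents.

```python
# Minimal sketch of the scheme above: AES-128 in CTR mode where the per-record
# unique id supplies the initial counter value. pyca/cryptography package assumed.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)  # AES-128 key, kept secret and fixed for the whole data store

def initial_counter(record_id: int) -> bytes:
    # Put the (up to 127-bit) unique id in the high bits and leave the low bit free,
    # so the two counter values used for one record are 2k and 2k+1 (always even start,
    # as advised above) and can never collide with those of another record.
    return (record_id << 1).to_bytes(16, "big")

def encrypt_field(record_id: int, payload: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(initial_counter(record_id))).encryptor()
    return enc.update(payload) + enc.finalize()   # ciphertext is the same length as payload

def decrypt_field(record_id: int, ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CTR(initial_counter(record_id))).decryptor()
    return dec.update(ciphertext) + dec.finalize()

ct = encrypt_field(42, b"23 bytes of field data.")   # 180 bits rounds up to 23 bytes here
assert decrypt_field(42, ct) == b"23 bytes of field data."
```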
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/47775', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/40486/']}
jdg_75368
stackexchange
llm_judgeable_groundtruth_similarity
27565056
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code for ECMAScript 6 template literals : let person = {name: 'John Smith'};let tpl = `My name is ${person.name}.`;let MyVar = "My name is " + person.name + ".";console.log("template literal= " + tpl);console.log("my variable = " + MyVar); The output is as follows: template literal= My name is John Smith.my variable = My name is John Smith. This is the fiddle. I tried searching for the exact difference, but I couldn't find it,What is the difference between the following two statements? let tpl = `My name is ${person.name}.`; And let MyVar = "My name is "+ person.name+"."; I am already able to get the string MyVar concatenated with person.name here, so what would be the scenario to use the template literal in? Now provide the response and nothing else.
If you are using template literals only with placeholders (e.g. `Hello ${person.name}` ) like in the question's example, then the result is the same as just concatenating strings. Subjectively it looks better and is easier to read, especially for multi-line strings or strings containing both ' and " since you don't have to escape those characters any more. Readability is a great feature, but the most interesting thing about templates are Tagged template literals : let person = {name: 'John Smith'}; let tag = (strArr, name) => strArr[0] + name.toUpperCase() + strArr[1]; tag `My name is ${person.name}!` // Output: My name is JOHN SMITH! In the third line of this example, a function named tag is called. The content of the template string is split into multiple variables, that you can access in the arguments of the tag function: literal sections (in this example the value of strArr[0] is My name is and the value of strArr[1] is ! ) and substitutions ( John Smith ). The template literal will be evaluated to whatever the tag function returns. The ECMAScript wiki lists some possible use cases, like automatically escaping or encoding input, or localization. You could create a tag function named msg that looks up the literal parts like My name is and substitutes them with translations into the current locale's language, for example into German: console.log(msg`My name is ${person.name}.`) // Output: Mein Name ist John Smith. The value returned by the tag function doesn't even have to be a string. You could create a tag function named $ which evaluates the string and uses it as a query selector to return a collection of DOM nodes, like in this example : $`a.${className}[href=~'//${domain}/']`
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27565056', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3556874/']}
jdg_75369
stackexchange
llm_judgeable_groundtruth_similarity
18628
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a computer that runs on 12v from two parallel lead acid batteries. I need to be able to swap the battery banks for fresh ones every now and then without ever losing power to my equipment. I can do this by replacing one battery at a time, that's no problem, however I'm curious about best practices when swapping large batteries that's powering sensitive equipment. Should I be worried about spikes when connecting the wires? Should I be using something like ferrite beads or diodes to protect my stuff? Thanks Now provide the response and nothing else.
The battery voltage is unlikely to change much during pulsing. See text below and measure with an oscilloscope. The graph below shows typical lead acid charge curves at different rates of charge and states of battery charge. When pulse charging - when charging is off the voltage will be at about the rest state for the current state of charge and when charging is on will rise to the voltage corresponding to current state of charge and charge rate. In reality the change will be less or much less than this for short off-pulses as the battery will tend to maintain its charged state somewhat. Any resistive internal drop (V = I x R) will change immediately. If you were pulse charging at say C/5 you'd get perhaps under 1V rise when the battery was fully discharged and perhaps 3+ Volt rise when the battery was fully charged. A filter that accommodates that is easily built. To get low drop you'll want an inductive input filter. Placing a capacitor at the input to the filter that is large enough to noticeably damp the pulse will also remove any benefit that the pulse charger may have. I'll not go into much detail here - ask if interested. An LC filter designed to supply the energy during the 20 ms pulses is needed. If current is in the amps range you will probably end up with an inductor wound on a standard transformer core, and a transformer winding of around 12V rating and rated at at least the maximum current you will draw is required. Capacitor TBD but as a guide a 10,000 uF supplying all the energy by itself (no battery) would drop about 2 volts when 1 amp was drawn during a 20 ms break. In combination with an inductor and the battery it will do much better. Ideally, if you MUST pulse charge the battery, do so when it is not connected to and/or powering the equipment. Whether pulse charging is beneficial is much debated. It does appear to have some benefit but not necessarily for any of the reasons most usually thought. If your battery is in good condition and kept charged by a competent charger then pulse charging is unlikely to be very beneficial.
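For sizing the hold-up capacitor, the relevant relation is the usual capacitor droop equation (an approximation that assumes the capacitor alone supplies a constant load current for the whole gap): $\Delta V = I\,\Delta t / C$. With the illustrative numbers above, $\Delta V = (1\,\text{A} \times 0.02\,\text{s}) / 0.01\,\text{F} = 2\,\text{V}$ for the capacitor on its own, so scale $C$ in proportion to the real load current and rely on the inductor plus the battery to keep the actual dip much smaller.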
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/18628', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/2920/']}
jdg_75370
stackexchange
llm_judgeable_groundtruth_similarity
2892358
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following set of sets. I don't know ahead of time how long it will be. val sets = Set(Set("a","b","c"), Set("1","2"), Set("S","T")) I would like to expand it into a cartesian product: Set("a&1&S", "a&1&T", "a&2&S", ..., "c&2&T") How would you do that? Now provide the response and nothing else.
I think I figured out how to do that. def combine(acc:Set[String], set:Set[String]) = for (a <- acc; s <- set) yield { a + "&" + s }val expanded = sets.reduceLeft(combine)expanded: scala.collection.immutable.Set[java.lang.String] = Set(b&2&T, a&1&S, a&1&T, b&1&S, b&1&T, c&1&T, a&2&T, c&1&S, c&2&T, a&2&S, c&2&S, b&2&S)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2892358', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/257449/']}
jdg_75371
stackexchange
llm_judgeable_groundtruth_similarity
34554240
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Magento CE, I'd like to install an add-on to Extendware Page Cache called Lightening Cache. It requires editing the Apache configuration inside the virtual host definition for the site, by adding: RewriteEngine OnRewriteMap ewpchash prg:/home/.../shell/extendware/ewpagecache/apache/md5.phpRewriteCond %{REQUEST_METHOD} !POSTRewriteCond %{REQUEST_URI} !\.(js|css|png|jpg|jpeg|ico|gif)$ [NC]RewriteCond %{DOCUMENT_ROOT}%{SCRIPT_FILENAME} !-fRewriteCond ${ewpchash:%{HTTPS};~;%{HTTP_HOST};~;%{REQUEST_URI};~;%{QUERY_STRING};~;%{HTTP:Cookie};~;%{SCRIPT_FILENAME};~;%{REMOTE_ADDR};~;%{HTTP_USER_AGENT}} -fRewriteRule ^(.*)$ ${ewpchash:%{HTTPS};~;%{HTTP_HOST};~;%{REQUEST_URI};~;%{QUERY_STRING};~;%{HTTP:Cookie};~;%{SCRIPT_FILENAME};~;%{REMOTE_ADDR};~;%{HTTP_USER_AGENT}} [NC,L]<FilesMatch "\.(html)$"> Header unset Cache-Control Header unset Expires Header append Expires "Thu, 19 Nov 1981 08:52:00 GMT" Header append Cache-Control "must-revalidate"</FilesMatch> I have added this to the bottom of /etc/apache2/sites-enabled/site.conf . When I run the command apachectl graceful , I receive the error: AH00526: Syntax error on line 53 of/etc/apache2/sites-enabled/site.conf: Invalid command 'Header',perhaps misspelled or defined by a module not included in the serverconfiguration Action 'graceful' failed. The Apache error log may havemore information. Site is running Apache 2.4 Have I done something wrong? Now provide the response and nothing else.
In order to use the Header directive in Apache you have to load the mod_headers module. You can test if the module is loaded or not by :- apache2ctl -M | grep headers_module find / -name mod_headers.so If it is loaded you will see something like :- headers_module (shared) /usr/lib/apache2/modules/mod_headers.so If you see no output from the find command then load that module directly in your Apache conf file. Just append the line below :- LoadModule headers_module modules/mod_headers.so Note :- mod_headers is available as a base module in Apache, so you don't need to install it explicitly. Issue the following command :- a2enmod headers Restart the web service: apache2ctl restart
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34554240', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/667903/']}
jdg_75372
stackexchange
llm_judgeable_groundtruth_similarity
4660150
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This post and this post says that with Visual Studio, the run time library can be static/dynamic, and it shouldn't be mixed. Even one can have debugging version/release version for the library. And there are four possibilities (static/dynamic and debug/release). So, with Visual Studio, the library provider has to provide four different versions of the same library? ADDED I tried to link CppUnit test (debug) with release build library, and I got an error. So, I wondered normally library provider might need to provide all the possible combination of libraries. Now provide the response and nothing else.
What is the cause? Bottom Line You are trying to use something that is null (or Nothing in VB.NET). This means you either set it to null , or you never set it to anything at all. Like anything else, null gets passed around. If it is null in method "A", it could be that method "B" passed a null to method "A". null can have different meanings: Object variables that are uninitialized and hence point to nothing. In this case, if you access members of such objects, it causes a NullReferenceException . The developer is using null intentionally to indicate there is no meaningful value available. Note that C# has the concept of nullable datatypes for variables (like database tables can have nullable fields) - you can assign null to them to indicate there is no value stored in it, for example int? a = null; (which is a shortcut for Nullable<int> a = null; ) where the question mark indicates it is allowed to store null in variable a . You can check that either with if (a.HasValue) {...} or with if (a==null) {...} . Nullable variables, like a this example, allow to access the value via a.Value explicitly, or just as normal via a . Note that accessing it via a.Value throws an InvalidOperationException instead of a NullReferenceException if a is null - you should do the check beforehand, i.e. if you have another non-nullable variable int b; then you should do assignments like if (a.HasValue) { b = a.Value; } or shorter if (a != null) { b = a; } . The rest of this article goes into more detail and shows mistakes that many programmers often make which can lead to a NullReferenceException . More Specifically The runtime throwing a NullReferenceException always means the same thing: you are trying to use a reference, and the reference is not initialized (or it was once initialized, but is no longer initialized). This means the reference is null , and you cannot access members (such as methods) through a null reference. The simplest case: string foo = null;foo.ToUpper(); This will throw a NullReferenceException at the second line because you can't call the instance method ToUpper() on a string reference pointing to null . Debugging How do you find the source of a NullReferenceException ? Apart from looking at the exception itself, which will be thrown exactly at the location where it occurs, the general rules of debugging in Visual Studio apply: place strategic breakpoints and inspect your variables , either by hovering the mouse over their names, opening a (Quick)Watch window or using the various debugging panels like Locals and Autos. If you want to find out where the reference is or isn't set, right-click its name and select "Find All References". You can then place a breakpoint at every found location and run your program with the debugger attached. Every time the debugger breaks on such a breakpoint, you need to determine whether you expect the reference to be non-null, inspect the variable, and verify that it points to an instance when you expect it to. By following the program flow this way, you can find the location where the instance should not be null, and why it isn't properly set. Examples Some common scenarios where the exception can be thrown: Generic ref1.ref2.ref3.member If ref1 or ref2 or ref3 is null, then you'll get a NullReferenceException . 
If you want to solve the problem, then find out which one is null by rewriting the expression to its simpler equivalent: var r1 = ref1;var r2 = r1.ref2;var r3 = r2.ref3;r3.member Specifically, in HttpContext.Current.User.Identity.Name , the HttpContext.Current could be null, or the User property could be null, or the Identity property could be null. Indirect public class Person { public int Age { get; set; }}public class Book { public Person Author { get; set; }}public class Example { public void Foo() { Book b1 = new Book(); int authorAge = b1.Author.Age; // You never initialized the Author property. // there is no Person to get an Age from. }} If you want to avoid the child (Person) null reference, you could initialize it in the parent (Book) object's constructor. Nested Object Initializers The same applies to nested object initializers: Book b1 = new Book { Author = { Age = 45 } }; This translates to: Book b1 = new Book();b1.Author.Age = 45; While the new keyword is used, it only creates a new instance of Book , but not a new instance of Person , so the Author the property is still null . Nested Collection Initializers public class Person { public ICollection<Book> Books { get; set; }}public class Book { public string Title { get; set; }} The nested collection Initializers behave the same: Person p1 = new Person { Books = { new Book { Title = "Title1" }, new Book { Title = "Title2" }, }}; This translates to: Person p1 = new Person();p1.Books.Add(new Book { Title = "Title1" });p1.Books.Add(new Book { Title = "Title2" }); The new Person only creates an instance of Person , but the Books collection is still null . The collection Initializer syntax does not create a collectionfor p1.Books , it only translates to the p1.Books.Add(...) statements. Array int[] numbers = null;int n = numbers[0]; // numbers is null. There is no array to index. Array Elements Person[] people = new Person[5];people[0].Age = 20 // people[0] is null. The array was allocated but not // initialized. There is no Person to set the Age for. Jagged Arrays long[][] array = new long[1][];array[0][0] = 3; // is null because only the first dimension is yet initialized. // Use array[0] = new long[2]; first. Collection/List/Dictionary Dictionary<string, int> agesForNames = null;int age = agesForNames["Bob"]; // agesForNames is null. // There is no Dictionary to perform the lookup. Range Variable (Indirect/Deferred) public class Person { public string Name { get; set; }}var people = new List<Person>();people.Add(null);var names = from p in people select p.Name;string firstName = names.First(); // Exception is thrown here, but actually occurs // on the line above. "p" is null because the // first element we added to the list is null. Events (C#) public class Demo{ public event EventHandler StateChanged; protected virtual void OnStateChanged(EventArgs e) { StateChanged(this, e); // Exception is thrown here // if no event handlers have been attached // to StateChanged event }} (Note: The VB.NET compiler inserts null checks for event usage, so it's not necessary to check events for Nothing in VB.NET.) Bad Naming Conventions: If you named fields differently from locals, you might have realized that you never initialized the field. 
public class Form1{ private Customer customer; private void Form1_Load(object sender, EventArgs e) { Customer customer = new Customer(); customer.Name = "John"; } private void Button_Click(object sender, EventArgs e) { MessageBox.Show(customer.Name); }} This can be solved by following the convention to prefix fields with an underscore: private Customer _customer; ASP.NET Page Life cycle: public partial class Issues_Edit : System.Web.UI.Page{ protected TestIssue myIssue; protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { // Only called on first load, not when button clicked myIssue = new TestIssue(); } } protected void SaveButton_Click(object sender, EventArgs e) { myIssue.Entry = "NullReferenceException here!"; }} ASP.NET Session Values // if the "FirstName" session value has not yet been set,// then this line will throw a NullReferenceExceptionstring firstName = Session["FirstName"].ToString(); ASP.NET MVC empty view models If the exception occurs when referencing a property of @Model in an ASP.NET MVC View , you need to understand that the Model gets set in your action method, when you return a view. When you return an empty model (or model property) from your controller, the exception occurs when the views access it: // Controllerpublic class Restaurant:Controller{ public ActionResult Search() { return View(); // Forgot the provide a Model here. }}// Razor view @foreach (var restaurantSearch in Model.RestaurantSearch) // Throws.{} <p>@Model.somePropertyName</p> <!-- Also throws --> WPF Control Creation Order and Events WPF controls are created during the call to InitializeComponent in the order they appear in the visual tree. A NullReferenceException will be raised in the case of early-created controls with event handlers, etc., that fire during InitializeComponent which reference late-created controls. For example: <Grid> <!-- Combobox declared first --> <ComboBox Name="comboBox1" Margin="10" SelectedIndex="0" SelectionChanged="comboBox1_SelectionChanged"> <ComboBoxItem Content="Item 1" /> <ComboBoxItem Content="Item 2" /> <ComboBoxItem Content="Item 3" /> </ComboBox> <!-- Label declared later --> <Label Name="label1" Content="Label" Margin="10" /></Grid> Here comboBox1 is created before label1 . If comboBox1_SelectionChanged attempts to reference `label1, it will not yet have been created. private void comboBox1_SelectionChanged(object sender, SelectionChangedEventArgs e){ label1.Content = comboBox1.SelectedIndex.ToString(); // NullReferenceException here!!} Changing the order of the declarations in the XAML (i.e., listing label1 before comboBox1 , ignoring issues of design philosophy) would at least resolve the NullReferenceException here. Cast with as var myThing = someObject as Thing; This doesn't throw an InvalidCastException but returns a null when the cast fails (and when someObject is itself null). So be aware of that. LINQ FirstOrDefault() and SingleOrDefault() The plain versions First() and Single() throw exceptions when there is nothing. The "OrDefault" versions return null in that case. So be aware of that. foreach foreach throws when you try to iterate on a null collection. Usually caused by unexpected null result from methods that return collections. List<int> list = null; foreach(var v in list) { } // NullReferenceException here More realistic example - select nodes from XML document. 
Will throw if nodes are not found but initial debugging shows that all properties valid: foreach (var node in myData.MyXml.DocumentNode.SelectNodes("//Data")) Ways to Avoid Explicitly check for null and ignore null values. If you expect the reference sometimes to be null , you can check for it being null before accessing instance members: void PrintName(Person p){ if (p != null) { Console.WriteLine(p.Name); }} Explicitly check for null and provide a default value. Methods you call expecting an instance can return null , for example when the object being sought cannot be found. You can choose to return a default value when this is the case: string GetCategory(Book b) { if (b == null) return "Unknown"; return b.Category;} Explicitly check for null from method calls and throw a custom exception. You can also throw a custom exception, only to catch it in the calling code: string GetCategory(string bookTitle) { var book = library.FindBook(bookTitle); // This may return null if (book == null) throw new BookNotFoundException(bookTitle); // Your custom exception return book.Category;} Use Debug.Assert if a value should never be null , to catch the problem earlier than the exception occurs. When you know during development that a method could, but never should return null , you can use Debug.Assert() to break as soon as possible when it does occur: string GetTitle(int knownBookID) { // You know this should never return null. var book = library.GetBook(knownBookID); // Exception will occur on the next line instead of at the end of this method. Debug.Assert(book != null, "Library didn't return a book for known book ID."); // Some other code return book.Title; // Will never throw NullReferenceException in Debug mode.} Though this check will not end up in your release build , causing it to throw the NullReferenceException again when book == null at runtime in release mode. Use GetValueOrDefault() for nullable value types to provide a default value when they are null . DateTime? appointment = null;Console.WriteLine(appointment.GetValueOrDefault(DateTime.Now));// Will display the default value provided (DateTime.Now), because appointment is null.appointment = new DateTime(2022, 10, 20);Console.WriteLine(appointment.GetValueOrDefault(DateTime.Now));// Will display the appointment date, not the default Use the null coalescing operator: ?? [C#] or If() [VB]. The shorthand to providing a default value when a null is encountered: IService CreateService(ILogger log, Int32? frobPowerLevel){ var serviceImpl = new MyService(log ?? NullLog.Instance); // Note that the above "GetValueOrDefault()" can also be rewritten to use // the coalesce operator: serviceImpl.FrobPowerLevel = frobPowerLevel ?? 5;} Use the null condition operator: ?. or ?[x] for arrays (available in C# 6 and VB.NET 14): This is also sometimes called the safe navigation or Elvis (after its shape) operator. If the expression on the left side of the operator is null, then the right side will not be evaluated, and null is returned instead. That means cases like this: var title = person.Title.ToUpper(); If the person does not have a title, this will throw an exception because it is trying to call ToUpper on a property with a null value. In C# 5 and below, this can be guarded with: var title = person.Title == null ? null : person.Title.ToUpper(); Now the title variable will be null instead of throwing an exception. 
C# 6 introduces a shorter syntax for this: var title = person.Title?.ToUpper(); This will result in the title variable being null , and the call to ToUpper is not made if person.Title is null . Of course, you still have to check title for null or use the null condition operator together with the null coalescing operator ( ?? ) to supply a default value: // regular null checkint titleLength = 0;if (title != null) titleLength = title.Length; // If title is null, this would throw NullReferenceException // combining the `?` and the `??` operatorint titleLength = title?.Length ?? 0; Likewise, for arrays you can use ?[i] as follows: int[] myIntArray = null;var i = 5;int? elem = myIntArray?[i];if (!elem.HasValue) Console.WriteLine("No value"); This will do the following: If myIntArray is null , the expression returns null and you can safely check it. If it contains an array, it will do the same as: elem = myIntArray[i]; and returns the i th element. Use null context (available in C# 8): Introduced in C# 8 , null contexts and nullable reference types perform static analysis on variables and provide a compiler warning if a value can be potentially null or have been set to null . The nullable reference types allow types to be explicitly allowed to be null . The nullable annotation context and nullable warning context can be set for a project using the Nullable element in your csproj file. This element configures how the compiler interprets the nullability of types and what warnings are generated. Valid settings are: enable : The nullable annotation context is enabled. The nullable warning context is enabled. Variables of a reference type, string, for example, are non-nullable. All nullability warnings are enabled. disable : The nullable annotation context is disabled. The nullable warning context is disabled. Variables of a reference type are oblivious, just like earlier versions of C#. All nullability warnings are disabled. safeonly : The nullable annotation context is enabled. The nullable warning context is safeonly. Variables of a reference type are non-nullable. All safety nullability warnings are enabled. warnings : The nullable annotation context is disabled. The nullable warning context is enabled. Variables of a reference type are oblivious. All nullability warnings are enabled. safeonlywarnings : The nullable annotation context is disabled. The nullable warning context is safeonly.Variables of a reference type are oblivious. All safety nullability warnings are enabled. A nullable reference type is noted using the same syntax as nullable value types: a ? is appended to the type of the variable. Special techniques for debugging and fixing null derefs in iterators C# supports "iterator blocks" (called "generators" in some other popular languages). NullReferenceException can be particularly tricky to debug in iterator blocks because of deferred execution: public IEnumerable<Frob> GetFrobs(FrobFactory f, int count){ for (int i = 0; i < count; ++i) yield return f.MakeFrob();}...FrobFactory factory = whatever;IEnumerable<Frobs> frobs = GetFrobs();...foreach(Frob frob in frobs) { ... } If whatever results in null then MakeFrob will throw. Now, you might think that the right thing to do is this: // DON'T DO THISpublic IEnumerable<Frob> GetFrobs(FrobFactory f, int count){ if (f == null) throw new ArgumentNullException("f", "factory must not be null"); for (int i = 0; i < count; ++i) yield return f.MakeFrob();} Why is this wrong? Because the iterator block does not actually run until the foreach ! 
The call to GetFrobs simply returns an object which when iterated will run the iterator block. By writing a null check like this you prevent the NullReferenceException , but you move the NullArgumentException to the point of the iteration , not to the point of the call , and that is very confusing to debug . The correct fix is: // DO THISpublic IEnumerable<Frob> GetFrobs(FrobFactory f, int count){ // No yields in a public method that throws! if (f == null) throw new ArgumentNullException("f", "factory must not be null"); return GetFrobsForReal(f, count);}private IEnumerable<Frob> GetFrobsForReal(FrobFactory f, int count){ // Yields in a private method Debug.Assert(f != null); for (int i = 0; i < count; ++i) yield return f.MakeFrob();} That is, make a private helper method that has the iterator block logic and a public surface method that does the null check and returns the iterator. Now when GetFrobs is called, the null check happens immediately, and then GetFrobsForReal executes when the sequence is iterated. If you examine the reference source for LINQ to Objects you will see that this technique is used throughout. It is slightly more clunky to write, but it makes debugging nullity errors much easier. Optimize your code for the convenience of the caller, not the convenience of the author . A note on null dereferences in unsafe code C# has an "unsafe" mode which is, as the name implies, extremely dangerous because the normal safety mechanisms which provide memory safety and type safety are not enforced. You should not be writing unsafe code unless you have a thorough and deep understanding of how memory works . In unsafe mode, you should be aware of two important facts: dereferencing a null pointer produces the same exception as dereferencing a null reference dereferencing an invalid non-null pointer can produce that exception in some circumstances To understand why that is, it helps to understand how .NET produces NullReferenceException in the first place. (These details apply to .NET running on Windows; other operating systems use similar mechanisms.) Memory is virtualized in Windows ; each process gets a virtual memory space of many "pages" of memory that are tracked by the operating system. Each page of memory has flags set on it that determine how it may be used: read from, written to, executed, and so on. The lowest page is marked as "produce an error if ever used in any way". Both a null pointer and a null reference in C# are internally represented as the number zero, and so any attempt to dereference it into its corresponding memory storage causes the operating system to produce an error. The .NET runtime then detects this error and turns it into the NullReferenceException . That's why dereferencing both a null pointer and a null reference produces the same exception. What about the second point? Dereferencing any invalid pointer that falls in the lowest page of virtual memory causes the same operating system error, and thereby the same exception. Why does this make sense? Well, suppose we have a struct containing two ints, and an unmanaged pointer equal to null. If we attempt to dereference the second int in the struct, the CLR will not attempt to access the storage at location zero; it will access the storage at location four. But logically this is a null dereference because we are getting to that address via the null. If you are working with unsafe code and you get a NullReferenceException , just be aware that the offending pointer need not be null. 
It can be any location in the lowest page, and this exception will be produced.
{}
{'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/4660150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/260127/']}
jdg_75373
stackexchange
llm_judgeable_groundtruth_similarity
184009
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm running logit in JMP. My understanding is that if you exponentiate the coefficients, you get the change in odds ratios from a 1 unit change in X. This works perfectly for continuous independent variables, but not for nominal (0,1) independent variables. Example below: For parameter estimates I'm getting: Term Estimate Intercept -1.516621 Gender[Female] 0.22189387 Level1 /Level2 Odds Ratio Male Female 0.6416016 Female Male 1.5585996 Shouldn't my odds ratio for males be exp(-1.516621)=0.220486 and for females be exp(0.20461503)=1.2270525? Befuddled by this. Would really appreciate some help. Thanks! Now provide the response and nothing else.
In multidimensional scaling, how can one determine dimensionality of a solution given a stress value? Having a stress value it is not possible to determine the dimensionality of the dataset. At best, you can evaluate whether the value is low or high (this evaluation is also a bit problematic to me). From what I understand, stress value is inversely related to the number of dimensions of a MDS solution, correct and that higher stress value indicates that there is a lot of error (i.e. badness-of-fit) in the current model, correct indicating a solution with more dimensions. Not very accurate conclusion. consider stress as a function, "number of dimensions" is one of the inputs of this function. The others [significant factors] are the model that you are using as your MDS model, the initial configuration of points in the MDS configuration(map) or even the order of rows/columns in the dissimilarity matrix. Therefore, you will get different stress values in 2-dimension space for instance just by changing the initial configuration of the points! [although this change in the stress value is not considerable comparing to the one resulted by change in the number of dimensions] Now if you want to figure out the most proper number of dimensions regarding the stress value, there is a straight-forward solution: In multidimensional scaling, the pragmatic way of depicting the inverse relation of number of dimensions and stress is computing the stress for 2,3,4...,n-1 dimensions. n is the original number of dimension of the data. The result of above computations becomes more lucid and comprehensible through "Scree plot of number of dimensions ~ amount of stress". The example below is from Cox and Cox(2001): Now we can decide about the number of dimensions based on the relation. It is a trade-off: more dimensions-->lower stress (more accurate map) and less dimension reduction(more difficult to visualize and interpret). Besides, the proper number of dimensions are not decided solely based on stress value. Your goal also matters. If you want to have a 2D map, then you choose 2-dimensions and then try to minimize the stress as much as possible. Nevertheless, if you are implying "how much stress is too much" then we have another story! one way of evaluation of your magnitude of stress is comparing it to the average stress values of different possible configurations of your dataset. (have look at "Multidimensional Scaling in R: SMACOF" written by Patrick Mair). Are the randomly generated coordinates, number of variables, and number of categories in a variable related? Sorry but I don't understand this part of your question.
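To make the scree-plot procedure described above concrete, here is a small sketch using scikit-learn's SMACOF-based metric MDS on a toy dissimilarity matrix. The random data and the six "true" dimensions are only placeholders, and raw stress values depend on the implementation and initial configuration, so compare them only within one run of this kind.

```python
# Scree plot of MDS stress versus number of dimensions, as described above.
# D is a toy dissimilarity matrix built from random points; substitute your
# own precomputed dissimilarities.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
points = rng.normal(size=(30, 6))          # 30 objects with 6 "true" dimensions
D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

dims = range(1, 7)
stress = []
for k in dims:
    mds = MDS(n_components=k, dissimilarity="precomputed", random_state=0)
    mds.fit(D)
    stress.append(mds.stress_)             # raw stress of the k-dimensional solution

plt.plot(list(dims), stress, marker="o")
plt.xlabel("Number of dimensions")
plt.ylabel("Stress")
plt.title("Look for the elbow")
plt.show()
```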
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/184009', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/96478/']}
jdg_75374
stackexchange
llm_judgeable_groundtruth_similarity
25223407
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I've tried How to you set the maximum number of characters that can be entered into a UITextField using swift? , I saw that if I use all 10 characters, I can't erase the character too. The only thing I can do is to cancel the operation (delete all the characters together). Does anyone know how to not block the keyboard (so that I can't add other letters/symbols/numbers, but I can use the backspace)? Now provide the response and nothing else.
With Swift 5 and iOS 12, try the following implementation of textField(_:shouldChangeCharactersIn:replacementString:) method that is part of the UITextFieldDelegate protocol: func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool { guard let textFieldText = textField.text, let rangeOfTextToReplace = Range(range, in: textFieldText) else { return false } let substringToReplace = textFieldText[rangeOfTextToReplace] let count = textFieldText.count - substringToReplace.count + string.count return count <= 10} The most important part of this code is the conversion from range ( NSRange ) to rangeOfTextToReplace ( Range<String.Index> ). See this video tutorial to understand why this conversion is important. To make this code work properly, you should also set the textField 's smartInsertDeleteType value to UITextSmartInsertDeleteType.no . This will prevent the possible insertion of an (unwanted) extra space when performing a paste operation. The complete sample code below shows how to implement textField(_:shouldChangeCharactersIn:replacementString:) in a UIViewController : import UIKitclass ViewController: UIViewController, UITextFieldDelegate { @IBOutlet var textField: UITextField! // Link this to a UITextField in Storyboard override func viewDidLoad() { super.viewDidLoad() textField.smartInsertDeleteType = UITextSmartInsertDeleteType.no textField.delegate = self } func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool { guard let textFieldText = textField.text, let rangeOfTextToReplace = Range(range, in: textFieldText) else { return false } let substringToReplace = textFieldText[rangeOfTextToReplace] let count = textFieldText.count - substringToReplace.count + string.count return count <= 10 }}
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/25223407', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3842100/']}
jdg_75375
stackexchange
llm_judgeable_groundtruth_similarity
24196
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As far as I understand it, black holes radiate Hawking radiation and probably gravitational waves, which over time cause them to lose mass and eventually evaporate after almost unfathomable amounts of time. I've also read that black holes eventually have difficulty gaining size after a certain point, because gravitational forces working in a black hole's accretion disk can eventually begin ejecting in-falling matter before it can be absorbed. While I know black holes can become quite massive, that leads me to wonder if black holes have a maximum possible size where they will begin to immediately radiate away any additional mass that's added to them. Or can a black hole effectively expand forever as long as the supply of matter is greater than the effects of radiation and ejection? Is there effectively a size limit to black holes? Now provide the response and nothing else.
As David Hammen commented, the power emitted through Hawking radiation is proportional to $M^{-2}$. Thus the evaporation timescale for a black hole is proportional to $M^3$. This means that a more massive black hole is much more stable against evaporation than a lower mass black hole. The other issue you mention is the limited rate at which you can "feed" a black hole. There is inevitably a feedback; as gas is compressed towards the event horizon it gets hot and emits radiation. The pressure of this radiation can eventually balance the inward gravitational infall. For spherically symmetric accretion this leads to the Eddington limit, which sets the maximum spherical accretion rate, where $\dot{M}_{\rm max}\propto M$. That is, the maximum accretion rate is proportional to the black hole mass. If accretion proceeds at the Eddington limit then the black hole mass grows exponentially with time, with a characteristic doubling timescale of around 50 million years (independent of the original mass - see this Physics SE page for some mathematical details). If black holes were limited to this accretion rate (though there is some evidence from the presence of very luminous quasars at high redshift that they may exceed it), then the maximum black hole mass would depend on the age of the universe and the size of the initial "seed" black holes. If we assume an initial mass of 100 solar masses, a doubling timescale of 50 million years, and that the seed black holes formed 400 million years after the big bang (all plausible, but contestable), then there have been 266 doubling timescales since, and the black hole could have grown by a factor of $10^{80}$! Clearly there are no black holes with anywhere near this mass in the observable universe - the largest appear to be of order $10^{10}$ solar masses. Their growth is limited by their food supply. Supermassive black holes are found at the centres of galaxies. There is a poorly understood relationship between the black hole mass and the mass of the bulge of the galaxy it is in. The ratio peaks at about 1 percent for the most massive bulges (e.g. see Hu 2009; McConnell & Ma 2013). Since the most massive elliptical galaxies are of order $10^{12}$ solar masses, this appears to set the maximum mass of a black hole in the present-day universe. The future is speculation. If the cosmic expansion rate continues to accelerate, then galaxy mergers will become increasingly uncommon and the opportunities for further black hole growth will be limited.
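As a rough sanity check of the growth figures quoted above, here is a minimal sketch of the doubling-time arithmetic; the 100-solar-mass seed, the 0.4 Gyr start time and the 50 Myr doubling timescale are the answer's assumptions, while the 13.8 Gyr age of the universe is an assumption added only for this illustration.

```python
# Back-of-the-envelope arithmetic behind the "factor of ~10^80" statement.
import math

age_universe_gyr = 13.8     # assumed present age of the universe
seed_time_gyr = 0.4         # seeds form 400 Myr after the big bang (per the answer)
doubling_time_gyr = 0.05    # Eddington-limited doubling time of 50 Myr (per the answer)

n_doublings = (age_universe_gyr - seed_time_gyr) / doubling_time_gyr
growth_log10 = n_doublings * math.log10(2)

print(f"doublings available: {n_doublings:.0f}")        # ~268, close to the 266 quoted
print(f"growth factor: 10^{growth_log10:.0f}")          # ~10^80
```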
{}
{'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/24196', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/600/']}
jdg_75376
stackexchange
llm_judgeable_groundtruth_similarity
46586
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Ok I must be missing something. I have a section with the "last updated date" in a text cell, I'd like it to autoupdate. Is there a "insert current date" command hidden away in some menu? Edit How can I automatically insert the current date and time into a notebook each time I save it to indicate the date and time last modified? Now provide the response and nothing else.
If you want this done automatically then the way I would do it is to time stamp the notebook as you close it, i.e. the time stamp appears after "Last modified". This code might be overkill but it will do the job: Create a text cell with "Last Modified " in it and give it a cell tag (in this example "LastModified"), then ... SetOptions[EvaluationNotebook[], NotebookEventActions :> {"WindowClose" :> Module[{dy, hr, mn}, {dy, hr, mn} = Map[(LinkWrite[First[$FrontEnd], FrontEnd`Value[#]]; LinkRead[First[$FrontEnd]]) &, {"DateLong", "Hour", "Minute"}]; NotebookLocate["LastModified"]; NotebookWrite[InputNotebook[], Cell[TextData[{"last modified ", dy, " at ", hr, ":", mn}], "Text", CellTags -> "LastModified"]]]} ]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/46586', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/403/']}
jdg_75377
stackexchange
llm_judgeable_groundtruth_similarity
553244
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm designing a multilingual application using .resx files. I have a few files like GlobalStrings.resx, GlobalStrings.es.resx, GlobalStrings.en.resx, etc.When I want to use this, I just need to set Thread.CurrentThread.CurrentCulture. The problem:I have a combobox with all the available languages, but I'm loading this manually: comboLanguage.Items.Add(CultureInfo.GetCultureInfo("en"));comboLanguage.Items.Add(CultureInfo.GetCultureInfo("es")); I've tried with cmbLanguage.Items.AddRange(CultureInfo.GetCultures(CultureTypes.UserCustomCulture)); without any success. Also tried with all the elements in CultureTypes, but I'm only getting a big list with a lot more languages that I'm not using, or an empty list. Is there any way to get only the supported languages? Now provide the response and nothing else.
You can programatically list the cultures available in your application // Pass the class name of your resources as a parameter e.g. MyResources for MyResources.resxResourceManager rm = new ResourceManager(typeof(MyResources));CultureInfo[] cultures = CultureInfo.GetCultures(CultureTypes.AllCultures);foreach (CultureInfo culture in cultures){ try { ResourceSet rs = rm.GetResourceSet(culture, true, false); // or ResourceSet rs = rm.GetResourceSet(new CultureInfo(culture.TwoLetterISOLanguageName), true, false); string isSupported = (rs == null) ? " is not supported" : " is supported"; Console.WriteLine(culture + isSupported); } catch (CultureNotFoundException exc) { Console.WriteLine(culture + " is not available on the machine or is an invalid culture identifier."); }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/553244', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4386/']}
jdg_75378
stackexchange
llm_judgeable_groundtruth_similarity
19370
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: From what I can see most object detection NNs ( Fast(er) R-CNN , YOLO etc) are trained on data including bounding boxes indicating where in the picture the objects are localised. Are there algos that simply take the full picture + label annotations, and then on top of determining whether an image contain certain object(s) also indirectly learn to understand the appropriate bounding box(es) for objects? Now provide the response and nothing else.
Yes, there are models that do this. This link points to one of the first papers I believe. The main idea is called weakly supervised object detection. The paper essentially makes three modifications. They treat the typical hidden fully connected layer as a convolutional layer. This works because convolutional layers can be thought of as convolving the same fully connected network about the image. They add a global max pooling later at the end of this convolutional layer. This is the operator that will "highlight" the area of this final conv layer that has learned the pattern of objects it is trying to classify. Using a threshold on the weights of this global max will ensure a region is significant. Then, they use an algorithm to create a bounding box from this region. They suggest a new loss function that lends itself to an object existing or not. I think they assume a Bernoulli for each class which lends itself to multiple logistic regression instead of softmax. Take a look because it's pretty sweet and has been cited by a lot of other new exciting papers.
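A minimal PyTorch-style sketch of the idea described above; the tiny backbone, the layer sizes, the class count and the loss choice are illustrative assumptions of this sketch and are not taken from the cited paper.

```python
# Sketch of weakly supervised localization: a fully convolutional classifier whose
# final 1x1 conv produces one score map per class; global max pooling turns the maps
# into image-level logits, and the maps themselves indicate where the evidence lies.
import torch
import torch.nn as nn

class WeaklySupervisedNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Toy convolutional backbone (in practice a pretrained CNN trunk).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # "Fully connected layer treated as a convolution": 1x1 conv scoring maps.
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)
        self.global_max = nn.AdaptiveMaxPool2d(1)

    def forward(self, x):
        maps = self.classifier(self.features(x))    # (B, C, H', W') class score maps
        logits = self.global_max(maps).flatten(1)   # (B, C) image-level scores
        return logits, maps

model = WeaklySupervisedNet(num_classes=20)
loss_fn = nn.BCEWithLogitsLoss()   # one Bernoulli per class, as discussed above
logits, maps = model(torch.randn(2, 3, 224, 224))
loss = loss_fn(logits, torch.zeros(2, 20))
```

The per-class score maps returned alongside the image-level logits are what you would threshold and turn into rough bounding boxes, in the spirit of point 2 above.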
{}
{'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/19370', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/32875/']}
jdg_75379
stackexchange
llm_judgeable_groundtruth_similarity
38345
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently "needed" a zip function in Perl 5 (while I was thinking about How do I calculate relative time? ), i.e. a function that takes two lists and "zips" them together to one list, interleaving the elements. (Pseudo)example: @a=(1, 2, 3);@b=('apple', 'orange', 'grape');zip @a, @b; # (1, 'apple', 2, 'orange', 3, 'grape'); Haskell has zip in the Prelude and Perl 6 has a zip operator built in, but how do you do it in an elegant way in Perl 5? Now provide the response and nothing else.
Assuming you have exactly two lists and they are exactly the same length, here is a solution originally by merlyn (Randal Schwartz), who called it perversely perlish: sub zip2 { my $p = @_ / 2; return @_[ map { $_, $_ + $p } 0 .. $p - 1 ];} What happens here is that for a 10-element list, first, we find the pivot point in the middle, in this case 5, and save it in $p . Then we make a list of indices up to that point, in this case 0 1 2 3 4. Next we use map to pair each index with another index that’s at the same distance from the pivot point as the first index is from the start, giving us (in this case) 0 5 1 6 2 7 3 8 4 9. Then we take a slice from @_ using that as the list of indices. This means that if 'a', 'b', 'c', 1, 2, 3 is passed to zip2 , it will return that list rearranged into 'a', 1, 'b', 2, 'c', 3 . This can be written in a single expression along ysth’s lines like so: sub zip2 { @_[map { $_, $_ + @_/2 } 0..(@_/2 - 1)] } Whether you’d want to use either variation depends on whether you can see yourself remembering how they work, but for me, it was a mind expander.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38345', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2905/']}
jdg_75380
stackexchange
llm_judgeable_groundtruth_similarity
95181
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have: class MyClass extends MyClass2 implements Serializable { //...} MyClass2 contains a property that is not serializable. How can I serialize (and de-serialize) this object? Correction: MyClass2 is, of course, not an interface but a class. Now provide the response and nothing else.
As someone else noted, chapter 11 of Josh Bloch's Effective Java is an indispensible resource on Java Serialization. A couple points from that chapter pertinent to your question: assuming you want to serialize the state of the non-serializable field in MyClass2, that field must be accessible to MyClass, either directly or through getters and setters. MyClass will have to implement custom serialization by providing readObject and writeObject methods. the non-serializable field's Class must have an API to allow getting it's state (for writing to the object stream) and then instantiating a new instance with that state (when later reading from the object stream.) per Item 74 of Effective Java, MyClass2 must have a no-arg constructor accessible to MyClass, otherwise it is impossible for MyClass to extend MyClass2 and implement Serializable. I've written a quick example below illustrating this. class MyClass extends MyClass2 implements Serializable{ public MyClass(int quantity) { setNonSerializableProperty(new NonSerializableClass(quantity)); } private void writeObject(java.io.ObjectOutputStream out) throws IOException{ // note, here we don't need out.defaultWriteObject(); because // MyClass has no other state to serialize out.writeInt(super.getNonSerializableProperty().getQuantity()); } private void readObject(java.io.ObjectInputStream in) throws IOException { // note, here we don't need in.defaultReadObject(); // because MyClass has no other state to deserialize super.setNonSerializableProperty(new NonSerializableClass(in.readInt())); }}/* this class must have no-arg constructor accessible to MyClass */class MyClass2 { /* this property must be gettable/settable by MyClass. It cannot be final, therefore. */ private NonSerializableClass nonSerializableProperty; public void setNonSerializableProperty(NonSerializableClass nonSerializableProperty) { this.nonSerializableProperty = nonSerializableProperty; } public NonSerializableClass getNonSerializableProperty() { return nonSerializableProperty; }}class NonSerializableClass{ private final int quantity; public NonSerializableClass(int quantity){ this.quantity = quantity; } public int getQuantity() { return quantity; }}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/95181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12860/']}
jdg_75381
stackexchange
llm_judgeable_groundtruth_similarity
41871
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a 4-layer PCB that has a wifi module and a chip antenna. The antenna is placed at the corner of the PCB and the copper beneath it is removed. I see that via fences are used on the breakout board of the same module, but the reference design doesn't say much about them, so I was wondering: how do they work? How many vias do I need? What about their placement, sizes and the spacing between them? This is the breakout board This is my current design Edit: This is the reference design for the module Edit: In addition to the references in the answer, I've also found a paper that mentions via fences in RF design and has some evaluation of different layouts: High Density RF Loadboard Design, section 4.3, Ground Via Shielding Evaluation. Also, I've calculated the spacing between the vias for 2.4 GHz to be around 100 mils. Now provide the response and nothing else.
The most cited paper on the subject that I could find is PCB design techniques for lowest-cost EMC compliance Part 1 (not free). Though the part you're interested in is succinctly quoted in Best practice in circuit board design : Armstrong recommends stitching at no more than λ/20, with stub lengthsno longer than this. This is actually a very good rule for stitchingany ground fill to the ground plane on a multi-layer design. λ is thewavelength of the highest significant frequency for the design (assumea frequency of 1 GHz if not know) where f = C / λ NB: C (speed of light) will be approx. 60% of free-space velocity forEM radiation propagating through a FR4 dielectric PCB. Another technical note repeats this rule of thumb: The common rule of thumb is to locate stitch vias no further apartthan λ/10 and preferably as often as λ/20. And gives some good reasons on why would would want to use via stitching/via fences: There are numerous reasons to use ground via stitching on a multilayerPCB. Some of the reasons are: Prevention of coupling into nearby traces and metal pour. Prevention of waveguide signal propagation, shielding/isolation of circuit blocks, and the reduction of slot radiation from the edges ofa PCB. Completion of a robust power distribution design. Reduction of series inductance to active and passive parts. For more detailed info on PDN (power distribution networks) in PCB, see 2 . Signal integrity, in particular for signals that transition planes. Thermal reasons (not covered in this tech note). With respect to your particular application, the WirelessUSB™ LP/LPstar Tranciever PCB Layout Guidelines state the reasoning more plainly: The top and bottom layer copper pours provide an uninterrupted returnpath. This is maximized by the distribution of ground vias connectingthe two layers. The internal ground plane of 4-layer designs alsoprovide uninterrupted return path by connecting areas of copper thatmay otherwise be islands that do not contribute to the return path.The term “via stitching” describes the practice of placing evenlyspaced vias around the board. Figure 9 shows a good distribution ofground vias with each via marked by a ‘+’. The row of more denselydistributed vias along the top edge of the board is the appliedantenna ground and is required to maximize the RF performance of thedevice.
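To connect the rules of thumb above to the 2.4 GHz figure mentioned in the question's edit, here is a quick back-of-the-envelope calculation; the 60% velocity factor comes from the quoted guideline, the rest is plain arithmetic, and the exact result depends on the dielectric constant you assume.

```python
# Via-fence spacing estimate for 2.4 GHz on FR4 using the lambda/20 rule of thumb.
c_free_space = 3.0e8      # m/s
velocity_factor = 0.6     # ~60% of free-space velocity in FR4 (per the quote above)
f = 2.4e9                 # Hz

wavelength = c_free_space * velocity_factor / f    # ~0.075 m in the dielectric
spacing_m = wavelength / 20                        # ~3.7 mm
spacing_mils = spacing_m / 25.4e-6                 # metres -> mils (1 mil = 25.4 um)

print(f"lambda in FR4: {wavelength * 1000:.1f} mm")
print(f"lambda/20 spacing: {spacing_m * 1000:.2f} mm (~{spacing_mils:.0f} mils)")
```

This lands in the same ballpark as the ~100 mil figure from the question; a slightly different velocity factor, or using lambda/10 instead of lambda/20, moves the number around, which is why these are rules of thumb rather than hard limits.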
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/41871', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10499/']}
jdg_75382
stackexchange
llm_judgeable_groundtruth_similarity
41265553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How does angular2 propose to render <div *ngFor="let todo of unfinishedTodos"> {{todo.title}}</div> in case if unfinishedTodos.length >0 and text "empty" in another cases. P.S. <div *ngIf="unfinishedTodos && unfinishedTodos.length > 0"> <div *ngFor="let todo of unfinishedTodos"> {{todo.title}} </div></div><div *ngIf="!unfinishedTodos || unfinishedTodos.length <= 0"> empty</div> looks ugly Now provide the response and nothing else.
Syntax compatible with Angular 4.0 and beyond <ng-template #elseTemplate> Content displayed if expression returns false</ng-template><ng-container *ngIf="expression; else elseTemplate"> Content displayed if expression returns true</ng-container> or <ng-container *ngIf="expression; then thenBlock; else elseBlock"></ng-container><ng-template #thenBlock> Content displayed if expression returns true</ng-template><ng-template #elseBlock> Content displayed if expression returns false</ng-template> Syntax compatible with Angular 2.0 and beyond <ng-container *ngIf="expression"> true</ng-container><ng-container *ngIf="!expression"> else</ng-container> Important You can use e.g. <div> , or any other tag, instead of <ng-container> <template> had been deprecated since 4.0 in favor of <ng-template> to avoid name collision with already existing tag.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41265553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2674303/']}
jdg_75383
stackexchange
llm_judgeable_groundtruth_similarity
118643
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Diceware passphrase lengths are on the rise - up to six or seven words now. The old adage that passphrases are easier to remember may be true for shorter phrases, but six truly random words can be tough to remember. On the other hand, full sentences may be easier for some to remember. Take for example the Diceware-generated passphrase tracy optic renown acetic sonic kudo . We could turn that into a (nonsensical) sentence such as Tracy's optics were renowned, but her acetic sonic cost her kudos. The Diceware passphrase has an entropy of 77.4 if the attacker knows you're using six Diceware words ( 12.9 per word ), and 107.219 (according to this calculator ) if they don't. The sentence form has an entropy (according to the calculator) of 255.546. However, it's not fully random any more, which is supposed to be one of the big benefits of the Diceware approach. Assuming that the attacker somehow knows that you're using this method of passphrase generation, does the sentence form decrease the security of the passphrase in any way? For example, perhaps they can use some kind of analysis of English sentence structure to narrow down their required guesses? Assuming the answer to the above is "No, sentence form does not decrease security," then here's another consideration: One benefit of the sentence format is that it's very long and includes non-alphabetical characters (eg. the apostrophe and comma). However, that's a definite downside when trying to type it on a mobile device. Say we shorten the Diceware phrase to three words - tracy optic renown - and then turn that into an [a-z] sentence - tracy is optically renowned or perhaps tracy is optically renowned worldwide (to further distinguish it from the Diceware wordlist). If we were to use three Diceware words and the attacker knows we're using Diceware then we have an entropy of 38.7. However, tracy is optically renowned worldwide is 100.504 bits of entropy according to the calculator. Given the differences between the three word Diceware phrase and the short sentence form, which entropy calculation is more accurate - the Diceware calculation (ie. the differences are too slight to matter) or the calculator's calculation (dictionary/brute-force/etc.)? Note: assume that any length or combination of characters is acceptable for the password Now provide the response and nothing else.
It does not decrease the security. What is actually happening is that your "entropy calculator" is giving you a false measure of entropy. It can only give an approximate estimate, after all. There's actually interesting proofs that show that one can never actually know the amount of entropy in a particular string of text unless you know something about how it was constructed. A pass string 1000 words long created by a "physical random number generator" like a resistor noise network will appear to have the same amount of entropy as a pass string 1000 words long generated using a Mersene Twister, until you realize that the Mersene twister actually leaks all of its seed information in any contiguous block of 624 values. Entropy calculators can only make heuristic assumptions about how random the data actually is. This, of course, is why we have Diceware. It can prove [an underestimate on] entropy because randomness is built into the process. To prove the security of a pass-sentence like you are looking at, consider an oracle test. I select a bunch of words using Diceware, and then I build a sentence out of them. I then provide you with an oracle which constructs sentences out of them. It is guaranteed that, if you provide the oracle with the correct set of selected words from Diceware, it will provide exactly the sentence I used. For all other sets of words, it will produce an arbitrary sentence using them. It is trivial to see that the entropy of my password cannot possibly be lower than the entropy built into the Diceware words I selected. Even with this immensely powerful oracle to reduce the very human process of sentence formation to nothingness, the randomness from diceware will remain. You cannot guess my password any faster than you could guess the original set of Diceware words I selected. Now there are a few caveats. If you use fewer diceware words, like your later example, you get fewer bits of entropy from the diceware layer. This means that oracle I mentioned above becomes more and more helpful for breaking the sentence based password. Also, some of the sets of words you get from diceware can be particularly difficult to turn into sentences. If you ever reject a set of diceware words as part of your pass-sentence building process, you are calling into question the perfect randomness that diceware relies on. Now, why the oracle attack? Oracles are very powerful tools for testing cryptographic theory. In reality, tracy is optically renowned worldwide is actually probably quite a lot stronger than the 38.7 bits from the diceware words tracy optic renown . Breaking that sentence will take more work than the words, though probably not the full 100.504 bits the entropy calculator heuristically estimates. So how much stronger? We don't know. That's the point of oracle attacks. In an oracle attack we say "let's just assume this hard to calculate part of the process offers zero increased security. None at all. Is the process still secure?" If it is secure under this extreme assumption, then it is clearly secure against real life attacks where the attacker doesn't necessarily have such a magically powerful oracle at their disposal.
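For reference, the entropy figures used throughout this discussion follow directly from the size of the Diceware word list; a quick, purely illustrative check:

```python
# Entropy of a Diceware passphrase: each word is drawn uniformly from 7776
# (= 6^5) words, so each word contributes log2(7776) ~ 12.9 bits.
import math

words_in_list = 6 ** 5                    # 7776
bits_per_word = math.log2(words_in_list)

print(f"bits per word: {bits_per_word:.2f}")       # ~12.92
print(f"3 words: {3 * bits_per_word:.1f} bits")    # ~38.8
print(f"6 words: {6 * bits_per_word:.1f} bits")    # ~77.5
```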
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/118643', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/104969/']}
jdg_75384
stackexchange
llm_judgeable_groundtruth_similarity
231255
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I improve significantly the speed of the following code? nmax = 10000000;r = Range[nmax];res = ({#, #^2, If[Mod[#, 2] == 0, "even", "odd"]} & /@ r); // AbsoluteTiming{15.3145, Null} Now provide the response and nothing else.
res = Transpose[{r, r^2, Mod[r, 2]}]; This is about 50 times faster than the original code on my machine and contains the same information. I replaced the strings by integers ( 0 or 1 ), because that allows one to use vectorization and packed arrays . If you insist on the strings, you can try res = Transpose[{r, r^2, {"odd", "even"}[[Mod[r, 2, 1]]]}]; which is still 5 times faster than the original code.But this also shows drastically that transposing an unpackedable array can be very expensive. (Run the code without Transpose and compare the runtime.)
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/231255', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19892/']}
jdg_75385
stackexchange
llm_judgeable_groundtruth_similarity
33707193
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a list of directories with numbers. I have to find the highest number, increment it by 1, and create a new directory with that incremented value. I am able to sort the below array, but I am not able to increment the last element as it is a string. How do I convert the array element below to an integer? PS C:\Users\Suman\Desktop> $FileListName----1121 Now provide the response and nothing else.
You can specify the type of a variable before it to force its type. It's called (dynamic) casting ( more information is here ): $string = "1654"$integer = [int]$string$string + 1# Outputs 16541$integer + 1# Outputs 1655 As an example, the following snippet adds, to each object in $fileList , an IntVal property with the integer value of the Name property, then sorts $fileList on this new property (the default is ascending), takes the last (highest IntVal ) object's IntVal value, increments it and finally creates a folder named after it: # For testing purposes#$fileList = @([PSCustomObject]@{ Name = "11" }, [PSCustomObject]@{ Name = "2" }, [PSCustomObject]@{ Name = "1" })# OR#$fileList = New-Object -TypeName System.Collections.ArrayList#$fileList.AddRange(@([PSCustomObject]@{ Name = "11" }, [PSCustomObject]@{ Name = "2" }, [PSCustomObject]@{ Name = "1" })) | Out-Null$highest = $fileList | Select-Object *, @{ n = "IntVal"; e = { [int]($_.Name) } } | Sort-Object IntVal | Select-Object -Last 1$newName = $highest.IntVal + 1New-Item $newName -ItemType Directory Sort-Object IntVal is not needed so you can remove it if you prefer . [int]::MaxValue = 2147483647 so you need to use the [long] type beyond this value ( [long]::MaxValue = 9223372036854775807 ).
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/33707193', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5553314/']}
jdg_75386
stackexchange
llm_judgeable_groundtruth_similarity
2136522
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say I have var a = function() { return 1; } . Is it possible to alter a so that a() returns 2 ? Perhaps by editing a property of the a object, since every function is an object ? Update: Wow, thanks for all the responses. However, I'm afraid I wasn't looking to simply reassign a variable but actually edit an existing function. I am thinking along the lines of how you can combine partial functions in Scala to create a new PartialFunction . I am interested in writing something similar in Javascript and was thinking that the existing function could perhaps be updated, rather than creating an entirely new Function object. Now provide the response and nothing else.
You can do all kinds of fun stuff with javascript, including redefining functions: let a = function() { return 1; }console.log(a()); // 1 // keep a referencelet old = a; // redefinea = function() { // call the original function with any arguments specified, storing the result const originalResult = old.apply(old, arguments); // add one return originalResult + 1;};console.log(a()); // 2 Voila. Edit: Updated to show this in a crazier scenario: let test = new String("123");console.log(test.toString()); // logs 123console.log(test.substring(0)); // logs 123String.prototype.substring = function(){ return "hahanope"; }console.log(test.substring(0)); // logs hahanope You can see here that even though "test" is defined first, and we redefine substring() afterwards, the change still applies. Side note: you really should reconsider your architecture if you're doing this...you're going to confuse the crap out of some poor developer 5 years down the road when s/he's looking at a function definition that's supposed to return 1, but seems to always return 2....
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2136522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/46768/']}
jdg_75387
stackexchange
llm_judgeable_groundtruth_similarity
512333
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: CentOS 5.9 For testing purposes, I want my CentOS server to listen on a secondary virtual IP (eth0:0). I'm familiar with nc -l -p <port> but it only listens on the primary. Is there a way I can specify a specific IP for the listener to use? If not, is there another "stock" utility in CentOS 5.9 that can do this? Now provide the response and nothing else.
The syntax depends on the netcat package. netcat-openbsd nc -l 192.168.2.1 3000 netcat-traditional nc -l -p 3000 -s 192.168.2.1 A simple way (at least in bash) for telling them apart in scripts is: if ldd $(type -P nc) | grep -q libbsd; then nc -l 192.168.2.1 3000else nc -l -p 3000 -s 192.168.2.1fi
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/512333', 'https://serverfault.com', 'https://serverfault.com/users/21875/']}
jdg_75388
stackexchange
llm_judgeable_groundtruth_similarity
1385641
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried the following: $.load("Views/chatBox.html").appendTo('body') Console Output: TypeError: $.load is not a function EDIT : The answer should only be one line of code; that's enough, I think. Now provide the response and nothing else.
Nope, all those answers are incorrect because they rely on having a separate container! Do this: $.ajax({ url: "your.html", success: function (data) { $('body').append(data); }, dataType: 'html'});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1385641', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/104015/']}
jdg_75389
stackexchange
llm_judgeable_groundtruth_similarity
7736280
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The following VBScript code works perfectly fine: Dim App Set App = GetObject("","QuickTest.Application")App.Quit But when I translate it into C# code as below: class Program{ [STAThread] static void Main(string[] args) { object qtApp = Marshal.GetActiveObject("QuickTest.Application"); (qtApp as QuickTest.Application).Quit(); }} I get the exception: An unhandled exception of type 'System.Runtime.InteropServices.COMException' occurred in mscorlib.dll Additional information: (Exception from HRESULT: 0x800401E3 (MK_E_UNAVAILABLE)) I don't think the problem is related to the ROT, because the VBScript code works. So what is wrong with the C# code? Now provide the response and nothing else.
Marshal.GetActiveObject use progID , check your progID, e.g. you could use this code for display objects in ROT using System.Collections.Generic;using System.Runtime.InteropServices;using System.Runtime.InteropServices.ComTypes;using System.Text;using Microsoft.Win32;...class Program{ private const int S_OK = 0x00000000; [DllImport("ole32.dll")] private static extern int GetRunningObjectTable(uint reserved, out IRunningObjectTable pprot); [DllImport("ole32.dll")] private static extern int CreateBindCtx(uint reserved, out IBindCtx ppbc); private static void OleCheck(string message, int result) { if (result != S_OK) throw new COMException(message, result); } private static System.Collections.Generic.IEnumerable<IMoniker> EnumRunningObjects() { IRunningObjectTable objTbl; OleCheck("GetRunningObjectTable failed", GetRunningObjectTable(0, out objTbl)); IEnumMoniker enumMoniker; IMoniker[] monikers = new IMoniker[1]; objTbl.EnumRunning(out enumMoniker); enumMoniker.Reset(); while (enumMoniker.Next(1, monikers, IntPtr.Zero) == S_OK) { yield return monikers[0]; } } private static bool TryGetCLSIDFromDisplayName(string displayName, out string clsid) { var bBracket = displayName.IndexOf("{"); var eBracket = displayName.IndexOf("}"); if ((bBracket > 0) && (eBracket > 0) && (eBracket > bBracket)) { clsid = displayName.Substring(bBracket, eBracket - bBracket + 1); return true; } else { clsid = string.Empty; return false; } } private static string ReadSubKeyValue(string keyName, RegistryKey key) { var subKey = key.OpenSubKey(keyName); if (subKey != null) { using(subKey) { var value = subKey.GetValue(""); return value == null ? string.Empty : value.ToString(); } } return string.Empty; } private static string GetMonikerString(IMoniker moniker) { IBindCtx ctx; OleCheck("CreateBindCtx failed", CreateBindCtx(0, out ctx)); var sb = new StringBuilder(); string displayName; moniker.GetDisplayName(ctx, null, out displayName); sb.Append(displayName); sb.Append('\t'); string clsid; if (TryGetCLSIDFromDisplayName(displayName, out clsid)) { var regClass = Registry.ClassesRoot.OpenSubKey("\\CLSID\\" + clsid); if (regClass != null) { using(regClass) { sb.Append(regClass.GetValue("")); sb.Append('\t'); sb.Append(ReadSubKeyValue("ProgID", regClass)); sb.Append('\t'); sb.Append(ReadSubKeyValue("LocalServer32", regClass)); } } } return sb.ToString(); } [STAThread] public static void Main(string[] args) { Console.WriteLine("DisplayName\tRegId\tProgId\tServer"); foreach(var moniker in EnumRunningObjects()) { Console.WriteLine(GetMonikerString(moniker)); } }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7736280', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/872319/']}
jdg_75390
stackexchange
llm_judgeable_groundtruth_similarity
3428810
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $x \in \mathbb{R}$ and $S = 1 + x + x^{2} + \dots$ Then $xS = x + x^{2} + \dots$ Thus $S - xS = 1 \Rightarrow S = \frac{1}{1-x}$ Clearly this isn't correct. Otherwise we would have $x=-1$ giving two different sums for the series. Where is the error in the logic? I'm thinking the problem is perhaps in the division used to arrive at $\frac{1}{1-x}$, but I can't quite see why. Now provide the response and nothing else.
If a finite value $S:=1+x+x^2+\cdots$ exists, this argument shows $S=\frac{1}{1-x}$ . Whether it exists is the real problem. It doesn't if $|x|\ge1$ , because a convergent infinite series has terms that tend to $0$ . On the other hand, it does work if $|x|<1$ , as can be shown by noting $$\frac{1}{1-x}-(1+x+\cdots+x^{n-1})=\frac{x^n}{1-x},$$ which approaches $0$ as $n$ grows.
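For completeness, that identity follows from a one-line telescoping computation (standard algebra, added here only as a reminder):

$$(1-x)\left(1+x+\cdots+x^{n-1}\right)=\left(1+x+\cdots+x^{n-1}\right)-\left(x+x^{2}+\cdots+x^{n}\right)=1-x^{n},$$

so for $x\neq 1$ we get $1+x+\cdots+x^{n-1}=\dfrac{1-x^{n}}{1-x}$, and subtracting this from $\dfrac{1}{1-x}$ leaves exactly $\dfrac{x^{n}}{1-x}$, which tends to $0$ precisely when $|x|<1$.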
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3428810', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/715863/']}
jdg_75391
stackexchange
llm_judgeable_groundtruth_similarity
26555877
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can anyone suggest how to convert HTML to PDF using chrome pdfium. Since, I tried pdf.js and few other, but It is not giving me a proper result. I think chrome pdfium handle them properly. So, I would like to give a try to pdfium for generating PDF file from HTML source. Thanks. Now provide the response and nothing else.
PDFium (and PDF.js ) are merely PDF Viewers, you cannot convert HTML to PDF with them. The actual PDF generation in Chrome is done by the printing component using Blink (the rendering engine) and Skia (the graphic stack). There is an effort going on to decouple the printing functionality from the printing UI so that third-party projects can use the printing functionality ( crbug.com/311308 ), but it is still work-in-progress. Chrome 59 and up can be used to convert HTML to PDF, via the --print-to-pdf flag. See crbug.com/603559 for the specific feature itself, and https://chromium.googlesource.com/chromium/src/+/lkgr/headless/README.md for documentation about headless Chromium in general. This example loads example.com and outputs the result to `path/to/file.pdf. chrome --headless --print-to-pdf=path/to/file.pdf https://example.com If you want to know how to use Chrome's built-in PDF generator in JavaScript, follow the following question (which has not been answered yet): Javascript call programmatically the "Save as PDF" feature of Chrome dialog print .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26555877', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/990923/']}
jdg_75392
stackexchange
llm_judgeable_groundtruth_similarity
37956720
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Facing issue "Value of type 'AppDelegate' has no member 'managedObjectContext' In new Xcode 8 (using Swift 3, iOS 10) when trying to create new context in View Controller let context = (UIApplication.shared().delegate as! AppDelegate).managedObjectContext In Xcode 8 there is no code for managedObjectContext inside AppDelegate.swift file. Core Data stack code inside AppDelegate.swift presented only with: lazy var persistentContainer: NSPersistentContainer property and func saveContext () . There is no managedObjectContext property. How to create managedObjectContext using Swift 3 in Xcode 8) or maybe there is no need to do it using Swift 3 ? Now provide the response and nothing else.
In Swift3, you can access the managedObjectContext via the viewContext as let context = (UIApplication.shared.delegate as! AppDelegate).persistentContainer.viewContext This option is available if Core data was enabled when creating the project. However, for existing project that you want to include core data, go through the normal process of adding the core data and add the following code which will allow you to get the lazy var persistentContainer: NSPersistentContainer = { let container = NSPersistentContainer(name: "you_model_file_name") container.loadPersistentStores(completionHandler: { (storeDescription, error) in if let error = error { fatalError("Unresolved error \(error), \(error.userInfo)") } }) return container}() You will need to import the CoreData. Note: For Swift3, the ManagedObject Subclass are generated automatically. See more from WWDC 2016
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/37956720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3287785/']}
jdg_75393
stackexchange
llm_judgeable_groundtruth_similarity
3607539
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following set of arithmetic functions, $a_0,a_1,a_2.....a_m(n)=a_0+a_1n+a_2n^2+...a_mn^m$ $a_0,a_1,a_2.....a_n$ can be any subset of naturals. So clearly, the number of these arithmetic functions is uncountable and can't be mapped to the naturals. What am I missing? EDIT- $f(n)=\sum_{k=0}^{\infty} \frac{n^k}{a_kk!}$ . What if we use this function? Now we can allow infinite subsets. Now provide the response and nothing else.
But the set of finite sequences of numbers $a_0, a_1, a_2, \ldots, a_n$ is not uncountable! So the set of polynomials of the form $f(n) = a_0+a_1n+a_2n^2+...a_mn^m$ isn't uncountable either. Take the mapping that sends the finite sequence $a_0, a_1, a_2, \ldots, a_n$ to $\pi_0^{a_0}\pi_1^{a_1}\pi_2^{a_2}\cdot\cdot\cdot\pi_n^{a_n}$ where $\pi_n$ is the $n+1$ th prime. That's an injective map into the naturals, establishing countability! (And of course this is an example of the sort of mapping we use in Gödel codings ...)
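A small illustration of this prime-power coding; the helper function below is hypothetical and only meant to show the injection in action (it uses SymPy to fetch primes).

```python
# Goedel-style coding from the answer: send a_0, a_1, ..., a_n to
# p_0^{a_0} * p_1^{a_1} * ... * p_n^{a_n}, where p_k is the (k+1)-th prime.
# Unique factorisation guarantees different exponent patterns give different codes.
from sympy import prime

def encode(coeffs):
    code = 1
    for k, a in enumerate(coeffs):
        code *= prime(k + 1) ** a   # prime(1) == 2, prime(2) == 3, prime(3) == 5, ...
    return code

print(encode([3, 0, 2]))   # 2**3 * 3**0 * 5**2 = 200
print(encode([1, 4]))      # 2**1 * 3**4 = 162
```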
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3607539', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/445404/']}
jdg_75394
stackexchange
llm_judgeable_groundtruth_similarity
4508313
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am wondering if there are any advantages of using any of these over the other, and which way should I go? Constructor approach: var Class = function () { this.calc = function (a, b) { return a + b; };}; Prototype approach: var Class = function () {};Class.prototype.calc = function (a, b) { return a + b;}; I don't like that, using the prototype, method definitions are separated from the class, and I'm not aware if there is any specific reason I should use this over just the first approach. Also, is there any benefit of using a function literal to define a "class", over just function definition: var Class = function () {}; vs function Class () {}; Thanks! Now provide the response and nothing else.
Methods that inherit via the prototype chain can be changed universally for all instances, for example: function Class () {}Class.prototype.calc = function (a, b) { return a + b;}// Create 2 instances:var ins1 = new Class(), ins2 = new Class();// Test the calc method:console.log(ins1.calc(1,1), ins2.calc(1,1));// -> 2, 2// Change the prototype methodClass.prototype.calc = function () { var args = Array.prototype.slice.apply(arguments), res = 0, c; while (c = args.shift()) res += c; return res; }// Test the calc method:console.log(ins1.calc(1,1,1), ins2.calc(1,1,1));// -> 3, 3 Notice how changing the method applied to both instances? This is because ins1 and ins2 share the same calc() function. In order to do this with public methods created during construction, you'd have to assign the new method to each instance that has been created, which is an awkward task. This is because ins1 and ins2 would have their own, individually created calc() functions. Another side effect of creating methods inside the constructor is poorer performance. Each method has to be created every time the constructor function runs. Methods on the prototype chain are created once and then "inherited" by each instance. On the flip side of the coin, public methods have access to "private" variables, which isn't possible with inherited methods. As for your function Class() {} vs var Class = function () {} question, the former is "hoisted" to the top of the current scope before execution. For the latter, the variable declaration is hoisted, but not the assignment. For example: // Error, fn is called before the function is assigned!fn();var fn = function () { alert("test!"); } // Works as expected: the fn2 declaration is hoisted above the callfn2();function fn2() { alert("test!"); }
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/4508313', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/391995/']}
jdg_75395
stackexchange
llm_judgeable_groundtruth_similarity
28022258
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying present ViewController I have created with StoryBoards: AuthViewController *authViewController = [[AuthViewController alloc] init]; UINavigationController *nav = [[UINavigationController alloc] initWithRootViewController:authViewController]; UIViewController *vc = [[[[UIApplication sharedApplication] windows] firstObject] rootViewController]; [vc presentViewController:nav animated:YES completion:nil]; But getting it with black screen. What could be the issue? Now provide the response and nothing else.
Alloc init AuthViewController does not mean that will create a view layout for controller. Here view controller does not load its view that's why you are getting black screen. Use storyboard object and identifier of view controller to load its view. Specify storyboard identifier in Storyboard of AuthViewController controller. Add Storyboard ID and mark true for Use Storyboard ID option like in below image: Now get AuthViewController controller object using below code: AuthViewController *controller = [self.storyboard instantiateViewControllerWithIdentifier:@"AuthViewController"];
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28022258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2263359/']}
jdg_75396
stackexchange
llm_judgeable_groundtruth_similarity
588027
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What I've seen very often is that people say that logistic and probit regression give very similar results; however, logistic regression is more interpretable, and in this question I want to understand why. In logistic regression, interpretability lies in the log of the odds: $$\log\left(\frac{p}{1 - p}\right) = \beta_0 + \sum_{i = 1}^n \beta_iX_i,$$ because if we take the derivative with respect to the $j$-th variable, we obtain exactly the $j$-th coefficient: $$\frac{\partial\log\left(\frac{p}{1 - p}\right)}{\partial X_j} = \beta_j.$$ Because of this, people say that increasing $X_j$ by one unit multiplies the odds (i.e. $\frac{p}{1-p}$) by $\exp(\beta_j)$. In probit regression we can come up with exactly the same thing. We have the following relationship: $$p = \Phi\left(\beta_0 + \sum_{i = 1}^n \beta_i X_i\right),$$ where $\Phi$ is the CDF of the standard normal distribution. From this equation we have: $$Q(p) = \beta_0 + \sum_{i = 1}^n \beta_i X_i,$$ where $Q$ is the quantile function of the standard normal distribution. Now if we take the derivative: $$\frac{\partial Q(p)}{\partial X_j} = \beta_j$$ So can we say the same? What I would like to say is that increasing $X_j$ by one unit increases the quantile function by $\beta_j$ units, and therefore increasing $X_j$ by one unit increases $p$ by $\Phi(\beta_j)$ units. If that's true, why do people say that probit regression is less interpretable?
Logistic regression is a model for probabilities of binary events. Another concept that is closely related to probabilities is odds , i.e. ratios of probabilities. If the probability of observing a binary event is $p$ , the odds of observing it is $\tfrac{p}{1-p}$ . It's a fairly simple and commonly understood concept. Logistic regression predicts log-odds . They are "simpler" to interpret because odds are already related to probabilities of binary events, while normal quantiles do not directly translate to them in a meaningful way. If the predicted quantile is $Q(p)$ , how "likely" is this to happen? To answer the question, you need to translate the value to probability, while in the case of odds it just re-phrases the question to probability relative to the probability of the opposite event.
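A small numerical illustration of the point above; the coefficient value and the baselines are made up for the example.

```python
# With a logit link, exp(beta) is a single, baseline-free multiplier on the odds.
# With a probit link, the effect of beta on the probability depends on where the
# linear predictor currently sits, so there is no comparably simple summary.
import math
from scipy.stats import norm

beta = 0.5   # hypothetical coefficient for a one-unit change in X_j

# Logistic: the odds are multiplied by exp(beta) no matter what the baseline is.
print(f"odds ratio: {math.exp(beta):.3f}")   # ~1.649 at every baseline

# Probit: the change in p depends on the starting value of the linear predictor.
for eta in (-2.0, 0.0, 2.0):
    delta_p = norm.cdf(eta + beta) - norm.cdf(eta)
    print(f"eta = {eta:+.1f}: probability changes by {delta_p:.3f}")
```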
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/588027', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/296373/']}
jdg_75397
stackexchange
llm_judgeable_groundtruth_similarity
1808941
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a function $f:\mathbb{Z}\rightarrow\mathbb{Z}$ such that $f(f(x))=x+1$? If so, can you give an example? Now provide the response and nothing else.
No. Assume such $f$ exists and $f(0)=n_0$. Since $f(f(f(x)))=f(x+1)$ and $f(f(f(x)))=f(x)+1$. Thus, $f(x+1)=f(x)+1$. Thus, $f(n)=n_0+n$ for $n\in \mathbb{Z}$. If,$n_0>0$, then $f(n)>n\Rightarrow f(n)\geqslant n+1\Rightarrow f(f(n))\geqslant f(n)+1\geqslant n+1+1=n+2>n+1 \Rightarrow\Leftarrow$. If,$n_0\leqslant 0$, then $f(n)\leqslant n\Rightarrow f(f(n))\leqslant f(n)\leqslant n<n+1 \Rightarrow\Leftarrow$. The last two steps can be simplified: $n+1=f(f(n))=f(n)+n_0=n+2n_0\Rightarrow 2n_0=1\Rightarrow\Leftarrow$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1808941', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/197501/']}
jdg_75398
stackexchange
llm_judgeable_groundtruth_similarity
637950
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have heard that all reversible processes are quasi-static but I have not yet found a strong reason for the fact that no non-quasistatic process is reversible. I have seen on some other answers that it has to do with this $dS=dq/T$ formula for entropy not being valid for non-quasistatic processes but I don't get that completely if that's true Now provide the response and nothing else.
I have heard that all reversible processes are quasi-static. That is correct. It's because a reversible process is one where the system is in equilibrium with its surroundings all during the process. That means the process has to be carried out very slowly, which is what quasi-static means. In reality, processes can only approach being reversible. That's because natural processes occur as a consequence of disequilibrium. Consider a heat transfer process. Heat is energy transfer due to a temperature difference. You probably already know that spontaneous heat transfer always occurs from high temperature to low temperature, never in the reverse direction. That makes all real heat transfer processes irreversible. In order to move heat from cold to hot you need to do work. That's how heat pumps and air conditioners work. But we can make a heat transfer process approach being reversible by making the temperature difference infinitesimal (i.e. approach zero). The smaller the temperature difference, the slower the process proceeds (it becomes quasi-static), approaching a reversible process. To put it another way, if the temperature difference is tiny, then at each step in the process it only takes a tiny amount of work to reverse it. But I have not yet found a strong reason for the fact that no non-quasistatic process is reversible. For the same reason a process has to be quasi-static to be reversible, a non-quasi-static process cannot be reversible. In the heat transfer example above, the greater the temperature difference, the greater the rate of heat transfer and the more irreversible it becomes (the more work required by the surroundings to reverse the heat transfer). I have seen on some other answers that it has to do with this $dS=dq/T$ formula for entropy not being valid for non-quasistatic processes. That is not correct. To understand why, you need to learn that entropy is a state function. In other words, the difference in entropy between two equilibrium states does not depend on the process connecting the states. The formula you gave is the definition of a differential change in entropy, but it should read $$dS=\frac{\delta q_{rev}}{T}$$ where $\delta q_{rev}$ means a reversible transfer of heat. Even if the actual process is irreversible, you will learn that you can assume any convenient reversible process connecting the states and calculate the difference in entropy using the equation defining entropy. One word of caution: although all reversible processes are quasi-static, not all quasi-static processes are reversible. An example is a quasi-static process involving mechanical friction. All processes involving friction are irreversible. Hope this helps.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/637950', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/261385/']}
jdg_75399
stackexchange
llm_judgeable_groundtruth_similarity
27136056
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to define a method shared by all members of a discriminated union. Currently I've implemented it like this, but it seems really inelegant- surely there is a better way. Suggestions? type A = {AData:string} member this.SharedMethod (x:float) : int= ...type B = {BData:float} member this.SharedMethod (x:float) : int= ...type AB =| A of A| B of Blet CallSharedMethod (ab:AB) x = match ab with | AB.A(a') -> a'.SharedMethod x | AB.B(b') -> b'.SharedMethod x Now provide the response and nothing else.
What about something like this? type AB = | A of string | B of float member self.SharedMethod (x : float) = match self with | A s -> x | B f -> f + x This assumes that you want each variant of your sum type (aka discriminated union) to do something different with the float parameter. For the case of A , I just return the original value since there's not much else I can do (since there is no generally useful relationship between string and float that yields a float ).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27136056', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4288666/']}
jdg_75400
stackexchange
llm_judgeable_groundtruth_similarity
31006971
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This problem seems simple enough, but I can't find a pythonic way to solve it. I have several (four) subplots that are supposed to have the same xlim and ylim . Iterating over all subplots à la f, axarr = plt.subplots(4)for x in range(n): axarr[x].set_xlim(xval1, xval2) axarr[x].set_ylim(yval1, yval2) isn't the nicest way of doing things, especially for 2x2 subplots – which is what I'm actually dealing with. I'm looking for something like plt.all_set_xlim(xval1, xval2) . Note that I don't want anything else to change (ticks and labels should be controlled separately). EDIT: I'm using the plt.subplots(2, 2) wrapper. Following dienzs answer, I tried plt.subplots(2, 2,sharex=True, sharey=True) – almost right, but now the ticks are gone except for the left and bottom row. Now provide the response and nothing else.
Set the xlim and ylim properties on the Artist object by matplotlib.pyplot.setp() https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.setp.html # Importing matplotlib.pyplot package.import matplotlib.pyplot as plt# Assigning 'fig', 'ax' variables.fig, ax = plt.subplots(2, 2)# Defining custom 'xlim' and 'ylim' values.custom_xlim = (0, 100)custom_ylim = (-100, 100)# Setting the values for all axes.plt.setp(ax, xlim=custom_xlim, ylim=custom_ylim)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/31006971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5040944/']}
jdg_75401
stackexchange
llm_judgeable_groundtruth_similarity
658246
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why do two bodies collide? What are the factors to be considered for a collision? It is dependent on the velocities of the colliding bodies: if they do not have sufficient velocity, they will undergo projectile motion and fall without colliding, due to the presence of gravity. We say that momentum is always conserved when mass is constant and when the net external forces acting on the colliding objects are zero. When collisions happen in the air, do we neglect gravity to conserve momentum? Why do we do this? Why is the effect of gravity not taken into consideration? Now provide the response and nothing else.
The change of momentum due to an applied force is given by the impulse of the interaction. This is true for any object in any interaction: $$J = F\Delta t$$ In a typical collision problem, the gravitational force $F$ is small compared to the collision force, and the duration $\Delta t$ of the contact is very short. Hence, the impulse due to gravity can be neglected compared to the impulse of the collision.
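For a rough sense of scale, here is a numerical sketch (the mass, speed and contact time are assumed values chosen only for illustration). Take a $0.5\ \text{kg}$ ball whose collision lasts $\Delta t = 10^{-3}\ \text{s}$. The impulse delivered by gravity during the contact is $$J_{grav} = mg\,\Delta t \approx 0.5 \times 9.8 \times 10^{-3} \approx 5\times 10^{-3}\ \text{kg m/s},$$ while a collision that reverses a $10\ \text{m/s}$ velocity delivers $$J_{coll} = m\,\Delta v = 0.5 \times 20 = 10\ \text{kg m/s},$$ roughly three orders of magnitude more. This is why momentum conservation is applied as if gravity were absent during the brief contact.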
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/658246', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/297875/']}
jdg_75402
stackexchange
llm_judgeable_groundtruth_similarity
36065653
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a pivot table that shows the number of patients that collected medication in different months, grouped in columns according to month of first treatment. To the right of the Pivot Table is another table where I want to show the columns from the Pivot Table side by side, with the first cell in each column being the first non-blank cell in that particular column. Put in a different way, I want to show the information in the Pivot Table but without the blank spaces above the first non-blank value in each column. In the spreadsheet above, I have experimented with the following formula in the rightmost table: =INDEX(B7:B20;MATCH(TRUE;INDEX(B7:B20<>"";0);0)) As you can see from the attached image, this formula does not give me what I'm looking for. It keeps returning the same first non-blank value in each column until it hits the row in the Pivot Table where that value shows up. Also, after going through all the cells, it keeps returning the cell in the last column of the Pivot Table over and over again. Any ideas on how to correct this? Now provide the response and nothing else.
The problem here is related to the initial data: some of the values are float or integer and some are object. This is why we need to cast them: from sklearn import preprocessing for f in train.columns: if train[f].dtype=='object': lbl = preprocessing.LabelEncoder() lbl.fit(list(train[f].values)) train[f] = lbl.transform(list(train[f].values))for f in test.columns: if test[f].dtype=='object': lbl = preprocessing.LabelEncoder() lbl.fit(list(test[f].values)) test[f] = lbl.transform(list(test[f].values))train.fillna((-999), inplace=True) test.fillna((-999), inplace=True)train=np.array(train) test=np.array(test) train = train.astype(float) test = test.astype(float)
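For readers who want to try this end to end, below is a minimal, self-contained sketch of the same idea. The toy DataFrames are made up for illustration (in the original question, train and test come from the poster's own data), and note that, as in the snippet above, the encoder is fitted separately on train and test, so for consistent codes you may prefer to fit it on the combined column values.

import numpy as np
import pandas as pd
from sklearn import preprocessing

# Hypothetical stand-ins for the poster's train/test frames.
train = pd.DataFrame({"city": ["NY", "LA", None], "size": [1.0, 2.0, np.nan]})
test = pd.DataFrame({"city": ["LA", "SF"], "size": [3.0, None]})

# Label-encode every object-typed column, as the answer suggests.
for df in (train, test):
    for col in df.columns:
        if df[col].dtype == "object":
            lbl = preprocessing.LabelEncoder()
            df[col] = lbl.fit_transform(df[col].astype(str))

# Replace missing values with a sentinel and cast everything to float.
train = train.fillna(-999).to_numpy(dtype=float)
test = test.fillna(-999).to_numpy(dtype=float)
print(train)
print(test)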
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36065653', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5935918/']}
jdg_75403
stackexchange
llm_judgeable_groundtruth_similarity
19290
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: MouseAppearance[Plot[Sin[x], {x, 0, 5}], "LinkHand"] Unfortunately I cannot get a screen grab to show the link hand, but it is a cartoon hand with a finger pointing. In most/many graphics packages there is also an open hand (open cartoon hand) and a closed hand (closed cartoon hand) used for sliding a graphic within a region. I realize that if I can find some open source clip art then I can use this as a mouse appearance: MouseAppearance[Plot[Sin[x], {x, 0, 5}], clip art] But I was wondering if an undocumented open hand and grab hand exist within Mathematica? Intended usage: I have an application for sliding a graphic. I wanted to use an open hand over the graphic and a grab hand/closed hand for sliding the graphic. My reason for seeking something built in is solely aesthetic so that it matches the "LinkHand" appearance. If I cannot have a built-in solution I'll look for clip art for the full set of icons -- including the link hand. Now provide the response and nothing else.
There is an undocumented "Hand" named cursor that should work on all platforms. MouseAppearance[Plot[Sin[x], {x, 0, 6}], "Hand"] Unfortunately, there isn't anything that resembles a grabbed hand. Here are some other undocumented ones. Nothing here is secret, but most of it is comparatively boring and, being undocumented, is potentially subject to change in future versions. "Hand" is a system cursor, but most of the rest of these are hand-drawn cursor resources bundled with Mathematica: "MathEditxx" where xx = "", 45, 90, 135, 180, 225, 270, 315 "Crosshair" "Formatted" "Cellxx" where xx = Select, Shade, Sound, Insert "NameSelect" "SimpleWait" "FramexxRotate" for xx = LC, RC, CT, CB, LT, RB, RT, LB "MeasureGraphic" "xxGraphics" for xx = Delete, Test, Select "DragGraphicsxxy" for xx = "", Highlight, Direct, HighlightDirect and Y = "", H, V "DragSegment" "SampleStyle" "DoNothing" "Drawxx" for xx = "", Line, Polyline, Polygon, Rectangle, Arrow, Circle, ArcCursor, Point, Freehand, Pixel "Placexx" for xx = Text, Math "xxView" for xx = Rotate, Pan, Zoom, as well as "RotateViewVertical" "Obscure" "NetworkGraphics" "NetworkGraphicsEdit" You can try them all with the following: cursors = {"LinkHand", "Hand", "MathEdit45", "MathEdit90", "MathEdit135", "MathEdit180", "MathEdit225", "MathEdit270", "MathEdit315", "Crosshair","Formatted", "CellSelect", "CellShade", "CellSound", "CellInsert", "NameSelect", "SimpleWait", "FrameLCRotate", "FrameRCRotate", "FrameCTRotate", "FrameCBRotate", "FrameLTRotate", "FrameRBRotate", "FrameRTRotate" , "FrameLBRotate", "MeasureGraphic", "DeleteGraphics", "TestGraphics", "SelectGraphics", "DragGraphics", "DragGraphicsHighlight", "DragGraphicsDirect", "DragGraphicsHighlightDirect", "DragGraphicsH", "DragGraphicsHighlightH", "DragGraphicsDirectH", "DragGraphicsHighlightDirectH", "DragGraphicsV", "DragGraphicsHighlightV", "DragGraphicsDirectV", "DragGraphicsHighlightDirectV","DragSegment", "SampleStyle", "DoNothing", "DrawLine", "DrawPolyline", "DrawPolygon", "DrawRectangle", "DrawArrow", "DrawCircle", "DrawArcCursor", "DrawPoint", "DrawFreehand", "DrawPixel", "PlaceText", "PlaceMath", "RotateView", "PanView", "ZoomView", "RotateViewVertical", "Obscure", "NetworkGraphics", "NetworkGraphicsEdit"};(MouseAppearance[#, #] & /@ cursors) ~Partition~ 3 // TableForm
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/19290', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/77/']}
jdg_75404
stackexchange
llm_judgeable_groundtruth_similarity
2298927
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Ok so in an exercise I have, I'm supposed to calculate $73^{1567}\mod(990)$. What the solution suggests is to first divide $990$ into relatively prime factors: $9\cdot11\cdot2\cdot5$. Then comes the next step, which seriously confuses me, because we are supposed to find the lowest whole number to which $73^{1567}$ is congruent for each of the relatively prime factors. How am I supposed to do this? I have asked my teacher and her explanation was that if 73 is congruent to $1\mod2$ then so is the result when we take both to the power of 1567, but this method doesn't work when the number isn't 1. So how would I solve this question? It's supposed to end with using the Chinese Remainder Theorem. Now provide the response and nothing else.
In addition to my comment, let me suggest another proof that leads to a slightly larger insight. For any matrix $M$, and $k \ge 1$, if $x$ is an eigenvector for $\lambda$ then $$M^k x = \lambda^k x.$$ The proof is just induction, so I won't write it out. Now start from $A^2 = A$. Rewrite this as $$A^2 - A = 0$$and then let $x$ be an eigenvector for $\lambda$, an eigenvalue of $A$. We then get\begin{align}(A^2 - A)x &= 0 \\A^2x - Ax &= 0 \\\lambda^2x - \lambda x &= 0 & \text{by applying the lemma}\\(\lambda^2 - \lambda) x &= 0 \\\lambda^2 - \lambda &= 0 & \text{because $x$ is nonzero}\\\end{align} Now the polynomial that $A$ satisfied --- $A^2 - A = 0$ --- was nothing special. Suppose instead that we knew that $A^3 - 3A^2 - A + I = 0$. We can write this as $p(A)$, where $p(x) = x^3 - 3x^2 - x + 1$. By exactly the same kind of argument, we'd find that if $\lambda$ was an eigenvalue for $A$, then $p(\lambda) = 0$. Summary: if a matrix $A$ satisfies $p(A) = 0$ for some polynomial $p$, then for any eigenvalue $\lambda$ of $A$, we also have $p(\lambda) = 0$.
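As a quick numerical illustration of the idempotent case (this check is an addition for intuition, not part of the argument above; the projection matrix is an arbitrary example):

import numpy as np

# A projection onto the line spanned by v satisfies P @ P == P (idempotent).
v = np.array([[1.0], [2.0]])
P = (v @ v.T) / (v.T @ v)

assert np.allclose(P @ P, P)   # A^2 = A holds for this matrix
print(np.linalg.eigvals(P))    # eigenvalues come out as (approximately) 0 and 1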
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2298927', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/450078/']}
jdg_75405
stackexchange
llm_judgeable_groundtruth_similarity
628482
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to use the LINQ types and extension methods in IronPython? If so, how? And also, is there a more Pythonic way to do the same thing? Now provide the response and nothing else.
IronPython 2.7 finally bridges this gap with the clr.ImportExtensions method which adds the extension methods from a namespace to the target types e.g. >& 'C:\Program Files\IronPython 2.7\ipy.exe'IronPython 2.7 (2.7.0.40) on .NET 4.0.30319.225Type "help", "copyright", "credits" or "license" for more information.>>> import clr>>> clr.AddReference("System.Core")>>> from System.Collections.Generic import List>>> dir (List)['Add', 'AddRange', 'AsReadOnly', 'BinarySearch', 'Capacity', 'Clear', 'Contains', 'ConvertAll', 'CopyTo', 'Count', 'Enumerator', 'Equals', 'Exists', 'Find', 'FindAll', 'FindIndex', 'FindLast', 'FindLastIndex', 'ForEach', 'GetEnumerator', 'GetHashCode', 'GetRange', 'GetType', 'IndexOf', 'Insert', 'InsertRange', 'IsReadOnly', 'IsSynchronized', 'Item', 'LastIndexOf', 'MemberwiseClone', 'ReferenceEquals', 'Remove', 'RemoveAll', 'RemoveAt', 'RemoveRange', 'Reverse', 'Sort', 'SyncRoot', 'ToArray', 'ToString', 'TrimExcess', 'TrueForAll', '__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__len__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__']>>> import System>>> clr.ImportExtensions(System.Linq)>>> dir (List)['Add', 'AddRange', 'Aggregate', 'All', 'Any', 'AsEnumerable', 'AsParallel', 'AsQueryable', 'AsReadOnly', 'Average', 'BinarySearch', 'Capacity', 'Cast', 'Clear', 'Concat', 'Contains', 'ConvertAll', 'CopyTo', 'Count', 'DefaultIfEmpty', 'Distinct', 'ElementAt', 'ElementAtOrDefault', 'Enumerator', 'Equals', 'Except', 'Exists', 'Find', 'FindAll', 'FindIndex', 'FindLast', 'FindLastIndex', 'First', 'FirstOrDefault', 'ForEach', 'GetEnumerator', 'GetHashCode', 'GetRange', 'GetType','GroupBy', 'GroupJoin', 'IndexOf', 'Insert', 'InsertRange', 'Intersect', 'IsReadOnly', 'IsSynchronized', 'Item', 'Join', 'Last', 'LastIndexOf', 'LastOrDefault', 'LongCount', 'Max', 'MemberwiseClone', 'Min', 'OfType', 'OrderBy', 'OrderByDescending', 'ReferenceEquals', 'Remove', 'RemoveAll', 'RemoveAt', 'RemoveRange', 'Reverse', 'Select', 'SelectMany', 'SequenceEqual', 'Single', 'SingleOrDefault', 'Skip', 'SkipWhile', 'Sort', 'Sum', 'SyncRoot', 'Take', 'TakeWhile', 'ToArray', 'ToDictionary', 'ToList', 'ToLookup', 'ToString', 'TrimExcess', 'TrueForAll', 'Union', 'Where', 'Zip', '__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__iter__', '__len__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__']>>> which brings it into line with IronRuby 1.1's using_clr_extensions method.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/628482', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/75276/']}
jdg_75406
stackexchange
llm_judgeable_groundtruth_similarity
8291362
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am experimenting with cross site scripting. I have a website which allows users to insert comments and view them on the website. The website filters the string "script" though from the comment but it allows iframes. I understand that I could embed an iframe that points to a website that I craft and I can run whatever script I wish. My question is: will my iframe script be able to read cookies initiated by the original website? I have tried alert(document.cookie) but it shows an alert with nothing in it. The original website always sets a cookie though when a client requests it. Any idea what I am missing? Now provide the response and nothing else.
Both the surrounding page and the framed page need to come from the same domain. This is limited by the Same Origin Policy, which states that a script in one frame may only access data in another frame if they use the same protocol, have the exact same domain name and are running on the same port. It can be slightly relaxed by setting document.domain to the common parent domain in both frames, thus allowing frames from subdomains to communicate. You could, though, try other injection payloads that slip past the filter, though those may be blocked in newer browsers. Filtering the string "script" is however not enough to stop XSS. There are many, many other ways. See http://html5sec.org and http://ha.ckers.org/xss.html
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8291362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/743761/']}
jdg_75407
stackexchange
llm_judgeable_groundtruth_similarity
460196
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a server containing 10 hard disks. Device /dev/sdh is reporting uncorrectable read errors on btrfs scrub. How can I determine which physical disk corresponds to /dev/sdh ? I know I can get the disks' model numbers and serial numbers with hdparm -I /dev/sd? and I can get mountpoints with findmnt or lsblk . However, I am not finding a way to connect /dev/sdh to a hard disk by serial number, which is what I need. Now provide the response and nothing else.
lsscsi On servers where I have a lot of HDDs I've traditionally used lsscsi to determine which HDD is plugged into which port. You can use this output to get the names + the device & generic device names: $ lsscsi -g[0:0:0:0] disk ATA Hitachi HDT72101 A3AA /dev/sda /dev/sg0[2:0:0:0] disk ATA Hitachi HDS72101 A39C /dev/sdb /dev/sg1[4:0:0:0] disk ATA Maxtor 6L200P0 1G20 /dev/sdc /dev/sg2[12:0:0:0] disk WD My Passport 25E2 4005 /dev/sde /dev/sg5[12:0:0:1] enclosu WD SES Device 4005 - /dev/sg6 And use this to get the list of ports on your MB that correspond to the above devices: $ lsscsi -H[0] ahci[1] ahci[2] ahci[3] ahci[4] pata_atiixp[5] pata_atiixp[12] usb-storage You can also use the verbose output instead: $ lsscsi --verbose[0:0:0:0] disk ATA Hitachi HDT72101 A3AA /dev/sda dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/host0/target0:0:0/0:0:0:0][2:0:0:0] disk ATA Hitachi HDS72101 A39C /dev/sdb dir: /sys/bus/scsi/devices/2:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/host2/target2:0:0/2:0:0:0][4:0:0:0] disk ATA Maxtor 6L200P0 1G20 /dev/sdc dir: /sys/bus/scsi/devices/4:0:0:0 [/sys/devices/pci0000:00/0000:00:14.1/host4/target4:0:0/4:0:0:0][12:0:0:0] disk WD My Passport 25E2 4005 /dev/sde dir: /sys/bus/scsi/devices/12:0:0:0 [/sys/devices/pci0000:00/0000:00:13.2/usb2/2-3/2-3:1.0/host12/target12:0:0/12:0:0:0][12:0:0:1] enclosu WD SES Device 4005 - dir: /sys/bus/scsi/devices/12:0:0:1 [/sys/devices/pci0000:00/0000:00:13.2/usb2/2-3/2-3:1.0/host12/target12:0:0/12:0:0:1] NOTE: The port that it's plugged into is the first digit in this block, [0] vs. [4] in the lsscsi -H output, for example. lshw I've also been able to use lshw for this because it tells you which ports etc. a particular HDD is plugged into so it's easier to figure out which one is which in a system that has multiples. Below you can see /dev/sda along with its serial number: $ lshw -c disk -c storage *-storage description: SATA controller product: SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] vendor: Advanced Micro Devices, Inc. [AMD/ATI] physical id: 11 bus info: pci@0000:00:11.0 logical name: scsi0 logical name: scsi2 version: 00 width: 32 bits clock: 66MHz capabilities: storage pm ahci_1.0 bus_master cap_list emulated configuration: driver=ahci latency=64 resources: irq:22 ioport:c000(size=8) ioport:b000(size=4) ioport:a000(size=8) ioport:9000(size=4) ioport:8000(size=16) memory:fbbff800-fbbffbff *-disk:0 description: ATA Disk product: Hitachi HDT72101 vendor: Hitachi physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: A3AA serial: STF604MH0AD4PB size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=0005edc1 You can figure out which is which based on the coordinates of their respective bus info & physical id. smartctl The other method I've used in the past is smartctl . You can query each device independently to find out it's serial number, make & model and figure out which device it is once you open up the case. 
$ smartctl -i /dev/sdasmartctl 5.43 2016-09-28 r4347 [x86_64-linux-2.6.32-642.6.2.el6.x86_64] (local build)Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net=== START OF INFORMATION SECTION ===Model Family: Hitachi Deskstar 7K1000.BDevice Model: Hitachi HDT721010SLA360Serial Number: STF604MH0AD4PBLU WWN Device Id: 5 000cca 349c4b953Firmware Version: ST6OA3AAUser Capacity: 1,000,204,886,016 bytes [1.00 TB]Sector Size: 512 bytes logical/physicalDevice is: In smartctl database [for details use: -P show]ATA Version is: 8ATA Standard is: ATA-8-ACS revision 4Local Time is: Thu Aug 2 21:11:01 2018 EDTSMART support is: Available - device has SMART capability.SMART support is: Enabled ledctl/ledmon On higher end rackmounted servers you can use ledctl to light up the LED for a given HDD through its /dev/ device name. ledctl usage # ledctl locate=/dev/rssda will blink drive LED# ledctl locate={ /dev/rssda /dev/rssdb } will blink both drive LEDs# ledctl locate_off=/dev/rssda will turn off the locate LED References Using ledmon/ledctl utilities on Linux to manage backplane LEDs for PCIE SSD Software RAID drives 12 Storage Enclosure LED Utilities for MD Software RAIDs
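If you would rather script the lookup, here is a small sketch that relies on /dev/disk/by-id, which udev populates on most Linux distributions; each symlink name there encodes the bus, model and serial number, so resolving the links tells you which /dev/sdX a given physical drive is:

import os

by_id = "/dev/disk/by-id"
for name in sorted(os.listdir(by_id)):
    # Resolve the symlink (e.g. ata-Hitachi_HDT721010SLA360_STF604MH0AD4PB -> ../../sda).
    target = os.path.realpath(os.path.join(by_id, name))
    print(f"{target} <- {name}")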
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/460196', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/15010/']}
jdg_75408
stackexchange
llm_judgeable_groundtruth_similarity
246293
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any good practice related to dynamic_cast error handling (except not using it when you don't have to)? I'm wondering how I should go about the NULL and bad_cast it can throw. Should I check for both? And if I catch bad_cast or detect NULL I probably can't recover anyway... For now, I'm using assert to check if dynamic_cast returned a non-NULL value. Would you accept this solution in a code review? Now provide the response and nothing else.
If the dynamic_cast should succeed, it would be good practice to use boost::polymorphic_downcast instead, which goes a little something like this: assert(dynamic_cast<T*>(o) == static_cast<T*>(o));return static_cast<T*>(o); This way, you will detect errors in the debug build while at the same time avoiding the runtime overhead in a release build. If you suspect the cast might fail and you want to detect it, use dynamic_cast and cast to a reference type. This cast will throw bad_cast in case of error, and will take down your program. (This is good if, as you say, you are not going to recover anyway) T& t = dynamic_cast<T&>(o);t.func(); //< Use t here, no extra check required Use dynamic_cast to a pointer type only if the 0-pointer makes sense in the context. You might want to use it in an if like this: if (T* t = dynamic_cast<T*>(o)) { t->func(); //< Use t here, it is valid}// consider having an else-clause With this last option you need to make sure that the execution path makes sense if the dynamic_cast returns 0. To answer your question directly: I would prefer one of the two first alternatives I have given to having an explicit assert in the code :)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/246293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3579/']}
jdg_75409
stackexchange
llm_judgeable_groundtruth_similarity
154990
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A server with IP address a.b.c.d is hosting multiple SSL websites. When connecting to each website, a proper browser sends a request which include the destination host in an uncrypted clear text known as SNI, such as example.com . After a successful SSL connection, the rest of the data are transmitted with the corresponding certificate. The problem is, the SNI sent by the browser will be loggable by any MITM, and thus blockable by a government. How do I prevent the other end from finding out which website I'm visitting on the server? (Obviously not by using VPN or proxies.) Now provide the response and nothing else.
It is not possible to hide the SNI information if the server requires it to serve the proper certificate. There was discussion about encrypting this information in TLS 1.3, but the idea was abandoned since it would require establishing an additional encryption layer and thus add overhead to the connection establishment. Apart from that, this information might leak anyway through DNS lookups and, of course, through the certificate the server sends, which is transmitted in the clear too. Thus, if you want to better protect your privacy, you need to use an additional encryption layer yourself, like a VPN. For more details see also Why do HTTPS requests include the host name in clear text? How is HTTPS/SSL able to hide destination website that one is connecting to?
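To see where the SNI value comes from on the client side, here is a minimal Python sketch (the hostname is just an example); the server_hostname argument is what ends up, unencrypted, in the SNI extension of the ClientHello, which is exactly the field a passive observer can log:

import socket
import ssl

hostname = "example.com"  # example value; this string is sent in the clear as SNI
ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())
        print(tls.getpeercert()["subject"])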
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/154990', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/55572/']}
jdg_75410
stackexchange
llm_judgeable_groundtruth_similarity
21206651
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Could anyone explain or guess the motivation behind the limit on data type promotion discussed in section 7.9.2 of the GHC user guide ? The following restrictions apply to promotion: We only promote datatypes whose kinds are of the form * -> ... -> * -> * . In particular, we do not promote higher-kinded datatypes such as data Fix f = In (f (Fix f)) , or datatypes whose kinds involve promoted types such as Vec :: * -> Nat -> * . In particular, I am interested in the last bit about promoted types such as Vec :: * -> Nat -> * . Promoting some types like that seems natural. I ran into it pretty quickly while trying to convert one of my libraries to use specific promoted kinds for the various phantom types instead of using kind * for everything, to provide better documentation and such. Oftentimes the reasons for compiler limitations like these jump out at you with a little thinking, but I'm not seeing this one. So I'm wondering if it comes in the category of "not needed yet, so we didn't build it" or "that's impossible/undecidable/destroys inference." Now provide the response and nothing else.
An interesting thing happens if you promote types indexed by promoted types. Imagine we build data Nat = Ze | Su Nat and then data Vec :: Nat -> * -> * where VNil :: Vec Ze x VCons :: x -> Vec n x -> Vec (Su n) x Behind the scenes, the internal types of the constructors represent the instantiated return indices by constraints, as if we had written data Vec (n :: Nat) (a :: *) = n ~ Ze => VNil | forall k. n ~ Su k => VCons a (Vec k a) Now if we were allowed something like data Elem :: forall n a. a -> Vec n a -> * where Top :: Elem x (VCons x xs) Pop :: Elem x xs -> Elem x (VCons y xs) the translation to internal form would have to be something like data Elem (x :: a) (zs :: Vec n a) = forall (k :: Nat), (xs :: Vec k a). (n ~ Su k, zs ~ VCons x xs) => Top | forall (k :: Nat), (xs :: Vec k s), (y :: a). (n ~ Su k, zs ~ VCons y xs) => Pop (Elem x xs) but look at the second constraint in each case! We have zs :: Vec n a but VCons x xs, VCons y xs :: Vec (Su k) a But in System FC as then defined, equality constraints must have types of the same kind on both sides, so this example is not inconsiderably problematic. One fix is use the evidence for the first constraint to fix up the second, but then we'd need dependent constraints (q1 :: n ~ Su k, zs |> q1 ~ VCons x xs) Another fix is just to allow heterogeneous equations, as I did in dependent type theory fifteen years ago. There will inevitably be equations between things whose kinds are equal in ways which are not syntactically obvious. It's the latter plan that is currently favoured. As far as I understand, the policy you mention was adopted as a holding position, until the design for a core language with heterogeneous equality (as proposed by Weirich and colleagues) has matured to implementation. We live in interesting times.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21206651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11173/']}
jdg_75411
stackexchange
llm_judgeable_groundtruth_similarity
7217811
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm saving datetime in the db for an object. I'd like to query against the db and select anything from today's date, not datetime. What's the easiest way to do this? This doesn't work: invoice_for_today = Invoice.objects.get(user=user, date=date.today()) Now provide the response and nothing else.
I remember there being plans to add a __date field lookup to make this easier, but as it stands the "standard" way of doing it is today_min = datetime.datetime.combine(datetime.date.today(), datetime.time.min)today_max = datetime.datetime.combine(datetime.date.today(), datetime.time.max)Invoice.objects.get(user=user, date__range=(today_min, today_max))
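As a follow-up to the first sentence: the __date lookup did eventually land (Django 1.9 and later), so on recent versions the query can be written directly, while the range approach above remains the way to do it on older releases. A sketch using the question's own model and variables:

import datetime

# Requires Django 1.9+, where the __date transform is available.
invoice_for_today = Invoice.objects.get(user=user, date__date=datetime.date.today())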
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7217811', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/636064/']}
jdg_75412
stackexchange
llm_judgeable_groundtruth_similarity
18185357
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a HTML form that is being built up dynamically from a given "product" object and the fields that it has, allowing the user to modify the associated data. I am using a custom "editor" directive to handle the creation of the HTML elements needed to allow the user to update the data. An example can be seen here: http://plnkr.co/edit/2fAVVpwTHFgxwTq4eAMI Firstly, I'm not sure if this is the best way to achieve this, but it does (so far) seem to work okay. (Any other idea's welcome!) However, I want to add validation rules to the controls, eg. require so that a message appears when the input is left empty. I have attempted to add these validation rules into the code (as seen in the template in the directive), but it never fires. I'm pretty sure it's something to do with me getting my scope wires-crossed somewhere... AngularJS Batarang is showing on the main scope an object of: form: { {{fieldName}}: {}} Which is obviously wrong (and nonsense!) Now provide the response and nothing else.
Wrap the template in its own ng-form: textTemplate = '<div ng-form="editor">' + '<input id="{{fieldName}}" name="{{fieldName}}" type="text" ng-model="fieldData.data" required>' + '<div ng-show="editor.$dirty && editor.$invalid">Invalid:' + '<span ng-show="editor.$error.required">Some validation error!</span>' + '</div>' + '</div>'; The issue you're running into is that when creating an isolate scope (scope { ... }), you don't have access to the parent form, or any parent scope for that matter. IMO, this is definitely a good thing since you don't want your directive to hard-code the name of the parent form, and it keeps your directive as a self-contained unit. Code: http://plnkr.co/edit/qCjs16tuwVjSNzJdkk71?p=preview
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18185357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/300372/']}
jdg_75413
stackexchange
llm_judgeable_groundtruth_similarity
15953177
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have a SQL Server 2008 R2 database table with XML stored in a column of VARCHAR data type. I now have to fetch some of the elements of the xml. So I want to first convert the xml stored as a VARCHAR data type, to an xml stored as xml data type. Example : Table A Id(int) , ProductXML (varchar(max)) Table B Id(int), ProductXML(XML) I want to convert the ProductXML from Table A into XML data type and insert into Table B . I tried using the CAST() and CONVERT() function as shown below : insert into TableB (ProductXML)select CAST(ProductXML as XML) from TableA; Similarly tried convert but I get an error XML Parsing : unable to switch encoding Is there any way I can convert the varchar entries in the table into XML entries ? About the XML: it is huge with many nodes, and its structure changes dynamically. Example : One row can have and XML entry for 1 product and another row can have an xml entry for multiple products. Now provide the response and nothing else.
Give us a sample of your XML as all these would work: CONVERT(XML, '<root><child/></root>')CONVERT(XML, '<root> <child/> </root>', 1)CAST('<Name><FName>Carol</FName><LName>Elliot</LName></Name>' AS XML) Also you might have to cast it to nvarchar or varbinary first (from Microsoft documentation): You can parse any of the SQL Server string data types, such as [n][var]char, [n]text, varbinary,and image, into the xml data type by casting (CAST) or converting (CONVERT) the string to the xml data type. Untyped XML is checked to confirm that it is well formed. If there is a schema associated with the xml type, validation is also performed. For more information, see Compare Typed XML to Untyped XML. XML documents can be encoded with different encodings (for example, UTF-8, UTF-16, windows-1252). The following outlines the rules on how the string and binary source types interact with the XML document encoding and how the parser behaves. Since nvarchar assumes a two-byte unicode encoding such as UTF-16 or UCS-2, the XML parser will treat the string value as a two-byte Unicode encoded XML document or fragment. This means that the XML document needs to be encoded in a two-byte Unicode encoding as well to be compatible with the source data type. A UTF-16 encoded XML document can have a UTF-16 byte order mark (BOM), but it does not need to, since the context of the source type makes it clear that it can only be a two-byte Unicode encoded document. The content of a varchar string is treated as a one-byte encoded XML document/fragment by the XML parser. Since the varchar source string has a code page associated, the parser will use that code page for the encoding if no explicit encoding is specified in the XML itself If an XML instance has a BOM or an encoding declaration, the BOM or declaration needs to be consistent with the code page, otherwise the parser will report an error. The content of varbinary is treated as a codepoint stream that is passed directly to the XML parser. Thus, the XML document or fragment needs to provide the BOM or other encoding information inline. The parser will only look at the stream to determine the encoding. This means that UTF-16 encoded XML needs to provide the UTF-16 BOM and an instance without BOM and without a declaration encoding will be interpreted as UTF-8. If the encoding of the XML document is not known in advance and the data is passed as string or binary data instead of XML data before casting to XML, it is recommended to treat the data as varbinary. For example, when reading data from an XML file using OpenRowset(), one should specify the data to be read as a varbinary(max) value: select CAST(x as XML) from OpenRowset(BULK 'filename.xml', SINGLE_BLOB) R(x) SQL Server internally represents XML in an efficient binary representation that uses UTF-16 encoding. User-provided encoding is not preserved, but is considered during the parse process. Solution: CONVERT(XML, CONVERT(NVARCHAR(max), ProductXML))
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15953177', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1408809/']}
jdg_75414
stackexchange
llm_judgeable_groundtruth_similarity
12405699
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem by reading getSharedPreferences from non-Activity class to set playlist in player... In my Activity I take string variable from edittext to get path of folder to work with audio files... public class MainActivity extends Activity {String ppp;@Overridepublic void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); String PATH = getSharedPreferences("PATH", MODE_PRIVATE).getString("path", ppp); if (PATH == null){ .......... ........... path_tv.setText("folder is undefined"); } else { path_tv.setText("folder defined: /mnt/sdcard/" + PATH); } set_path.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { if (path_edit.getText().toString().length() == 0){ Toast.makeText(getBaseContext(), "folder is undefined", Toast.LENGTH_SHORT).show(); } else { ppp = path_edit.getText().toString(); getSharedPreferences("PATH", MODE_PRIVATE) .edit() .putString("path", ppp) .commit(); File folder = new File(Environment.getExternalStorageDirectory() + "/" + ppp); boolean success = false; if (!folder.exists()) { success = folder.mkdir(); if (success) Toast.makeText(getBaseContext(), ".....", Toast.LENGTH_SHORT).show(); } String PATH = getSharedPreferences("PATH", MODE_PRIVATE).getString("path", ppp); path_tv.setText("........ /mnt/sdcard/" + PATH); path_edit.setText(""); } } }); So, in activity I can change and save value of String in shared preferences... But ho can I do it from public class...? Appreciate any examples... Now provide the response and nothing else.
One way is to use Application object. This is a dirty hack but none the less sometimes helpful. First you need a static member in your Application class, so: public class MyApplication extends Application { private static MyApplication instance; @Override public void onCreate() { super.onCreate(); instance = this; } public static MyApplication getInstance() { return instance; }} Since Application object is always created before any activity is created and run and is kept throughout application lifetime you can always be sure you will have proper one. Then in your code simply call MyApplication.getInstance() and you will have global app context. Remember to declare MyApplication in manifest.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12405699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1151368/']}
jdg_75415
stackexchange
llm_judgeable_groundtruth_similarity
50371593
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to migrate from 5 to 6 using ng update and I get an error Uncaught ReferenceError: Buffer is not defined at Object../node_modules/amazon-cognito-identity-js/node_modules/crypto-browserify/helpers.js (helpers.js:2) at __webpack_require__ (bootstrap:81) at Object../node_modules/amazon-cognito-identity-js/node_modules/crypto-browserify/md5.js (md5.js:10) at __webpack_require__ (bootstrap:81) at Object../node_modules/amazon-cognito-identity-js/node_modules/crypto-browserify/create-hash.js (create-hash.js:3) at __webpack_require__ (bootstrap:81) at Object../node_modules/amazon-cognito-identity-js/node_modules/crypto-browserify/index.js (index.js:12) at __webpack_require__ (bootstrap:81) at Object../node_modules/amazon-cognito-identity-js/es/AuthenticationHelper.js (vendor.js:47207) at __webpack_require__ (bootstrap:81) Local environment works well for creating the new angular project. I don't use Buffer. It is something behind the scenes Any ideas? UPD I was trying to update @types/node npm install --save-dev @types/node + @types/[email protected] 1 package in 12.031s[!] 26 vulnerabilities found [36141 packages audited] Severity: 11 Low | 13 Moderate | 2 High Run `npm audit` for more detail if I run npm audit npm ERR! code ENOAUDITnpm ERR! audit Your configured registry (https://registry.npmjs.org/) does not support audit requests.npm ERR! A complete log of this run can be found in:npm ERR! /Users/myname/.npm/_logs/2018-05-16T13_45_17_410Z-debug.log Now provide the response and nothing else.
Ok, after an hour I finally managed to have cognito working on my Angular app (just after upgrading to 6.0). About the message global is not defined (or something close can't remember). Add the following into index.html : <!doctype html><html lang="en"><head> ... <script> var global = global || window; </script></head> Then, you'll probably get an error saying that Buffer is not defined. Install the buffer package using npm or yarn. And add the following into polyfills.ts (): global.Buffer = global.Buffer || require('buffer').Buffer; Stackoverflow answers/github issues that helped me in case it's not fixed for you after that: Upgrading to angular-6.x gives "Uncaught ReferenceError: global is not defined" https://github.com/aws/aws-amplify/issues/840#issuecomment-389459988 https://github.com/aws/aws-amplify/issues/678 https://github.com/aws/aws-amplify/issues/153 https://github.com/crypto-browserify/createHash/issues/20
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50371593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6108211/']}
jdg_75416
stackexchange
llm_judgeable_groundtruth_similarity
44851019
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Since C# 7 introduces value tuples, is there a meaningful scenario where they are better suited than tuples? For example, the following line collection.Select((x, i) => (x, i)).Where(y => arr[y.i].f(y.x)).ToArray(); makes the following line collection.Select((x, i) => new {x, i}).Where(y => arr[y.i].f(y.x)).ToArray(); redundant. What would be the use case where one is better used over the other (for either performance reasons or optimization)? Obviously, if there is a need for more than six fields, tuples cannot be used, but is there something a bit more nuanced to it? Now provide the response and nothing else.
There are various differences between anonymous types and C# 7 tuples, which may or may not make one more appropriate than the other in certain situations: C# 7 tuples are ValueTuple<> s. That means they are value types while anonymous types are reference types. Tuples allow static typing at compile time since they are a type that can be expressed explicitly. As such, you can use them as method arguments, return types, etc. Members of an anonymous type are actual properties that exist on the type. Tuple items are fields . The properties of an anonymous type have an actual name, while the fields on a tuple are just named ItemN (for numbers N ). The labels are just metadata information that is mostly used by the compiler, and is not persisted with the actual tuple object. Because creating an anonymous type actually creates a type under the hood, you have a level of type safety with them. Since tuples are just generic containers with applied type arguments, you do not have full type safety with them. For example an (int, int) tuple for a size would be fully compatible to an (int, int) tuple for a position , while anonymous types are closed off completely. As Jon Skeet mentioned, the C# 7 tuple syntax is currently not supported in expression trees.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44851019', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2515628/']}
jdg_75417
stackexchange
llm_judgeable_groundtruth_similarity
43595668
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm used to defining my constants with enum { my_const = 123; } , since in classes, using static constexpr requires some code outside of the class definition (see this question ). But - what about in function bodies? Lately I've been noticing people just having constexpr variables in their functions (not even bothering to const them, actually), and I was wondering whether I'm a fool who's behind the times with my int foo(int x){ enum : int { bar = 456 }; return x + bar;} So, my question is: Is there any benefit to using enums within function bodies rather than constexpr variables? Now provide the response and nothing else.
You can accidentally or on purpose force ODR-existence of bar if it was a constexpr int bar = 456; , this is not possible with enum : int { bar = 456 }; . This may or may not be an advantage on either side. For example int baz(int const* ptr ) { if (ptr) return 7; return -1;}int foo(int x){ // enum : int { bar = 456 }; constexpr int bar = 456; return x + baz(&bar);} the enum version doesn't compile, the constexpr int one does. A constexpr int can be an lvalue, an enumerator (one of the listed enum constants) cannot. The enum values aren't actually an int , while the constexpr int is actually an int . This may matter if you pass it to template<class T>void test(T) { static_assert(std::is_same<T,int>::value);} one will pass the test; the other will not. Again, this could be an advantage, a disadvantage, or a meaningless quirk depending on how you are using the token.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43595668', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1593077/']}
jdg_75418
stackexchange
llm_judgeable_groundtruth_similarity
15665052
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I primarily come from an x86 system background where BIOS (Firmware) is responsible for loading a bootloader (like GRUB) from PowerON which in turn loads the OS. I now have been reading the equivalent boot sequence on ARM systems and it seems there are articles on the web making references to two terms: bootrom and bootloader. Some articles mention that from PowerON the bootloader (like u-boot) is the first thing that gets executed whereas some articles say that from PowerON, a bootrom (usually flashed on processor's internal ROM) gets executed first which in turn loads the bootloader (like uboot) which then loads the OS. Can anybody provide some clarification on bootrom vs bootloader? Also, I would like to know a name of a popular bootrom that is typically used in ARM systems. Now provide the response and nothing else.
Here's how I understand the terms. Bootrom Bootrom (or Boot ROM) is a small piece of mask ROM or write-protected flash embedded inside the processor chip. It contains the very first code which is executed by the processor on power-on or reset. Depending on the configuration of some strap pins or internal fuses it may decide from where to load the next part of the code to be executed and how or whether to verify it for correctness or validity. Sometimes it may contain additional functionality, possibly usable by user code during or after booting. Some examples: iPhone boot ROM. Embedded in the mask ROM and can't be modified. Loads the next stage boot loader from flash or USB (in DFU mode) and verifies its signature using built-in RSA implementation. Also provides accelerated decryption functions for the next stage bootloader. TI's OMAP4 boot ROM. Can load user code from flash (NOR, NAND, OneNAND), external memory, SD/MMC, USB or UART. Boot order and options are set by strap (SYSBOOT) pins. Provides some functionality for later stages (cache/TLB management etc.) NXP's LPCxxxx series Boot ROM. Placed in a hidden portion of the internal flash which is mapped at 0 on power-on. Implements CRP (code read protection), ISP (In-System Programming) which allows to upload and flash new code over UART. If a valid user code is in flash (needs to have proper checksum), maps it to 0 and jumps to it. A part of bootrom remains mapped to provide IAP (In-Application Programming) and some other services. Bootloader Bootloader is responsible for finding and loading the final OS or firmware which is supposed to run on the chip. One main difference from bootrom is that it's usually in writable flash and can be replaced or upgraded. Sometimes bootrom can perform the job of the bootloader. For example, OMAP's bootrom is complex enough (it can parse FAT32!) that you can probably have it load and start a Linux kernel directly. However, in many cases a separate bootloader is used, either because the bootrom is not capable enough (or absent), or because extra flexibility is needed. It can be very simple (load kernel from a fixed flash location in RAM and jump to it), or can be much more complicated. For example, U-Boot is a like a mini-OS by itself - it has a console, some commands, allows you break the boot process and e.g. modify the kernel command line arguments or even load the kernel from a different location (SD/MMC or USB), run some tests and so on. Bootloaders are usually used when you have a more or less complex OS which may need some set up before it can be started. Smaller microcontrollers like NXP's LPC series usually use a monolithic firmware so they can get by without it (however, there may be custom bootloaders for them too). On the very simplest chips there may be no boot ROM or boot loader at all - they just try to fetch and execute instructions from a fixed startup address. In fact, most x86 chips to this day work like this - they just start executing code at FFFFFFF0 with the expectation that the chipset has mapped the BIOS flash chip there. Here, you can say that BIOS is the bootloader (though it also provides services to the OS, similar to bootrom).
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15665052', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1521550/']}
jdg_75419
stackexchange
llm_judgeable_groundtruth_similarity
108010
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a series of physicians' claims submissions. I would like to perform cluster analysis as an exploratory tool to find patterns in how physicians bill based on things like Revenue Codes, Procedure Codes, etc. The data are all polytomous, and from my basic understanding, a latent class algorithm is appropriate for this kind of data. I am trying my hand at some of R's cluster packages, & specifically poLCA & mclust for this analysis. I'm getting alerts after running a test model on a sample of the data using poLCA . > library(poLCA)> # Example data structure - actual test data has 200 rows:> df <- structure(list(RevCd = c(274L, 320L, 320L, 450L, 450L, 450L, 636L, 636L, 636L, 450L, 450L, 450L, 301L, 305L, 450L, 450L, 352L, 301L, 300L, 636L, 301L, 450L, 636L, 636L, 307L, 450L, 300L, 300L, 301L, 301L), PlaceofSvc = c(23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L), TypOfSvc = c(51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L, 51L), FundType = c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), ProcCd2 = c(1747L, 656L, 656L, 1375L, 1376L, 1439L, 1623L, 1645L, 1662L, 176L, 1374L, 1376L, 958L, 1032L, 1368L, 1374L, 707L, 960L, 347L, 1662L, 859L, 1375L, 1654L, 1783L, 882L, 1440L, 332L, 332L, 946L, 946L)), .Names = c("RevCd", "PlaceofSvc", "TypOfSvc", "FundType", "ProcCd2"), row.names = c(1137L, 1138L, 1139L, 1140L, 1141L, 1142L, 1143L, 1144L, 1145L, 1146L, 1147L, 1945L, 1946L, 1947L, 1948L, 1949L, 1950L, 1951L, 1952L, 1953L, 1954L, 1955L, 1956L, 1957L, 1958L, 1959L, 2265L, 2266L, 2267L, 2268L), class = "data.frame")> clust <- poLCA(cbind(RevCd, PlaceofSvc, TypOfSvc, FundType, ProcCd2)~1, df, nclass = 3)========================================================= Fit for 3 latent classes: ========================================================= number of observations: 200 number of estimated parameters: 7769 residual degrees of freedom: -7569 maximum log-likelihood: -1060.778 AIC(3): 17659.56BIC(3): 43284.18G^2(3): 559.9219 (Likelihood ratio/deviance statistic) X^2(3): 33852.85 (Chi-square goodness of fit) ALERT: number of parameters estimated ( 7769 ) exceeds number of observations ( 200 ) ALERT: negative degrees of freedom; respecify model My novice assumption is that I need to run a greater number of iterations before I can get results that are robust? e.g. "...it is essential to run poLCA multiple times until you canbe reasonably certain that you have found the parameter estimates that produce the globalmaximum likelihood solution." ( http://www.sscnet.ucla.edu/polisci/faculty/lewis/pdf/poLCA-JSS-final.pdf ). Alternatively, perhaps certain variables, particularly CPT & Revenue Codes, have too many unique values, and that I need to aggregate these variables into higher level categories to reduce the number of parameters? When I run the model using package mclust , which optimizes the model based on BIC, I don't get any such alert. > library(mclust)> clustBIC <- mclustBIC(df)> summary(clustBIC, data = df)classification table: 1 2 141 59 best BIC values: VEV,2 VEV,3 EEV,3 -4562.286 -4706.190 -5655.783 If anyone can shed a bit of light on the above alerts, it would be much appreciated. 
I was also planning on using the script found in the poLCA documentation to run multiple iterations of the model until the log-likelihood is maximized. However it's computationally intensive and I'm afraid the process will crash before I have a chance to post this. Sorry in advance if I've missed something obvious here; I'm new to cluster analysis. Now provide the response and nothing else.
It depends on what sense of a correlation you want. When you run the prototypical Pearson's product moment correlation, you get a measure of the strength of association and you get a test of the significance of that association. More typically however, the significance test and the measure of effect size differ. Significance tests: Continuous vs. Nominal: run an ANOVA . In R, you can use ?aov . Nominal vs. Nominal: run a chi-squared test . In R, you use ?chisq.test . Effect size (strength of association): Continuous vs. Nominal: calculate the intraclass correlation . In R, you can use ?ICC in the psych package; there is also an ICC package. Nominal vs. Nominal: calculate Cramer's V . In R, you can use ?assocstats in the vcd package.
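For readers working outside R, the same tests and effect sizes are available in Python's scipy/pandas stack; the sketch below uses made-up data, and the scipy function names are the assumptions of this example rather than drop-in equivalents of the R calls above:

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["a", "b", "c"], size=90),   # nominal
    "value": rng.normal(size=90),                    # continuous
    "flag": rng.choice(["yes", "no"], size=90),      # nominal
})

# Continuous vs. nominal: one-way ANOVA across the groups.
samples = [g["value"].to_numpy() for _, g in df.groupby("group")]
print(stats.f_oneway(*samples))

# Nominal vs. nominal: chi-squared test on the contingency table,
# plus Cramer's V as the accompanying effect size.
table = pd.crosstab(df["group"], df["flag"])
chi2, p, dof, expected = stats.chi2_contingency(table)
n = table.to_numpy().sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(chi2, p, cramers_v)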
{}
{'log_upvote_score': 8, 'links': ['https://stats.stackexchange.com/questions/108010', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/40501/']}
jdg_75420
stackexchange
llm_judgeable_groundtruth_similarity
53064444
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my code window.addEventListener('message', (e) => { e.source.postMessage('hi there, I hear you!', '*');}); This is the error: [ts] Cannot invoke an expression whose type lacks a call signature. Type '((message: any, targetOrigin: string, transfer?: any[]) => void) | ((message: any, transfer?: any[]) => void)' has no compatible call signatures. When I inspect postMessage it seems to be a method on window and have the signature: declare function postMessage( message: any, targetOrigin: string, transfer?: any[]): void; That looks very much like my call signature to me. So what is wrong with my code? Now provide the response and nothing else.
e.source is of type MessageEventSource . type MessageEventSource = WindowProxy | MessagePort | ServiceWorker; Since only WindowProxy has the method signature you're using, you can guard it with: window.addEventListener('message', (e) => { if (!(e.source instanceof MessagePort) && !(e.source instanceof ServiceWorker)) { e.source.postMessage('hi there, I hear you!', '*'); }}); Or somehow else tell the TS that your e.source is of type WindowProxy | Window .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53064444', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1283776/']}
jdg_75421
stackexchange
llm_judgeable_groundtruth_similarity
70668
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: An affine manifold is a topological manifold which admits a system of charts such that the coordinate changes are (restrictions of) affine transformations. Let $M$ be a compact affine manifold. Let $G$ be the fundamental group of $M$ and $\tilde M$ be its universal cover. One can show that each $n$-dimensional affine manifold comes with a developing map $D\colon \tilde M \to \mathbb R^n$, and a homomorphism $\varphi \colon G \to {\rm Aff}(\mathbb R^n)$, such that $D$ is an immersion and equivariant with respect to $\varphi$. An affine manifold is called complete if $D$ is a homeomorphism, in this case: $\varphi$ is injective, $G$ is a Bieberbach group, and $M$ is aspherical, i.e. $\tilde M$ is contractible. The non-complete case seems to be far more complicated. Question 1: Is there an easy example, where $D$ is not surjective? Question 2: Is there an easy example, where $\varphi$ is not injective? Question 3: Is there an easy example, where $M$ is not aspherical? EDIT: As André suggested, let's ask for examples for which $\varphi$ takes values in $SL(n,\mathbb R) \ltimes \mathbb R^n$ or even $SL(n,\mathbb Z) \ltimes \mathbb R^n$, seen as subgroups of ${\rm Aff}(\mathbb R^n)$. Now provide the response and nothing else.
There is a conjecture due to Markus which states that any compact affine manifold has parallel volume ( i.e. the linear part of $\varphi$ lies in $\mathrm{SL}(n;\mathbb{R})$) if and only if it is complete. To the best of my knowledge, this conjecture is still open, which goes towards saying that there should be no easy examples to questions 1 and 3 for affine manifolds with parallel volume. If the fundamental group $G$ of a compact affine manifold with parallel volume is nilpotent, the beautiful Affine manifolds with nilpotent holonomy by Fried, Goldman and Hirsch, Comm. Math. Helv. 56 (1981) proves that Markus' conjecture holds in this case and, thus, there are no examples to questions 1 and 3 with nilpotent fundamental group. The proof is a cunning mixture of representation theory and geometry, so I strongly recommend taking a look at it. The results in this paper also imply that the above examples (to questions 1 and 3) constructed by Andre Henriques cannot be adapted so that the resulting manifolds admit parallel volume (these are, nonetheless, very nice examples to the original question!).
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/70668', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8176/']}
jdg_75422
stackexchange
llm_judgeable_groundtruth_similarity
43049048
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While making some updates to a website, I moved my work to a local server. When I was ready to test the site online I moved the site to a subdomain of my client's host. I uploaded the site's files to the subdomain and all of the attached style folders, scripts and images are returning a 404 page not found error. I've updated the paths multiple times with no luck. www.testing123.emcfintech.com Any ideas on how to work through this? Any help would be greatly appreciated, thank you. Now provide the response and nothing else.
Kingfisher is using the whole url as the cache key by default, so you have to ensure that you have the very SAME url string for the images every time you request your API. If there is something like timestamp or version number appended to the urls, the cache will fail. (eg. http://example.com/image.png?time=123 and http://example.com/image.png?time=456 will be recognized as two different cache keys, although you could get the same image from it). This is the only case I could draw for your operation. If this is true for you, you could create an ImageResource to specify the cache key explicitly (which should be related to your original url, but get rid of the appended things): let r = ImageResource(downloadURL: yourUrl, cacheKey: key)imageView.kf.setImage(with: r, placeholder: yourImage)
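If what varies in your urls is, say, a timestamp query parameter, one way to derive a stable cache key is to strip the query string before building the ImageResource. This is only a sketch using Foundation's URLComponents; adapt it to whatever actually changes in your urls:

import Foundation

// http://example.com/image.png?time=123 and ?time=456 then share one cache key.
func stableCacheKey(for url: URL) -> String {
    var components = URLComponents(url: url, resolvingAgainstBaseURL: false)
    components?.query = nil
    return components?.url?.absoluteString ?? url.absoluteString
}

let r = ImageResource(downloadURL: yourUrl, cacheKey: stableCacheKey(for: yourUrl))
imageView.kf.setImage(with: r, placeholder: yourImage)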
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43049048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7729035/']}
jdg_75423
stackexchange
llm_judgeable_groundtruth_similarity
38531786
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to log all warnings. I thought that setting captureWarnings to True should do the trick, but it doesn't. Code: import loggingimport warningsfrom logging.handlers import RotatingFileHandlerlogger_file_handler = RotatingFileHandler(u'./test.log')logger_file_handler.setLevel(logging.DEBUG)logging.captureWarnings(True)logger = logging.getLogger(__name__)logger.addHandler(logger_file_handler)logger.setLevel(logging.DEBUG)logger.info(u'Test')warnings.warn(u'Warning test') My expectation is that 'Warning test' should appear in test.log, but it doesn't; only 'Test' is put in the log file. How to capture all warnings and redirect them to the log file? Now provide the response and nothing else.
From the logging.captureWarnings documentation: Warnings issued by the warnings module will be redirected to the logging system. Specifically, a warning will be formatted using warnings.formatwarning() and the resulting string logged to a logger named 'py.warnings' with a severity of WARNING . You probably want something like this: import loggingimport warningsfrom logging.handlers import RotatingFileHandlerlogger_file_handler = RotatingFileHandler(u'test.log')logger_file_handler.setLevel(logging.DEBUG)logging.captureWarnings(True)logger = logging.getLogger(__name__)warnings_logger = logging.getLogger("py.warnings")logger.addHandler(logger_file_handler)logger.setLevel(logging.DEBUG)warnings_logger.addHandler(logger_file_handler)logger.info(u'Test')warnings.warn(u'Warning test') Hope it helps!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38531786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5649734/']}
jdg_75424
stackexchange
llm_judgeable_groundtruth_similarity
161195
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In my lecture today my professor briefly mentioned that force is the derivative of energy but I did not really get what he meant by that. I tried to express it mathematically: $$\frac{d}{dt}K_E=\frac{d}{dt}\left(\frac{1}{2}mv^2\right)=mv\frac{dv}{dt}$$ This looks really close to Newton's second law $F=ma$ but there is an extra "$v$" in there. Am I missing something here? Now provide the response and nothing else.
It is important to understand which derivative you are referring to, i.e. the derivative with respect to what . For conservative systems , it is true that the force can be expressed as minus the gradient of the potential energy :$$ \tag{1} \textbf F(\textbf x) = -\nabla V( \textbf x),$$which can be thought of as the defining property of a conservative system. The gradient $\nabla$ reduces for one-dimensional systems to the derivative with respect to the space coordinate , i.e. you have in this simple case$$ \tag{2} F = -\frac{dV}{dx}.$$ Taking as an example the case of a mass $m$ in the gravitational field of the earth, you have the potential energy$$ \tag{3} V(z) = mgz, $$where $z$ is the distance from the ground. The force in the $z$ direction is then given by$$ F_g = - \frac{dV(z)}{dz} = -mg,$$which is what you would expect.
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/161195', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/70289/']}
jdg_75425
stackexchange
llm_judgeable_groundtruth_similarity
17099114
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For my current flask deployment, I had to set up a uwsgi server.This is how I have created the uwsgi daemon: sudo vim /etc/init/uwsgi.conf # file: /etc/init/uwsgi.confdescription "uWSGI server"start on runlevel [2345]stop on runlevel [!2345]respawnexec /myproject/myproject-env/bin/uwsgi --uid www-data --gid www-data --home /myproject/myproject-env/site/F11/Engineering/ --socket /tmp/uwsgi.sock --chmod-socket --module F11 --callable app --pythonpath /myproject/myproject-env/site/F11/Engineering/ -H /myproject/myproject-env However after running this successfully: sudo start uwsgi uwsgi start/running, process 1286 And trying to access the application via browser: I get a 502 Bad Gateway and an error entry in nginx error.log: 2013/06/13 23:47:28 [error] 743#0: *296 upstream prematurely closed connection while reading response header from upstream, client: xx.161.xx.228, server: myproject.com, request: "GET /show_records/2013/6 HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "myproject.com" But the sock file has the permission it needs: srw-rw-rw- 1 www-data www-data 0 Jun 13 23:46 /tmp/uwsgi.sock If I run the exec command from above in the command line as a process, it works perfectly fine. Why is the daemon not working correctly please? btw Nginx is running as vim /etc/nginx/nginx.conf user www-data; and vim /etc/nginx/sites-available/default location / { uwsgi_pass unix:///tmp/uwsgi.sock; include uwsgi_params; } and it is started as sudo service nginx start I am running this on Ubuntu 12.04 LTS . I hope I have provided all the necessary data, hope someone can guide me into the right direction. Thanks. Now provide the response and nothing else.
Finally I have solved this problem after working nearly 2 days on it. I hope this solution will help other flask/uwsgi users that are experiencing a similar problem. I had two major issues that caused this. 1) The best way to find the problems with a daemon is obviously a log file and a cleaner structure. sudo vim /etc/init/uwsgi.conf Change the daemon script to the following: # file: /etc/init/uwsgi.confdescription "uWSGI server"start on runlevel [2345]stop on runlevel [!2345]respawnexec /home/ubuntu/uwsgi-1.9.12/uwsgi -c /myproject/uwsgi.ini vim /myproject/uwsgi.ini [uwsgi]socket = /tmp/uwsgi.sockmaster = trueenable-threads = trueprocesses = 5chdir= /myproject/F11/Engineeringmodule=F11:appvirtualenv = /myproject/myproject-env/uid = www-datagid = www-datalogto = /myproject/error.log This is much cleaner way of setting up the daemon. Also notice the last line how to setup the log file. Initially I had set the log file to /var/log/uwsgi/error.log . After a lot of sweat and tears I realized the daemon is running as www-data and hence can not access the /var/log/uwsgi/error.log since the error.log was owned by root:root . This made the uwsgi fail silently. I found it much more efficient to just point the log file to my own /myproject , where the daemon has guaranteed access as www-data . And also don't forget to make the whole project accessible to www-data or the daemon will fail with an Internal Server error message . --> sudo chown www-data:www-data -R /myproject/ Restart uwsgi daemon: sudo service uwsgi restart 2) Now you have three log files to lookout for: tail -f /var/log/upstart/uwsgi.log --> Shows problems with your daemon upon start tail -f /var/log/nginx/error.log --> Shows permission problems when wsgi access is refused, often because /tmp/uwsgi.sock file is owned by root instead of www-data . In that case simply delete the sock file sudo rm /tmp/uwsgi.sock tail -f /myproject/error.log --> Shows errors thrown by uwsgi in your application This combination of log files helped me to figure out that I also had a bad import with Flask-Babel in my Flask application. Bad in that sense, that the way I utilized the library was falling back to the system's locale to determine the datetime format. 
File "/myproject/F11/Engineering/f11_app/templates/show_records.html", line 25, in block "body" <td>{{ record.record_date|format_date }}</td> File "./f11_app/filters.py", line 7, in format_date day = babel_dates.format_date(value, "EE") File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 459, in format_date return pattern.apply(date, locale) File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 702, in apply return self % DateTimeFormat(datetime, locale) File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 699, in __mod__ return self.format % other File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 734, in __getitem__ return self.format_weekday(char, num) File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 821, in format_weekday return get_day_names(width, context, self.locale)[weekday] File "/myproject/myproject-env/local/lib/python2.7/site-packages/babel/dates.py", line 69, in get_day_names return Locale.parse(locale).days[context][width]AttributeError: 'NoneType' object has no attribute 'days' This is the way I was using the Flask filter: import babel.dates as [email protected]_filter('format_date')def format_date(value): day = babel_dates.format_date(value, "EE") return '{0} {1}'.format(day.upper(), affix(value.day)) The strangest part is that this code is working perfectly fine within the dev environment (!). It works even fine when running the uwsgi as a root process from the command line. But it fails when ran by the www-data daemon. This must have something to do with how the locale is set, which Flask-Babel is trying to fall back to. When I changed the import like this, it all worked finally with the daemon: from flask.ext.babel import format_date @app.template_filter('format_date1')def format_date1(value): day = format_date(value, "EE") return '{0} {1}'.format(day.upper(), affix(value.day)) Hence be careful when using Eclipse/Aptana Studio that is trying to pick the right namespace for your classes in code. It can really turn ugly. It is now working perfectly fine as a uwsgi daemon on an Amazon Ec2 (Ubuntu 12.04) since 2 days. I hope this experience helps fellow python developers.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17099114', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92153/']}
jdg_75426
stackexchange
llm_judgeable_groundtruth_similarity
12254980
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to get the full path of the current process. I use _getcwd to get the current working directory. But it not includes file name. How can I get file name like: filename.exe ? Now provide the response and nothing else.
argv[0] of your main function is your filename. A simple code snippet: #include<stdio.h>int main(int argc, char** argv){ //access argv[0] here} If you cannot access/change code in main(), you can do something like this: std::string executable_name(){#if defined(PLATFORM_POSIX) || defined(__linux__) //check defines for your setup std::string sp; std::ifstream("/proc/self/comm") >> sp; return sp;#elif defined(_WIN32) char buf[MAX_PATH]; GetModuleFileNameA(nullptr, buf, MAX_PATH); return buf;#else static_assert(false, "unrecognized platform");#endif}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12254980', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1614028/']}
jdg_75427
stackexchange
llm_judgeable_groundtruth_similarity
14652133
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does anyone have experience creating an authentication mechanism with the new router in pre4? Here are some of my thoughts so far: In order to completely separate the view (Ember app) from the server (Rails app) I want to use token authentication. I will likely use Devise on the Rails server. I need something like a before_filter equivalent in the Ember app where I can check if there is a current user and if that user has an authentication token set. The Rails server will return the current auth token on every call. If it returns a null auth token the Ember app should detect this and transition to the unauthenticated state, redirecting to the login view. I suspect I should be using an Ember state machine for this but I'm not sure how to proceed. Anyone tackled this problem yet? Now provide the response and nothing else.
UPDATE: Like @DustMason says in his answer, check out the awesome embercasts for authentication best-practices. Client Side Authentication Part I Client Side Authentication Part II In order to completely separate the view (Ember app) from the server (Rails app) I want to use token authentication. I will likely use Devise on the Rails server. Makes sense. I need something like a before_filter equivalent in the Ember app where I can check if there is a current user and if that user has an authentication token set. You can add an enter hook on routes, this is roughly equivalent to a before_filter. But not sure that's the best place to check for an auth-token. The Rails server will return the current auth token on every call. Makes sense. We use cookie-auth and fetch current user profile by calling /api/me but either should work. If it returns a null auth token the Ember app should detect this and transition to the unauthenticated state, redirecting to the login view. Thing about this approach is that (unlike rails) it's not easy to "protect" access to a particular ember routes. And no matter what a user can always pop open JS console and enter whatever state they want. So instead of thinking "user can only get into this state if authenticated" consider "what if unauthenticated user somehow navigates to this route" I suspect I should be using an Ember state machine for this but I'm not sure how to proceed. Anyone tackled this problem yet? Our auth needs are pretty simple so we've not found the need for a state machine. Instead we have an isAuthenticated property on ApplicationController. We use this property in application.hbs to replace the main view with a login form when a user is not authenticated. {{if isAuthenticated}} {{render "topnav"}} {{outlet}}{{else}} {{render "login"}}{{/if}} From ApplicationRoute, we fetch user profile: App.ApplicationRoute = Ember.Route.extend({ model: function() { var profiles; profiles = App.Profile.find({ alias: 'me' }); profiles.on("didLoad", function() { return profiles.resolve(profiles.get("firstObject")); }); return profiles; }}); Then our ApplicationController computes it's isAuthenticated property based on the profile that was returned.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14652133', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1157156/']}
jdg_75428
stackexchange
llm_judgeable_groundtruth_similarity
381083
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Admittedly it's probably mostly in traditional BASICs (which require line numbers) that I've seen this. Take these examples (cribbed from Wikipedia): 10 PRINT "Hello, World!"20 END and 10 INPUT "What is your name: "; U$20 PRINT "Hello "; U$30 INPUT "How many stars do you want: "; N40 S$ = ""50 FOR I = 1 TO N60 S$ = S$ + "*"70 NEXT I80 PRINT S$90 INPUT "Do you want more stars? "; A$100 IF LEN(A$) = 0 THEN GOTO 90110 A$ = LEFT$(A$, 1)120 IF A$ = "Y" OR A$ = "y" THEN GOTO 30130 PRINT "Goodbye "; U$140 END In each of these examples, the END isn't doing anything, since the program will halt anyway as a result of reaching the end of the code. Neither is there a GOTO statement or anything referencing the line. In C and similar languages, I don't recall ever seeing return; immediately before the closing brace of a function, or exit(0); immediately before the closing brace of main . What is the origin of this habit? Have there been some BASIC dialects that require it? Or have some been taught that it's a good practice for whatever reason? Now provide the response and nothing else.
Because the program might contain subroutines or functions after the end statement . That is, in more complex programs, the logical end of the program may not be the end of the file: 10 PRINT "Hello, World!"15 GOSUB 3020 END30 PRINT "Goodbye World!"40 RETURN This example is a silly use of a subroutine, but it makes more sense when the subroutine contains complex logic that is called multiple times. In C and similar languages, I don't recall ever seeing return; immediately before the closing brace of a function, or exit(0); immediately before the closing brace of main You haven't seen many ANSI C programs then. Omitting the return from main resulted in "undefined behavior" before C99 .
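On the C side, a minimal sketch of what that implies in practice (a generic hello-world, nothing project-specific) is:

#include <stdio.h>

int main(void)
{
    printf("Hello, World!\n");
    return 0;  /* explicit return: needed for well-defined behavior before C99;
                  since C99, falling off the end of main() implies return 0 */
}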
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/381083', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/11863/']}
jdg_75429
stackexchange
llm_judgeable_groundtruth_similarity
21122541
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need help with a formatting issue. I wanted the amount of indentation for the HSC Privacy address section to match the HSC Security address section and something isn't working right. I thought I had the classes set correctly, but something is overriding the previous class I think and I don't know how to fix it. <link href="../privacy/internal/css/turquoise.css" media="screen" rel="stylesheet" type="text/css" /> <!--HSC Compliance Office webpage starts here--> <link rel="stylesheet" href="complianceCSS.css"> <h1 class="allpages">HIPAA</h1> <div id="hipaa"> <div class="privacy"> <!--HIPAA PRIVACY HEADER html--> <h4>HIPAA Privacy</h4> <img class="computer" alt="computer" src="computer.jpg" height="167"> <p>The federal Health Insurance Portability and Accountability Act of 1996 <b> (&#8220;HIPAA&#8221;)</b>&#160;protects the&#160;<a href="http://www.hhs.gov /ocr/privacy/hipaa/administrative/privacyrule/index.html" title="Privacy Rule">privacy</a>&#160;and confidentiality of an individual's health information.&#160; Known as &#8220;protected health information&#8221; or &#8220;PHI&#8221;, the health information <b>generally&#160;</b>cannot be used or disclosed unless the individual who is the subject of the PHI has given prior written&#160; <a href="http://hsc.unm.edu/admin/privacy/docs/Authorization to Use and Disclose Health Information_1.pdf" title="Authorization">authorization</a>&#160;or permission.</p> <p>For more information about Privacy issues at the University of New Mexico please contact the <a href="http://hsc.unm.edu/admin/privacy/index.html " target="_blank" title="Privacy Office">Privacy Office</a><p class="contact"><h5 style="text-decoration:underline;>Physical Location:</h5><br>University of New Mexico, North Campus <br>Basic Research Facility "BRF", Room B23B<br> <br> Mailing Address:</h5> <br>University of New Mexico <br>Health Sciences Center <br> 1 University of New Mexico <br> <br>Phone: (505) 272-1521 <br>Fax: (505)272-2461 <br><a href="mailto:[email protected]">[email protected]</a> </p> <p> </div> <div class="security"> <!--HIPAA SECURITY HEADER html--> <h4>HIPAA Security</h4> <p>The <a href="docs/secfacts.pdf">Security Rule Fact Sheet</a>&#160;sets standards for the security of electronic health information (ePHI). Published in 2003, it defines a series of administrative, technical, and physical security procedures for covered entities to use to ensure the confidentiality of ePHI.</p><p>For more information about Security issues at the University of New Mexico please contact: <p class="contact"><a href="http://hsc.unm.edu/admin/privacy/index.html "target="_blank" title="HSC Security Office">HSC Security Office</a><br>Barney Metzner <br>HSC Information Security Officer<br>(505) 272-1696<br><a href="mailto:[email protected]">[email protected]</a></p></div> </div> CSS: #hipaa {background-color: #eeeeee; width: 655px; border: solid; border color:#e9e9e9;margin-left: auto; margin-right: auto;}.computer {float: right; margin: 0px 20px 5px 20px;}.privacy {padding: 25px 0px 5px 40px;}.contact{padding: 0px 20px 0px 40px;}.security{padding: 10px 20px 20px 40px;} Now provide the response and nothing else.
TL;DR The input function in Python 2.7 evaluates whatever you enter as a Python expression. If you simply want to read strings, then use the raw_input function in Python 2.7, which will not evaluate the read strings. If you are using Python 3.x, raw_input has been renamed to input . Quoting the Python 3.0 release notes , raw_input() was renamed to input() . That is, the new input() function reads a line from sys.stdin and returns it with the trailing newline stripped. It raises EOFError if the input is terminated prematurely. To get the old behavior of input() , use eval(input()) In Python 2.7 , there are two functions which can be used to accept user inputs. One is input and the other one is raw_input . You can think of the relation between them as follows input = eval(raw_input) Consider the following piece of code to understand this better >>> dude = "thefourtheye">>> input_variable = input("Enter your name: ")Enter your name: dude>>> input_variable'thefourtheye' input accepts a string from the user and evaluates the string in the current Python context. When I type dude as input, it finds that dude is bound to the value thefourtheye and so the result of evaluation becomes thefourtheye and that gets assigned to input_variable . If I enter something else which is not there in the current Python context, it will fail with a NameError . >>> input("Enter your name: ")Enter your name: dummyTraceback (most recent call last): File "<input>", line 1, in <module> File "<string>", line 1, in <module>NameError: name 'dummy' is not defined Security considerations with Python 2.7's input : Since whatever the user types is evaluated, this poses security issues as well. For example, if you have already loaded the os module in your program with import os , and then the user types in os.remove("/etc/hosts") , this will be evaluated as a function call expression by Python and it will be executed. If you are executing Python with elevated privileges, the /etc/hosts file will be deleted. See how dangerous that could be? To demonstrate this, let's try to execute the input function again. >>> dude = "thefourtheye">>> input("Enter your name: ")Enter your name: input("Enter your name again: ")Enter your name again: dude Now, when input("Enter your name: ") is executed, it waits for the user input, and the user input is a valid Python function invocation, so that is also invoked. That is why we see the Enter your name again: prompt again. So you are better off with the raw_input function, like this input_variable = raw_input("Enter your name: ") If you need to convert the result to some other type, then you can use appropriate functions to convert the string returned by raw_input . For example, to read inputs as integers, use the int function, like shown in this answer .
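For instance, a minimal Python 2.7 sketch of the recommended pattern (the prompt texts are just placeholders) would be:

name = raw_input("Enter your name: ")        # always returned as a plain string, never evaluated
age = int(raw_input("Enter your age: "))     # convert explicitly; raises ValueError on bad input
print "Hello %s, you are %d years old" % (name, age)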
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/21122541', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2196989/']}
jdg_75430
stackexchange
llm_judgeable_groundtruth_similarity
22032136
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have seen this post , but being a new user I can't comment to ask for clarification. I'm using the justified nav tabs in Bootstrap and don't want them to stack on small screens, since I only have 2 tabs. I would like them to remain side by side, just shrink down to fit the screen. This is what I have: <!-- Nav tabs --><ul class="nav nav-tabs nav-justified" id="myTab"> <li class="active"><a href="#ratings" data-toggle="tab">Ratings</a></li> <li><a href="#reviews" data-toggle="tab">Reviews</a></li></ul><!-- Tab panes --><div class="tab-content"> <div class="tab-pane active" id="ratings">This is a rating section</div> <div class="tab-pane" id="reviews">This is for reviews. There should be some styling here.</div></div> As the related post suggests using media queries, I cannot seem to make that work. I had tried setting the following to target just small screens: @media (max-width: 768px) { .nav-justified > li {display: table-cell;}} I'm not sure what I'm doing wrong so any help is greatly appreciated. Now provide the response and nothing else.
The problem with some of the other solutions is that the tabs will lose their borders and other design elements. The following media queries more fully ensure the tabs match in both condensed and wider displays: @media (max-width: 768px) { .nav-justified > li { display: table-cell; width: 1%; } .nav-justified > li > a { border-bottom: 1px solid #ddd !important; border-radius: 4px 4px 0 0 !important; margin-bottom: 0 !important; }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22032136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2654108/']}
jdg_75431
stackexchange
llm_judgeable_groundtruth_similarity
19863132
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: SELECT dealing_record.* ,shares.* ,transaction_type.*FROM shares INNER JOIN shares ON shares.share_ID = dealing_record.share_id INNER JOIN transaction_type ON transaction_type.transaction_type_id = dealing_record.transaction_type_id; The above SQL code produces the desired output but with a couple of duplicate columns. Also, with incomplete display of the column headers. When I change the linesize 100 the headers shows but data displayed overlaps I have checked through similar questions but I don't seem to get how to solve this. Now provide the response and nothing else.
You have duplicate columns because you're asking the SQL engine for columns that show the same data (with SELECT dealing_record.* and so on), and hence the duplicates. For example, the transaction_type.transaction_type_id column and the dealing_record.transaction_type_id column will have matching rows (otherwise you won't see anything with an INNER JOIN ) and you will see those duplicates. If you want to avoid this problem or, at least, reduce the risk of having duplicates in your results, improve your query by listing only the columns you really need, as @ConradFrix already said. An example would be this: SELECT dealing_record.Name ,shares.ID ,shares.Name ,transaction_type.Name ,transaction_type.ID FROM dealing_record INNER JOIN shares ON shares.share_ID = dealing_record.share_id INNER JOIN transaction_type ON transaction_type.transaction_type_id = dealing_record.transaction_type_id;
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19863132', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1592334/']}
jdg_75432
stackexchange
llm_judgeable_groundtruth_similarity
1856597
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Extends . I have: struct Coord{ int row, col ; bool operator<( const Coord& other ) const { return row < other.row && col < other.col ; }} ; I'm trying to create a map<Coord, Node*> , where you can look up a Node* by Coord . The problem is, it has bugs. Lookups into the map<Coord, Node*> by Coord are returning the wrong ones. I'm having difficulty figuring out if this is appropriate or not. Wikipedia says, map [keys] requires a strict weak ordering . Have I done this wrong? Is there a way to make it work, or should keys for a map be simple values that can be "strictly ordered"? Basically the question is what is required for a custom struct to work as a key for my std::map? Now provide the response and nothing else.
Yes you could very well have a problem with strict weak ordering. Odds are it's not working like you'd expect. Consider: bool operator<( const Coord& other ) const { return row < other.row && col < other.col ; } obj1 (this)row: 2col: 3 obj2row: 3col: 2 obj1 < obj2? => false ok well then: obj2 < obj1? => false The only conclusion is that they must be equal (based on your < operator). Since this is a map, and keys are unique, both keys resolve to the same spot. This behavior may or may not be what you expect, but it sounds like it probably isn't. What you need is to establish a precedence between row and col so that < really works like you'd expect: bool operator<( const Coord& other ) const { // look at row first, if row is equal, check column. if (row < other.row) { return true; } else if (row == other.row) { return col < other.col ; } return false; }
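As a side note, a common way to get this lexicographic comparison right with less room for error (a sketch assuming C++11 and the <tuple> header) is std::tie:

#include <tuple>

struct Coord
{
    int row, col;

    bool operator<(const Coord& other) const
    {
        // Lexicographic order: compare row first, fall back to col on ties.
        return std::tie(row, col) < std::tie(other.row, other.col);
    }
};

This gives exactly the strict weak ordering std::map requires, so a map<Coord, Node*> keyed on Coord behaves as expected.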
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1856597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/111307/']}
jdg_75433
stackexchange
llm_judgeable_groundtruth_similarity
17191945
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an ArrayBuffer which contains a string encoded using UTF-8 and I can't find a standard way of converting such ArrayBuffer into a JS String (which I understand is encoded using UTF-16). I've seen this code in numerous places, but I fail to see how it would work with any UTF-8 code points that are longer than 1 byte. return String.fromCharCode.apply(null, new Uint8Array(data)); Similarly, I can't find a standard way of converting from a String to a UTF-8 encoded ArrayBuffer . Now provide the response and nothing else.
function stringToUint(string) { var string = btoa(unescape(encodeURIComponent(string))), charList = string.split(''), uintArray = []; for (var i = 0; i < charList.length; i++) { uintArray.push(charList[i].charCodeAt(0)); } return new Uint8Array(uintArray);}function uintToString(uintArray) { var encodedString = String.fromCharCode.apply(null, uintArray), decodedString = decodeURIComponent(escape(atob(encodedString))); return decodedString;} I have put together, with some help from the internet, these little functions; they should solve your problem! Here is the working JSFiddle . EDIT : Since the source of the Uint8Array is external and you can't use atob , you just need to remove it ( working fiddle ): function uintToString(uintArray) { var encodedString = String.fromCharCode.apply(null, uintArray), decodedString = decodeURIComponent(escape(encodedString)); return decodedString;} Warning: escape and unescape are deprecated and have been removed from web standards. See this.
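For what it's worth, in environments that support the Encoding API (modern browsers and Node.js), there is a simpler route; this is a sketch of the alternative, not part of the original fiddles:

// UTF-8 string -> Uint8Array
function stringToUint8(str) {
    return new TextEncoder().encode(str);
}

// Uint8Array (or ArrayBuffer) -> string
function uint8ToString(buf) {
    return new TextDecoder('utf-8').decode(buf);
}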
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17191945', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/363606/']}
jdg_75434
stackexchange
llm_judgeable_groundtruth_similarity
62782648
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My App use the file paths of images provided by Environment.getExternalStorageDirectory() to create albums of photos, but with Android 11 I won't be able to access directly files . According to the Android developers documentation they recently introduced the MANAGE_EXTERNAL_STORAGE permission, but I didn't understand if adding this permission I'm able to continue to access file by Environment or not. I tried my application on an Android 11 Virtual device and it seems to work perfectly even without requesting the MANAGE_EXTERNAL_STORAGE permission! Reading the documentation on Android Developers, it seems that the applications that uses the File API for accessing Photos and Medias only locations can continue to work, but I'am not sure . Is there anyone who better understood the Android Documentation??? Now provide the response and nothing else.
Android 11 If you are targeting Android 11 ( targetSdkVersion 30 ) then you require the following permissions in AndroidManifest.xml for modifying and document access. <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" /><uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" android:maxSdkVersion="28" /><uses-permission android:name="android.permission.MANAGE_EXTERNAL_STORAGE" /> For Android 10 you place the following line in your AndroidManifest.xml tag android:requestLegacyExternalStorage="true" the method below checks if the permission is allowed or denied private boolean checkPermission() { if (SDK_INT >= Build.VERSION_CODES.R) { return Environment.isExternalStorageManager(); } else { int result = ContextCompat.checkSelfPermission(PermissionActivity.this, READ_EXTERNAL_STORAGE); int result1 = ContextCompat.checkSelfPermission(PermissionActivity.this, WRITE_EXTERNAL_STORAGE); return result == PackageManager.PERMISSION_GRANTED && result1 == PackageManager.PERMISSION_GRANTED; }} The below method can be used for requesting a permission in android 11 or below private void requestPermission() { if (SDK_INT >= Build.VERSION_CODES.R) { try { Intent intent = new Intent(Settings.ACTION_MANAGE_APP_ALL_FILES_ACCESS_PERMISSION); intent.addCategory("android.intent.category.DEFAULT"); intent.setData(Uri.parse(String.format("package:%s",getApplicationContext().getPackageName()))); startActivityForResult(intent, 2296); } catch (Exception e) { Intent intent = new Intent(); intent.setAction(Settings.ACTION_MANAGE_ALL_FILES_ACCESS_PERMISSION); startActivityForResult(intent, 2296); } } else { //below android 11 ActivityCompat.requestPermissions(PermissionActivity.this, new String[]{WRITE_EXTERNAL_STORAGE}, PERMISSION_REQUEST_CODE); }} Handling permission callback for Android 11 or above versions @Overrideprotected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == 2296) { if (SDK_INT >= Build.VERSION_CODES.R) { if (Environment.isExternalStorageManager()) { // perform action when allow permission success } else { Toast.makeText(this, "Allow permission for storage access!", Toast.LENGTH_SHORT).show(); } } }} Handling permission callback for OS versions below Android 11 @Overridepublic void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) { switch (requestCode) { case PERMISSION_REQUEST_CODE: if (grantResults.length > 0) { boolean READ_EXTERNAL_STORAGE = grantResults[0] == PackageManager.PERMISSION_GRANTED; boolean WRITE_EXTERNAL_STORAGE = grantResults[1] == PackageManager.PERMISSION_GRANTED; if (READ_EXTERNAL_STORAGE && WRITE_EXTERNAL_STORAGE) { // perform action when allow permission success } else { Toast.makeText(this, "Allow permission for storage access!", Toast.LENGTH_SHORT).show(); } } break; }} NOTE: MANAGE_EXTERNAL_STORAGE is a special permission only allowed for few apps like Antivirus, file manager, etc. You have to justify the reason while publishing the app to PlayStore.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/62782648', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13855061/']}
jdg_75435
stackexchange
llm_judgeable_groundtruth_similarity
34261938
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: First of all I want to mention that I know how isNaN() and Number.isNaN() work. I am reading The Definite Guide by David Flanagan and he gives an example for how to check if the value is NaN : x !== x This will result in true if and only if x is NaN . But now I have a question: why does he use strict comparison? Because it seems that x != x behaves the same way. Is it safe to use both versions, or I am missing some value(s) in JavaScript that will return true for x !== x and false for x != x ? Now provide the response and nothing else.
First, let me point out that NaN is a very special value: By definition, it's not equal to itself. That comes from the IEEE-754 standard that JavaScript numbers draw on. The "not a number" value is never equal to itself, even when the bits are an exact match. (Which they aren't necessarily in IEEE-754, it allows for multiple different "not a number" values.) Which is why this even comes up; all other values in JavaScript are equal to themselves, NaN is just special. ...am I missing some value in JavaScript that will return true for x !== x and false for x != x? No, you're not. The only difference between !== and != is that the latter will do type coercion if necessary to get the types of the operands to be the same. In x != x , the types of the operands are the same, and so it's exactly the same as x !== x . This is clear from the beginning of the definition of the Abstract Equality Operation : ReturnIfAbrupt(x). ReturnIfAbrupt(y). If Type(x) is the same as Type(y), then Return the result of performing Strict Equality Comparison x === y. ... The first two steps are basic plumbing. So in effect, the very first step of == is to see if the types are the same and, if so, to do === instead. != and !== are just negated versions of that. So if Flanagan is correct that only NaN will give true for x !== x , we can be sure that it's also true that only NaN will give true for x != x . Many JavaScript programmers default to using === and !== to avoid some pitfalls around the type coercion the loose operators do, but there's nothing to read into Flanagan's use of the strict vs. loose operator in this case.
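A quick illustration that you can paste into any JS console:

var x = NaN;
console.log(x !== x);            // true:  only NaN is unequal to itself
console.log(x != x);             // true:  the loose form behaves identically here
var y = 42;
console.log(y !== y, y != y);    // false false: every other value equals itself
console.log(Number.isNaN(x));    // true:  ES2015's direct check, if it is available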
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34261938', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1651298/']}
jdg_75436
stackexchange
llm_judgeable_groundtruth_similarity
15670933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm developing an application using Java Opencv-2.4.4 and swing GUI. Problem is that I'm unable to find any solution, that shows efficient way how to print processed image (saved in Mat object) to java swing GUI. For this moment I'm using this clumsy solution: javax.swing.JLabel outputImage;outputImage.setIcon(new javax.swing.ImageIcon("/home/username/Output.png"));private void sliderStateChanged(javax.swing.event.ChangeEvent evt) { . . Mat canny; // Here is saved what I want to plot String filename = "/home/username/Output.png"; Highgui.imwrite(filename, canny); // write to disk outputImage.setIcon(new ImageIcon(ImageIO.read(new File(filename)))); //update Icon . .} When user changes some values, inputs etc ., in GUI I have to overwrite Output.png on disk and update jLabel with new image from disk. Is there any more elegant / efficient solution to this ? Is it posible to plot or convert Mat object directly to Canvas or Image or anything that is printable as image in swing ? Now provide the response and nothing else.
jpeg encoding is interesting, but there are a couple of problems: it is not a lossless format, so you will lose image data when compressing; and it takes quite a while (around 6 to 10 times longer than the suggested one below). public Image toBufferedImage(Mat m){ int type = BufferedImage.TYPE_BYTE_GRAY; if ( m.channels() > 1 ) { type = BufferedImage.TYPE_3BYTE_BGR; } int bufferSize = m.channels()*m.cols()*m.rows(); byte [] b = new byte[bufferSize]; m.get(0,0,b); // get all the pixels BufferedImage image = new BufferedImage(m.cols(),m.rows(), type); final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData(); System.arraycopy(b, 0, targetPixels, 0, b.length); return image; }
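Then, a minimal sketch of pushing the converted image into your existing label (assuming outputImage is the JLabel from your code, canny is the processed Mat, and the usual javax.swing imports are in place) would be:

// Convert the processed Mat once, then update the label on the Event Dispatch Thread
final Image img = toBufferedImage(canny);
SwingUtilities.invokeLater(new Runnable() {
    @Override
    public void run() {
        outputImage.setIcon(new ImageIcon(img));
        outputImage.repaint();
    }
});

This avoids the round trip through Highgui.imwrite and the disk entirely.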
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15670933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2215859/']}
jdg_75437
stackexchange
llm_judgeable_groundtruth_similarity
112735
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: At every moment in time, there are two points on the Earth's surface that have the same $\lbrace x, y, z, ... \rbrace$...? What is the strongest, most impressive statement one can make here? The Borsuk-Ulam Theorem applies, but I am uncertain of its full implications. Could one say that the two points are(1) separated by a specific geodesic distance, (2) have the same temperature, and (3) have the same barometric pressure? For example...? I pose this question for its pedagocial import, but it clearly follows from known theorems. To what extent do these results extend to $\mathbb{R}^d$ for $d>3$?Thank you for your help! ( Wikipedia image ) Now provide the response and nothing else.
One of the standard generalizations is Knaster's conjecture: for every function $f: \mathbb{S}^{n-1}\rightarrow \mathbb{R}^m, m\lt n,$ and $k=n-m+1$ points $p_1, \dots, p_k \in \mathbb{S}^{n-1},$ does there always exist a rotation $\rho \in SO(n)$ such that $f(\rho(p_1)) = \dots = f(\rho(p_k))$? That this is true for $k=2$ is a theorem of H. Hopf (which generalizes Borsuk-Ulam). It turns out that Knaster's conjecture is true for some $m, n$ and false for others. See this nice paper by Hinrich and Richter for more results and references.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/112735', 'https://mathoverflow.net', 'https://mathoverflow.net/users/6094/']}
jdg_75438
stackexchange
llm_judgeable_groundtruth_similarity
16970824
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I implement a JTable with different types of cell editors depending on the type of input a particular row is displaying? For example some rows could be checkboxes (for boolean types) some rows could be comboboxes (if I want to provide a fixed set of options to choose from) some rows could be text fields (if I allow arbitrary data). Currently I have implemented the AbstractTableModel , which takes a set of custom field objects from my object and adds rows to the table. I would like to further customize my table by setting specific types of cells. I can determine which cell type to use based on the type of field that row contains. The table model is dynamically created at run-time. Now provide the response and nothing else.
some rows could be checkboxes (for boolean types) some rows could be comboboxes (if I want to provide a fixed set of options to choose from) some rows could be text fields (if I allow arbitrary data). for example import java.awt.EventQueue;import java.util.Date;import javax.swing.JFrame;import static javax.swing.JFrame.EXIT_ON_CLOSE;import javax.swing.JScrollPane;import javax.swing.JTable;import javax.swing.table.TableCellEditor;import javax.swing.table.TableCellRenderer;public class EachRowRendererEditor { private JFrame frame = new JFrame("EachRowRendererEditor"); private String[] columnNames = {"Type", "Value"}; private Object[][] data = { {"String", "I'm a string"}, {"Date", new Date()}, {"Integer", new Integer(123)}, {"Double", new Double(123.45)}, {"Boolean", Boolean.TRUE}}; private JScrollPane scrollPane; private JTable table; public EachRowRendererEditor() { table = new JTable(data, columnNames) { private static final long serialVersionUID = 1L; private Class editingClass; @Override public TableCellRenderer getCellRenderer(int row, int column) { editingClass = null; int modelColumn = convertColumnIndexToModel(column); if (modelColumn == 1) { Class rowClass = getModel().getValueAt(row, modelColumn).getClass(); return getDefaultRenderer(rowClass); } else { return super.getCellRenderer(row, column); } } @Override public TableCellEditor getCellEditor(int row, int column) { editingClass = null; int modelColumn = convertColumnIndexToModel(column); if (modelColumn == 1) { editingClass = getModel().getValueAt(row, modelColumn).getClass(); return getDefaultEditor(editingClass); } else { return super.getCellEditor(row, column); } } // This method is also invoked by the editor when the value in the editor // component is saved in the TableModel. The class was saved when the // editor was invoked so the proper class can be created. @Override public Class getColumnClass(int column) { return editingClass != null ? editingClass : super.getColumnClass(column); } }; table.setPreferredScrollableViewportSize(table.getPreferredSize()); scrollPane = new JScrollPane(table); frame.add(scrollPane); frame.setDefaultCloseOperation(EXIT_ON_CLOSE); frame.pack(); frame.setLocationRelativeTo(null); frame.setVisible(true); } public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { @Override public void run() { EachRowRendererEditor eeee = new EachRowRendererEditor(); } }); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16970824', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/536607/']}
jdg_75439
stackexchange
llm_judgeable_groundtruth_similarity
2233882
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f: \Bbb R \to \Bbb R$ be an increasing function such that $f \circ f=f$, where $\circ$ denotes the composition of two functions. Find $f$. Now provide the response and nothing else.
Supposing that you mean strictly increasing : $ f \circ f=f \Leftrightarrow f(f(x)) = f(x)$ for all $x$. But $f: \Bbb R \to \Bbb R$ is a strictly increasing function, which means it is "1-1" (injective) as well. So, applying injectivity to $f(f(x)) = f(x)$, we get $f(x) = x$, i.e. $f$ is the identity, which is indeed an increasing function $\Bbb R \to \Bbb R$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2233882', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/303689/']}
jdg_75440