source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
20176
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: The Eilenberg-Maclane space $K(\mathbb{Z}/2\mathbb{Z}, 1)$ has a particularly simple cell structure: it has exactly one cell of each dimension. This means that its "Euler characteristic" should be equal to$$1 - 1 + 1 - 1 \pm ...,$$or Grandi's series. Now, we "know" (for example by analytic continuation) that this sum is morally equal to $\frac{1}{2}$. One way to see this is to think of $K(\mathbb{Z}/2\mathbb{Z}, 1)$ as infinite projective space, e.g. the quotient of the infinite sphere $S^{\infty}$ by antipodes. Since $S^{\infty}$ is contractible, the "orbifold Euler characteristic" of the quotient by the action of a group of order two should be $\frac{1}{2}$. More generally, following John Baez $K(G, 1)$ for a finite group $G$ should be "the same" (I'm really unclear about what notion of sameness is being used here) as $G$ thought of as a one-object category, which has groupoid cardinality $\frac{1}{|G|}$. In particular, $K(\mathbb{Z}/n\mathbb{Z}, 1)$ should have groupoid cardinality $\frac{1}{n}$. I suspect that $K(\mathbb{Z}/n\mathbb{Z}, 1)$ has $1, n-1, (n-1)^2, ...$ cells of each dimension, hence orbifold Euler characteristic $$\frac{1}{n} = 1 - (n-1) + (n-1)^2 \mp ....$$Unfortunately, I don't actually know how to construct Eilenberg-Maclane spaces... Question 1a: How do I construct $K(\mathbb{Z}/n\mathbb{Z}, 1)$, and does it have the cell structure I think it has? (I've been told that one can write down the cell structure of $K(G, 1)$ for a finitely presented group $G$ explicitly, but I would really appreciate a reference for this construction.) Question 2: $K(\mathbb{Z}/2\mathbb{Z}, 1)$ turns out to be "the same" as the set of all finite subsets of $(0, 1)$, suitably interpreted; the finite subsets of size $n$ form the cell of dimension $n$. Jim Propp and other people who think about combinatorial Euler characteristic would write this as $\chi(2^{(0, 1)}) = 2^{\chi(0, 1)}$. Is it true more generally that $K(\mathbb{Z}/n\mathbb{Z}, 1)$ is "the same" as the set of all functions $(0, 1) \to [n]$, suitably interpreted? Question 3: What notion of "sameness" makes the above things I said actually true? Question 4: Let $G$ be a finite group and let $K(G, 1)$ be constructed using the standard construction I asked about in Question 1. If $c_n$ denotes the number of cells of dimension $n$, let $f_G(z) = \sum_{n \ge 0} c_n z^n$. Can $f_G$ always be analytically continued to $z = -1$ so that $f_G(-1) = \frac{1}{|G|}$? Now provide the response and nothing else.
There are multiple possible cell structures on K(Z/n,1). One is generic. For any finite group G there is a model for BG that has $(|G|-1)^k$ new simplices in each nonzero degree $k$. This is the standard simplicial bar construction of K(G,1). This gives you that BG has Euler characteristic 1/|G|, if you like. One is more specific. There is another cell structure on K(Z/n,1), viewing it as a union of generalized lens spaces, that has exactly one cell in each degree. This is a topological avatar of the "simple" resolution of Z by free Z[Z/n]-modules. Obviously this doesn't give you the Euler characteristic argument you're seeking - one needs to keep track of more intricate information about the cell attachments in order to extract something.
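For concreteness, here is the formal computation behind that Euler characteristic claim (this worked line is mine, not the answerer's, and it sums the divergent series by the same analytic-continuation convention the question uses): with $(|G|-1)^k$ cells in degree $k$, the alternating sum is a geometric series,
$$\chi(BG) \;=\; \sum_{k \ge 0} (-1)^k (|G|-1)^k \;=\; \frac{1}{1+(|G|-1)} \;=\; \frac{1}{|G|},$$
which recovers the groupoid cardinality $1/|G|$ from the question.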
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/20176', 'https://mathoverflow.net', 'https://mathoverflow.net/users/290/']}
jdg_74241
stackexchange
llm_judgeable_groundtruth_similarity
55070857
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Working on an automation test using codeceptjs/selenium, I need to read the text of the nodes of a select drop down without knowing what they will be ahead of time and excluding the greyed out default option (since the app will not function until a choice is made). <select id="itemname1"> <option class="greydefault">How Many?</option> <option>6</option> <option>8</option> <option>10</option> <option>12</option></select><select id="itemname2"> <option class="greydefault">What Type?</option> <option>a</option> <option>b</option> <option>c</option> <option>d</option></select> Given the above simple dom, question is how to write an xpath that grab all 4 of the option nodes that contain 6 8 10 and 12. And another xpath that returns all 4 of the option nodes a b c and d. Here are my attempts that don't work for me: .//option/preceding::option[text()="What Type?"]/following::option[text()="How Many?"].//option/following::option[text()="How Many?"]/preceding::option[text()="What Type?"].//option/preceding::option[text()="What Type?"] and following::option[text()="How Many?"].//option/[preceding::option[text()="What Type?"] and following::option[text()="How Many?"]] The first 2 attempts returns only the node "How many" for me, and the next attempts are invalid. The output of the xpath query should return each option node other than the default node. That xpath is put into a function that reads the texts of the nodes, it looks like this: getElementsText(locator) { let driver = this.helpers.Protractor.browser; return driver.element.all(locator).getAttribute("textContent").then((result) => { return result.toString().split(','); }); } Now provide the response and nothing else.
Join a data.table to a subset of itself, by group, to get values from rows matching non-equal criteria.

Summary: Below I show 5 working data.table solutions that were candidates to performance test against the OP's actual data set (1.4M records). All 5 solutions use "non-equi" joins (using inequality to compare columns for the join) in the on clause. Each solution is just a small progressive code change, so it should be easy to follow along and compare different data.table options and syntax choices.

Approach: To work through data.table syntax for this I broke it into the following steps for the OP's problem:

Join the dt to a subset of itself (or another data.table for that matter).
Select (and rename) the columns you want from either dt or the subset.
Define the join criteria based on columns from dt compared to columns in the subset, including using "non-equi" (non-equal) comparisons.
Optionally define whether first or last match should be selected when multiple matching records are found in the subset.

Solution 1:

# Add row numbers to all records in dt (only because you
# have criteria based on comparing sequential rows):
dt[, row := .I]

# Compute result columns (then standard assignment into dt using <-)
dt$found_date <-
  dt[code=='p'][dt,                # join dt to the data.table matching your criteria, in this case dt[code=='p']
    .( x.date_up ),                # columns to select, x. prefix means columns from dt[code=='p']
    on = .(id==id, row > row, date_up > date_down),  # join criteria: dt[code=='p'] fields on LHS, main dt fields on RHS
    mult = "first"]                # get only the first match if multiple matches

Note in the join expressions above: i in this case is your main dt. This way you get all records from your main data.table. x is the subset (or any other data.table) from which you want to find matching values.

Result matches requested output:

dt
    id code  date_down    date_up row found_date
 1:  1    p 2019-01-01 2019-01-02   1       <NA>
 2:  1    f 2019-01-02 2019-01-03   2       <NA>
 3:  2    f 2019-01-02 2019-01-02   3       <NA>
 4:  2    p 2019-01-03       <NA>   4       <NA>
 5:  3    p 2019-01-04       <NA>   5       <NA>
 6:  4 <NA> 2019-01-05 2019-01-05   6       <NA>
 7:  5    f 2019-01-07 2019-01-08   7 2019-01-08
 8:  5    p 2019-01-07 2019-01-08   8 2019-01-09
 9:  5    p 2019-01-09 2019-01-09   9       <NA>
10:  6    f 2019-01-10 2019-01-10  10 2019-01-11
11:  6    p 2019-01-10 2019-01-10  11 2019-01-11
12:  6    p 2019-01-10 2019-01-11  12       <NA>

Note: You may remove the row column by doing dt[, row := NULL] if you like.

Solution 2: Identical logic as above to join and find the result columns, but now using "assign by reference" := to create found_date in dt:

dt[, row := .I]                  # add row numbers (as in all the solutions)

# Compute result columns (then assign by reference into dt using :=)
# dt$found_date <-
dt[, found_date :=               # assign by reference to dt$found_date
  dt[code=='p'][dt,
    .( x.date_up ),
    on = .(id==id, row > row, date_up > date_down),
    mult = "first"]]

In Solution 2, the slight variation to assign our results "by reference" into dt should be more efficient than Solution 1. Solution 1 calculated results the exact same way - the only difference is Solution 1 used standard assignment <- to create dt$found_date (less efficient).

Solution 3: Like Solution 2 but now using .(.SD) in place of dt to refer to the original dt without naming it directly.

dt[, row := .I]                  # add row numbers (as in all the solutions)
setkey(dt, id, row, date_down)   # set key for dt

# For all rows of dt, create found_date by reference :=
dt[, found_date :=
  # dt[code=='p'][dt,
  dt[code=='p'][.(.SD),          # our subset (or another data.table), joined to .SD (referring to original dt)
    .( x.date_up ),
    on = .(id==id, row > row, date_up > date_down),
    mult = "first"] ]

.SD above references back to the original dt that we are assigning back into. It corresponds to the subset of the data.table that contains the rows selected in the first dt[, which is all the rows because we didn't filter it.

Note: In Solution 3 I used setkey() to set the key. I should have done that in Solution 1 & Solution 2 - however I didn't want to change those solutions after @OllieB tested them successfully.

Solution 4: Like Solution 3 but using .SD once more than previously. Our main data.table name dt now appears only once across our entire expression!

# add row column and setkey() as previous solutions
dt[, found_date :=
  # dt[code=='p'][.(.SD),
  .SD[code=='p'][.SD,            # .SD in place of dt at left! Also, removed .() at right (not sure on this second change)
    .(found_date = x.date_up),
    on = .(id==id, row > row, date_up > date_down),
    mult = "first"]]

With the change above our data.table name dt appears only once. I like that a lot because it makes it easy to copy, adapt and reuse elsewhere. Also note: Where I'd previously used .(.SD) I've now removed the .() around .SD because it doesn't appear to require it. However for that change I'm not sure if it has any performance benefit or whether it's data.table preferred syntax. I would be grateful if anyone can add a comment to advise on that point.

Solution 5: Like previous solutions but making use of by to explicitly group subsets over operations when joining.

# add row column and setkey() as previous solutions
dt[, found_date :=
  .SD[code=='p'][.SD,
    .(found_date = x.date_up),
    # on = .(id==id, row > row, date_up > date_down),
    on = .(row > row, date_up > date_down),  # removed the id column from here
    mult = "first"]
  , by = id]                     # added by = id to group the .SD subsets

On this last solution I changed it to use the by clause to explicitly group the .SD subsets on id.

Note: Solution 5 did not perform well against OllieB's actual data compared to Solutions 1 - 4. However, testing my own mock data I found that Solution 5 could perform well when the number of unique groups from the id column was low:
- With only 6 groups in 1.5M records this solution worked just as fast as the others.
- With 40k groups in 1.5M records I saw similar poor performance as OllieB reported.

Results

Solutions 1 - 4 performed well: For 1.45M records in OllieB's actual data, each of Solutions 1 to 4 took 2.42 seconds or less "elapsed" time according to OllieB's feedback. Solution 3 appears to have worked fastest for OllieB, having "elapsed=1.22" seconds. I personally prefer Solution 4 because of the simpler syntax.

Solution 5 (using the by clause) performed poorly, taking 577 seconds in OllieB's testing on his real data.

Versions used: data.table version 1.12.0, R version 3.5.3 (2019-03-11).

Possible further improvements: Changing the date fields to integer may help join more efficiently. See as.IDate() to convert dates to integer in data.tables. The setkey() step may no longer be needed: as explained here by @Arun, due to on invoking [often] more efficient secondary indices and auto indexing.

References to data.table: As part of your question you've asked for "any good references to data.table".
I've found the following helpful: data.table Getting started Wiki on GitHub is the place to start. In particular for this problem it's worth reading: What does .SD stand for in data.table in R The HTML vignette for Secondary indices and auto indexing Importantly note this answer by @Arun which explains "the reason for implementing on= argument" suggests it may no longer be necessary to set keys any more: It is therefore essential to figure out if the time spent on reordering the entire data.table is worth the time to do a cache-efficient join/aggregation. Usually, unless there are repetitive grouping / join operations being performed on the same keyed data.table, there should not be a noticeable difference. In most cases therefore, there shouldn't be a need to set keys any more. We recommend using on= wherever possible, unless setting key has a dramatic improvement in performance that you'd like to exploit. This SO question seems to be the hub of information about the different data.table joins: How to join (merge) data frames (inner, outer, left, right)? Finally, data.table cheat sheet is an excellent reference (from link found on the data.table Getting started Wiki on GitHub). As always I'm grateful if anyone has suggestions as perhaps this can be improved further. Please feel free to comment, correct or post other solutions if you can add anything.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55070857', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_74242
stackexchange
llm_judgeable_groundtruth_similarity
1873482
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is another way to ask if Wiles's proof can be converted into a "purely number-theoretic" one. If there is no proof in Peano Arithmetic then there should be non-standard integers that satisfy the Fermat equation. I vaguely remember that most proofs in analytic number theory are known to be convertible into elementary ones, probably because some version of predicative analysis is a conservative extension of arithmetic. But Wiles's proof uses not as much analysis as high end algebraic geometry, so I am not sure. Is it convertible into elementary arithmetic, is that even known? Now provide the response and nothing else.
I am going to respond to two questions quoted below, which come from this comment . Here "TP" means "transfer principle". I've switched the order of the questions. ... why it does not prove that any sentence for which TP is preserved and which is true in the standard model, must be true in all models, and hence provable in PA. Here is an informal summary of how the hyperreal construction works. If we begin with any structure $M$, with an associated formal language, and take an ultrapower, we will obtain a structure $M^*$. By Los's theorem, $M^*$ will satisfy the same sentences as $M$ in that formal language. In the cases of interest, $M$ has an internal real line $\mathbb{R}$ and an internal semiring of naturals $\mathbb{N}$. The ultraproduct $M^*$ with then have an internal "hyperreal" line $\mathbb{R}^*$ and an internal semiring of "hypernaturals" $\mathbb{N}^*$. If we begin with different models $M_1$ and $M_2$ (e.g. if one of them is nonstandard), we simply obtain different models $M_1^*$ and $M_2^*$. Sentences from the appropriate language are true in $M_1$ if and only if they are true in $M_1^*$, and true in $M_2$ if and only if they are true in $M_2^*$, but if there was no connection between $M_1$ and $M_2$ to begin with then the ultrapower construction can't create one. The transfer principle only holds between each individual model $M$ and its ultrapower $M^*$. At this point I am most curious as to why a naive transplantation of the TP argument to "nonstandard" embedding does not (indirectly) prove that FLT holds in all models of PA, even if it does not provide any means of (directly) converting Wiles's proof into PA. The original question is right - "Wiles's proof uses not as much analysis as high end algebraic geometry". This is bad for nonstandard analysis, though, because when we make the hyperreal line we normally begin with a structure $M$ that is just barely more than the field of real numbers. This is good if we want to work with the kinds of constructions used in elementary analysis, but not as useful if we need to work with more general set-theoretic constructions. Wiles' proof as literally read uses various set-theoretic constructions of "universes". So, to try to approach that in a nonstandard setting, we would want to start with a model $M$ that is much more than just a copy of the real line. That would not seem to help us show that FLT holds in every model of PA. The thing about the proof is that the "literal" reading is too strong. I am no expert in the algebraic geometry used, but I followed the discussions in several forums, and here is the situation as I understand it. The proof relies on several general lemmas which, to be proved in utmost generality, were proved using very strong methods. However, in concrete cases such as FLT only weaker versions of the lemmas are needed, and experts seem to believe that the weaker versions should be provable in PA. But actually working out the proof would require a lot of effort to prove in PA the special, weaker cases of all the necessary lemmas and then combine them to get FLT. This is very analogous in my mind to the fact that in logic we often prove things using strong axioms, but experts in logic recognize that these things are also provable, in concrete cases, in much weaker systems. We don't generally dwell on that, or even point it out, unless there is a specific reason. Indeed, the number of things which I know are provable in PA is much larger than the number for which I have ever written out a proof in PA. 
Colin McLarty has published several papers on the axioms needed for FLT. You could look at What does it take to prove Fermat's Last Theorem? from the Bulletin of Symbolic Logic which is relatively accessible.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1873482', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/152568/']}
jdg_74243
stackexchange
llm_judgeable_groundtruth_similarity
40943
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A recent (February 2018) report by the US government Council of Economic Advisors makes the following argument (my highlights): The United States both conducts and finances much of the biopharmaceutical innovation that the world depends on, allowing foreign governments to enjoy bargain prices for such innovations. This indicates that our current policies are neither wise nor just. Simply put, other nations are free-riding, or taking unfair advantage of the United States’ progress in this area. An article in the International Business Times argues something similar: Medicines in the U.S. frequently cost significantly more than the same versions in other advanced countries. ... But what often goes overlooked in these discussions is the fact that pricey medicines in the U.S. actually subsidize research and development for the rest of the world , and for all the proposals to lower drug prices in the U.S., a solution to this particular imbalance is nowhere in sight. Though the same article reproduces evidence that the industry spends far more on marketing than on R&D (which could be interpreted as an argument that it isn't R&D that is being subsidised but excessive marketing, an issue addressed in this question here ). Update According to some sources (eg The Daily Telegraph ) the belief that US prices for drugs subsidise the world is going to impact US trade policy: Alex Azar, the US Health and Human Services Secretary, has said Washington will use its muscle to push up drug prices abroad, to lower the cost paid by patients in the United States. "On the foreign side, we need to, through our trade negotiations and agreements, pressure them," Azar said on CNBC. "And so we pay less, they pay more. It shouldn't be a one-way ratchet. We all have some skin in this game." Do US prices for pharmaceuticals subsidise R&D that benefits the rest of the world? Now provide the response and nothing else.
tl:dr Yee-es?!? The pharmaceuticals in the US are subsidising something. But what that is is certainly not anything that benefits the rest of the world. The sentences in the claim in question are blatant lies . US prices end up as profits for a large part and are not reinvested, certainly not into research and development of new drugs, as the industry likes to claim. These lies are used, as the great philosopher DJ Trump once said: "to get away with murder." Let's look at the developer centres in a what if scenario: The claim is based on implicit assumptions and indirect implications: that almost all the innovation comes from US based firms and that they throw the money at the problem of development. This overlooks gracefully that amount spent is not equal to either efficiency or effectivity in conducting research. It also overlooks gracefully that drugs from non-US companies are also often more expensive in the US than elsewhere. And it assumes that all the money going round in the US for research is the greatest amount anywhere. Is that the case? Let's ask some investors: The top 10 pharma R&D budgets in 2016 Swiss oncology major Roche was tops in total terms, spending a massive CHF11.53 billion ($11.42 billion) last year, nearly 23% of its CHF50.57 billion in revenue. It also recorded a 20% jump in R&D spending compared with 2015, the biggest increase among the top 10, with most of this increase going into its pharmaceuticals divisions, the rest into diagnostics. That does of course not prove whether US profits are also driving Swiss innovation, but it shows that the self-reported relation between revenue and R&D might not be always so much in favour of US based companies in this regard. A recent analysis for some pharmaceutical companies found that US prices alone carry a hefty margin, unexplainable with any R&D: R&D Costs For Pharmaceutical Companies Do Not Explain Elevated US Drug Prices (Health Affairs, 2017) Excess Revenues Earned Through Premium Pricing Of Products In The US As A Percentage Of The Company’s Global Research And Development Expenditures, 2015 Comment: We found that the premiums pharmaceutical companies earn from charging substantially higher prices for their medications in the US compared to other Western countries generates substantially more than the companies spend globally on their research and development. This finding counters the claim that the higher prices paid by US patients and taxpayers are necessary to fund research and development. Rather, there are billions of dollars left over even after worldwide research budgets are covered. To put the excess revenue in perspective, lowering the magnitude of the US premium to a level where it matches global R&D expenditures across the 15 companies we assessed would have saved US patients, businesses, and taxpayers approximately $40 billion in 2015, a year for which the Centers for Medicare and Medicaid Services (CMS) reported that total US spending on pharmaceuticals was $325 billion. Revenues Earned From US Premium Pricing And Global Spending On Research And Development Of The 15 Pharmaceutical Companies Responsible For The World’s 20 Top-Selling Products In 2015 But these numbers above are based on the industry's own estimates, somewhat lacking in transparency and apparently erring on the inflated side. 
In reality the actual costs for developing a new drug are very probably much lower: Vinay Prasad & Sham Mailankody: "Research and Development Spending to Bring a Single Cancer Drug to Market and Revenues After Approval", JAMA Intern Med. 2017;177(11):1569-1575. doi:10.1001/jamainternmed.2017.3601 A common justification for high cancer drug prices is the sizable research and development (R&D) outlay necessary to bring a drug to the US market. A recent estimate of R&D spending is $2.7 billion (2017 US dollars). However, this analysis lacks transparency and independent replication. The cost to develop a cancer drug is $648.0 million, a figure significantly lower than prior estimates. The revenue since approval is substantial (median, $1658.4 million; range, $204.1 million to $22 275.0 million). Unregulated markets lead invariably to crisis and failure.

Markup of Select Prescription Drugs

Drug             Consumer Price for 100 Tablets   Cost of Active Ingredients   Percent Markup
Xanax 1mg        $136.79                          $0.024                       569,958%
Prozac 20mg      $247.47                          $0.11                        224,973%
Norvasec 10mg    $188.2                           $0.14                        134,493%
Claritin 10mg    $215.17                          $0.71                        30,306%
Celebrex 100mg   $130.27                          $0.60                        21,712%
Keflex 250mg     $157.39                          $1.88                        8,372%
Lipitor 20mg     $272.37                          $5.80                        4,696%

Source: The list was published by members of the U.S. Department of Commerce and the Bureau of Economic Analysis. Xanax, or Alprazolam, is covered under U.S. Patent 3,987,052, which was filed on 29 October 1969, granted on 19 October 1976, and expired in September 1993. Market prices for this drug do not reflect recent development costs. If the situations in Europe and America are compared, correlations between anything like manufacturers, developers or researchers and market prices of drugs are very hard to find. But a big correlation exists in how the local government policy plays out: Olivier J. Wouters, Panos G. Kanavos & Martin McKee: "Comparing Generic Drug Markets in Europe and the United States: Prices, Volumes, and Spending", The Milbank Quarterly, Volume 95, Issue 3, September 2017, Pages 554-601, https://doi.org/10.1111/1468-0009.12279 Meanwhile, telling "innovation crisis" stories to politicians and the press serves as a ploy, a strategy to attract a range of government protections from free market, generic competition. How much does research and development cost? Although the pharmaceutical industry emphasises how much money it devotes to discovering new drugs, little of that money actually goes into basic research. Data from companies, the United States National Science Foundation, and government reports indicate that companies have been spending only 1.3% of revenues on basic research to discover new molecules, net of taxpayer subsidies. More than four fifths of all funds for basic research to discover new drugs and vaccines come from public sources. Moreover, despite the industry's frequent claims that the cost of new drug discovery is now $1.3bn (£834m; €1bn), this figure, which comes from the industry-supported Tufts Center, has been heavily criticised. Half that total comes from estimating how much profit would have been made if the money had been invested in an index fund of pharmaceutical companies that increased in value 11% a year, compounded over 15 years. While used by finance committees to estimate whether a new venture is worth investing in, these presumed profits (far greater than the rise in the value of pharmaceutical stocks) should not be counted as research and development costs on which profits are to be made.
Half of the remaining $0.65bn is paid by taxpayers through company deductions and credits, bringing the estimate down to one quarter of $1.3bn or $0.33bn. The Tufts study authors report that their estimate was done on the most costly fifth of new drugs (those developed in-house), which the authors reported were 3.44 times more costly than the average, reducing the estimate to $90m. The median costs were a third less than the average, or $60m. Deconstructing other inflators would lower the estimate of costs even further. Myth of unsustainable research and development Complementing the stream of articles about the innovation crisis are those about the costs of research and development being “unsustainable” for the small number of new drugs approved. Both claims serve to justify greater government support and protections from generic competition, such as longer data exclusivity and more taxpayer subsidies. However, although reported research and development costs rose substantially between 1995 and 2010, by $34.2bn, revenues increased six times faster, by $200.4bn. Companies exaggerate costs of development by focusing on their self reported increase in costs and by not mentioning this extraordinary revenue return. Net profits after taxes consistently remain substantially higher than profits for all other Fortune 500 companies. This hidden business model for pharmaceutical research, sales, and profits has long depended less on the breakthrough research that executives emphasise than on rational actors exploiting ever broader and longer patents and other government protections against normal free market competition. Companies are delighted when research breakthroughs occur, but they do not depend on them, declarations to the contrary notwithstanding. The 1.3% of revenues devoted to discovering new molecules compares with the 25% that an independent analysis estimates is spent on promotion, and gives a ratio of basic research to marketing of 1:19. The true crisis in pharmaceutical research The number of new drugs licensed remains at the long term average range of 15-25 a year However, 85-90% of new products over the past 50 years have provided few benefits and considerable harms The pharmaceutical industry devotes most research funds to developing scores of minor variations that produce a steady stream of profits Heavy promotion of these drugs contributes to overuse and accounts for as much as 80% of a nation’s increase in drug expenditure Overinflated estimates of the average cost of research and development are used to lobby for more protection from free market competition From: Donald W Light & Joel R Lexchin: "Pharmaceutical research and development: what do we get for all that money?" BMJ 2012; 345 doi: https://doi.org/10.1136/bmj.e4348 (Published 07 August 2012) Prescription drug prices in the United States have been among the highest in the world. And given the facts available, that is not easily interpreted in any other way as U.S. Prescription Drug Costs Are a Crime . The United States, which leaves pricing to market competition, has higher drug prices than other countries where governments directly or indirectly control medicine costs. That makes it by far the most profitable market for pharmaceutical companies, leading to complaints that Americans are effectively subsidizing health systems elsewhere. 
Many of the biggest differences were evident for older drugs, reflecting the fact that prices are typically hiked each year in the United States, said University of Liverpool drug pricing expert Andrew Hill. "It shows the U.S. drug pricing situation isn't just a matter of isolated cases like Turing Pharmaceuticals ," he said. The latest furore over U.S. drug costs was prompted by the decision by unlisted Turing to hike the cost of an old drug against a parasitic infection to $750 a pill from $13.50. It has since promised to roll back the increase. The same medicine is sold in Britain by GlaxoSmithKline for 43 pence (66 cents). (From: Ben Hirschler: "How the U.S. Pays 3 Times More for Drugs", Scientific American ) Looking at the relationship between advertising, promotion on one side and the actual development put forward with increasingly abysmal results for US based innovations makes it much more urgent to look at other reasons for price increases. One reason often heard is that pharmaceutical companies are very simply not non-profit-organisations. Paying for Prescription Drugs Around the World: Why Is the U.S. an Outlier? One reason U.S. prescription drug prices are higher may be the relative lack of price control strategies. Unlike the U.S., many other countries employ centralized price negotiations, national formularies, and comparative and cost-effectiveness research for determining price ceilings. In the U.S., health care delivery and payment are fragmented, with numerous, separate negotiations between drug manufacturers and payers and complex arrangements for various federal and state health programs. And, in general, the U.S. allows wider latitude for monopoly pricing of brand-name drugs than other countries are willing to accept. Summary Most of the research and development is financed from external, public sources. The money that is captured via US prices is not spent on research, but on manipulation, mergers, and maximum payouts. Why are they doing this? Because they can. What are the drug prices in the US subsidising? Profits.
{}
{'log_upvote_score': 4, 'links': ['https://skeptics.stackexchange.com/questions/40943', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/3943/']}
jdg_74244
stackexchange
llm_judgeable_groundtruth_similarity
11018609
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to Backbone and started by working through the Todos example. After that I created a new version of the example, for Contacts rather than Todos, that uses a Ruby on Rails web app and and it's associated REST API rather than localstorage for persistence. After making the modifications I am able to successfully have the Backbone app update the Rails app but I cannot get the Backbone views to render the data that the Backbone app receives from the Rails REST API. I have stepped through the code in the debugger and can see that: the events that call the functions to populate the views are being bound to the collection of models when I fetch the model data the collection is getting updated with the data from the server however, the reset event bound to the collection does not fire Can anybody point me to what might be causing the reset event to not fire? My code is below: Collection : var ContactsList = Backbone.Collection.extend({ model: Contact, url: 'http://localhost:3000/contacts.json',});var Contacts = new ContactsList; AppView: var AppView = Backbone.View.extend({ el: $("#contactapp"), events: { "keypress #new-contact": "createOnEnter" }, initialize: function() { this.input = this.$("#new-contact"); Contacts.bind('add', this.addOne, this); Contacts.bind('reset', this.addAll, this); Contacts.bind('all', this.render, this); Contacts.fetch(); }, addOne: function(contact) { var view = new ContactView({model: contact}); this.$("#contact-list").append(view.render().el); }, addAll: function() { Contacts.each(this.addOne); }, createOnEnter: function(e) { if (e.keyCode != 13) return; if (!this.input.val()) return; Contacts.create({first_name: this.input.val()}); this.input.val(''); },});var App = new AppView; Now provide the response and nothing else.
Is there any generic alternative / implementation for MemoryCache? Not in the base class library. You'd have to roll your own, though I, personally, would just make a wrapper around MemoryCache that provides the API you wish. This would provide type safety and provide performance benefits as no boxing/unboxing The type safety can be handled fairly easily in a wrapper class. The boxing/unboxing would only be an issue if you were storing value types (not classes), and even then, would likely be minimal, as it's unlikely that you're pushing and pulling from cache often enough to have this be a true performance issue. As for type safety and usability, I've actually written my own methods to wrap the MemoryCache item's calls in a generic method , which allows a bit nicer usage from an API standpoint. This is very easy - typically just requires a method like: public T GetItem<T>(string key) where T : class{ return memoryCache[key] as T;} Similarly, you can make a method to set values the same way. EDIT: Another thing I'm interested in is having a different key type. The default is a System.String. This is not supported directly with MemoryCache , so it would require a fair bit of work to make your own key generation. One option would be to make a type safe wrapper which also provided a Func<T, string> to generate a string key based off your value - which would allow you to generate a cache entry for any type T . You'd have to be careful, of course, to include all data in the string that you wanted as part of your comparison, however.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11018609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1259424/']}
jdg_74245
stackexchange
llm_judgeable_groundtruth_similarity
936830
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I made the differential equation : $$dQ = (-1/100)2Q dt$$ I separate it and get: $\int_a^b x (dQ/Q) = \int_a^b x (-2/100)dt$ this leads me to: $\log(|Q|) = (-t/50) + C$ I simplify that to $Q = e^{-t/50}$ My TI-Nspire differential equation solver, however, gives me: $Q = Ce^{-t/50}$ I'm confused as to why the calculator is multiple my answer with a constant and which one is the correct answer. Now provide the response and nothing else.
It is quite likely that you were told that $X$ and $Y$ are independent random variables, but neglected to pass on this information to us. Assuming that $X$ and $Y$ are independent Poisson random variables, $U = X+Y$ is also a Poisson random variable with parameter $\theta+\lambda$. Thus, the probability that $U = m$ ($m$ is a nonnegative integer here) is
$$p_{X+Y}(m) = \frac{e^{-(\theta+\lambda)}(\theta+\lambda)^m}{m!}.$$
Conditioned on $X+Y = m$, the conditional probability that $Y = n$ is $0$ if $n > m$, while for $0 \leq n \leq m$,
$$\begin{align}
p_{Y\mid X+Y=m}(n\mid m) &= \frac{P\left(\{Y=n\} \cap \{X+Y=m\}\right)}{P\{X+Y=m\}} &{\scriptstyle{\text{definition of conditional probability}}}\\
&= \frac{P\{X=m-n,Y=n\}}{P\{X+Y=m\}} &{\scriptstyle{\text{a re-write}}}\\
&= \frac{P\{X=m-n\}P\{Y=n\}}{\frac{e^{-(\theta+\lambda)}(\theta+\lambda)^m}{m!}} &{\scriptstyle{\text{independence of}~X~\text{and}~Y}}\\
&= \frac{\frac{e^{-\theta}\,\theta^{m-n}}{(m-n)!}\cdot\frac{e^{-\lambda}\,\lambda^{n}}{n!}}{\frac{e^{-(\theta+\lambda)}(\theta+\lambda)^m}{m!}}\\
&= \binom{m}{n}\left(\frac{\theta}{\theta+\lambda}\right)^{m-n}\left(\frac{\lambda}{\theta+\lambda}\right)^{n}
\end{align}$$
which shows that, conditioned on the value of $X+Y$, $Y$ is a binomial random variable.
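If you want to sanity-check the algebra numerically, here is a small simulation sketch (mine, not part of the derivation; the parameter values are arbitrary). It draws independent Poisson samples and compares the empirical conditional frequencies of $Y$ given $X+Y=m$ with the Binomial$(m, \lambda/(\theta+\lambda))$ pmf derived above:

import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
theta, lam, m, trials = 2.0, 3.0, 5, 200_000

x = rng.poisson(theta, trials)
y = rng.poisson(lam, trials)
cond = y[x + y == m]                      # samples of Y conditioned on X + Y = m

for n in range(m + 1):
    empirical = np.mean(cond == n)
    exact = binom.pmf(n, m, lam / (theta + lam))
    print(n, round(float(empirical), 4), round(float(exact), 4))

The two columns should agree to within simulation noise.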
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/936830', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/112964/']}
jdg_74246
stackexchange
llm_judgeable_groundtruth_similarity
9472488
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm starting to dive into dependently-typed programming and have found that the Agda and Idris languages are the closest to Haskell, so I started there. My question is: which are the main differences between them? Are the type systems equally expresive in both of them? It would be great to have a comprehensive comparative and a discussion about benefits. I've been able to spot some: Idris has type classes à la Haskell, whereas Agda goes with instance arguments Idris includes monadic and applicative notation Both of them seem to have some sort of rebindable syntax, although not really sure if they are the same. Edit : there are some more answers in the Reddit page of this question: http://www.reddit.com/r/dependent_types/comments/q8n2q/agda_vs_idris/ Now provide the response and nothing else.
I may not be the best person to answer this, as having implemented Idris I'm probably a bit biased! The FAQ - http://docs.idris-lang.org/en/latest/faq/faq.html - has something to say on it, but to expand on that a bit: Idris has been designed from the ground up to support general purpose programming ahead of theorem proving, and as such has high level features such as type classes, do notation, idiom brackets, list comprehensions, overloading and so on. Idris puts high level programming ahead of interactive proof, although because Idris is built on a tactic-based elaborator, there is an interface to a tactic based interactive theorem prover (a bit like Coq, but not as advanced, at least not yet). Another thing Idris aims to support well is Embedded DSL implementation. With Haskell you can get a long way with do notation, and you can with Idris too, but you can also rebind other constructs such as application and variable binding if you need to. You can find more details on this in the tutorial, or full details in this paper: http://eb.host.cs.st-andrews.ac.uk/drafts/dsl-idris.pdf Another difference is in compilation. Agda goes primarily via Haskell, Idris via C. There is an experimental back end for Agda which uses the same back end as Idris, via C. I don't know how well maintained it is. A primary goal of Idris will always be to generate efficient code - we can do a lot better than we currently do, but we're working on it. The type systems in Agda and Idris are pretty similar in many important respects. I think the main difference is in the handling of universes. Agda has universe polymorphism, Idris has cumulativity (and you can have Set : Set in both if you find this too restrictive and don't mind that your proofs might be unsound).
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9472488', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1236540/']}
jdg_74247
stackexchange
llm_judgeable_groundtruth_similarity
173735
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we want to define a first-order language to do set theory (so we can formalize mathematics).One such construction can be found here .What makes me uneasy about this definition is that words such as "set", "countable", "function", and "number" are used in somewhat non-trivial manners.For instance, behind the word "countable" rests an immense amount of mathematical knowledge: one needs the notion of a bijection, which requires functions and sets.One also needs the set of natural numbers (or something with equal cardinality), in order to say that countable sets have a bijection with the set of natural numbers. Also, in set theory one uses the relation of belonging " $\in$ ".But relation seems to require the notion an ordered pair, which requires sets, whose properties are described using belonging... I found the following in Kevin Klement's, lecture notes on mathematical logic (pages 2-3). "You have to use logic to study logic. There’s no getting away from it.However, I’m not going to bother stating all the logical rules that are valid in the metalanguage, since I’d need to do that in the metametalanguage, and that would just get me started on an infinite regress.The rule of thumb is: if it’s OK in the object language, it’s OK in the metalanguage too." So it seems that, if one proves a fact about the object language, then one can also use it in the metalanguage.In the case of set theory, one may not start out knowing what sets really are, but after one proves some fact about them (e.g., that there are uncountable sets) then one implicitly "adds" this fact also to the metalanguage. This seems like cheating: one is using the object language to conduct proofs regarding the metalanguage, when it should strictly be the other way round. To give an example of avoiding circularity, consider the definition of the integers.We can define a binary relation $R\subseteq(\mathbf{N}\times\mathbf{N})\times(\mathbf{N}\times\mathbf{N})$ , where for any $a,b,c,d\in\mathbf{N}$ , $((a,b),(c,d))\in R$ iff $a+d=b+c$ , and then defining $\mathbf{Z}:= \{[(a,b)]:a,b\in\mathbf{N}\}$ , where $[a,b]=\{x\in \mathbf{N}\times\mathbf{N}: xR(a,b)\}$ , as in this question or here on Wikipedia. In this definition if set theory and natural numbers are assumed, then there is no circularity because one did not depend on the notion of "subtraction" in defining the integers. So my question is: Question Is the definition of first-order logic circular? If not, please explain why. If the definitions are circular, is there an alternative definition which avoids the circularity? Some thoughts: Perhaps there is the distinction between what sets are (anything that obeys the axioms) and how sets are expressed (using a formal language).In other words, the notion of a set may not be circular, but to talk of sets using a formal language requires the notion of a set in a metalanguage. In foundational mathematics there also seems to be the idea of first defining something, and then coming back with better machinery to analyse that thing.For instance, one can define the natural numbers using the Peano axioms, then later come back to say that all structures satisfying the axioms are isomorphic. (I don't know any algebra, but that seems right.) Maybe sets, functions, etc., are too basic? Is it possible to avoid these terms when defining a formal language? 
Now provide the response and nothing else.
It's only circular if you think we need a formalization of logic in order to reason mathematically at all. However, mathematicians reasoned mathematically for many centuries before formal logic was invented, so this assumption is obviously not true. It's an empirical fact that mathematical reasoning existed independently of formal logic back then. I think it is reasonably self-evident, then, that it still exists without needing formal logic to prop it up. Formal logic is a mathematical model of the kind of reasoning mathematicians accept -- but the model is not the thing itself. A small bit of circularity does creep in, because many modern mathematicians look to their knowledge of formal logic when they need to decide whether to accept an argument or not. But that's not enough to make the whole thing circular; there are enough non-equivalent formal logics (and possible foundations of mathematics) to choose between that the choice of which one to use to analyze arguments is still largely informed by which arguments one intuitively wants to accept in the first place, not the other way around.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/173735', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/35525/']}
jdg_74248
stackexchange
llm_judgeable_groundtruth_similarity
10965984
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a standalone test suite using mocha, that in a perfect world would start up my express.js application, use zombie to render a page, check a bunch of stuff and then teardown/kill the express.js application. Is there an easy/best way to do this? NB. I could just have the express application server running prior to running the tests, but what good are Yaks if you're not going to shave them. Now provide the response and nothing else.
First, you need to move your actual app setup into a module, and import that into the file that actually starts your app. Now that this is separate, you can have the app in its complete state before actually listening. You should move the actual setting up of your app into a separate file, let's call it app.js, and call listen from the file you run node off of, let's call it index.js. So, app.js would look like:

var express = require('express')
  , routes = require('./routes');

var app = module.exports = express.createServer();

// Configuration
app.configure(function(){
  app.set('views', __dirname + '/views');
  app.set('view engine', 'jade');
  app.use(express.bodyParser());
  app.use(express.methodOverride());
  app.use(app.router);
  app.use(express.static(__dirname + '/public'));
});

app.configure('development', function(){
  app.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
});

app.configure('production', function(){
  app.use(express.errorHandler());
});

// Routes
app.get('/', routes.index);

and index.js would look like:

var app = require('./app');

app.listen(3000, function(){
  console.log("Express server listening on port %d in %s mode", app.address().port, app.settings.env);
});

This separates loading of your app from actually having it listen, allowing you to load that app into your unit tests. In your unit tests, you would do something in a setup method and teardown method to bring up and bring down the server. In the file test/app_tests.js:

describe('app', function(){
  var app = require('../app');

  beforeEach(function(){
    app.listen(3000);
  });

  // tests here

  afterEach(function(){
    app.close();
  });
});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10965984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/599251/']}
jdg_74249
stackexchange
llm_judgeable_groundtruth_similarity
167938
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Does it make a difference if I declare variables inside or outside a loop in Java? Is this for(int i = 0; i < 1000; i++) { int temp = doSomething(); someMethod(temp);} equal to this (with respect to memory usage)? int temp = 0;for(int i = 0; i < 1000; i++) { temp = doSomething(); someMethod(temp);} And what if the temporary variable is for example an ArrayList? for(int i = 0; i < 1000; i++) { ArrayList<Integer> array = new ArrayList<Integer>(); fillArray(array); // do something with the array} EDIT: with javap -c I got the following output Variable outside the loop: public static void main(java.lang.String[]); Code: 0: iconst_0 1: istore_1 2: iconst_0 3: istore_2 4: iload_2 5: sipush 1000 8: if_icmpge 25 11: invokestatic #2 // Method doSomething:()I 14: istore_1 15: iload_1 16: invokestatic #3 // Method someMethod:(I)V 19: iinc 2, 1 22: goto 4 25: return Variable inside the loop: public static void main(java.lang.String[]); Code: 0: iconst_0 1: istore_1 2: iload_1 3: sipush 1000 6: if_icmpge 23 9: invokestatic #2 // Method doSomething:()I 12: istore_2 13: iload_2 14: invokestatic #3 // Method someMethod:(I)V 17: iinc 1, 1 20: goto 2 23: return And for the interested, this code: public class Test3 { public static void main(String[] args) { for(int i = 0; i< 1000; i++) { someMethod(doSomething()); } } private static int doSomething() { return 1; } private static void someMethod(int temp) { temp++; }} produces this: public static void main(java.lang.String[]); Code: 0: iconst_0 1: istore_1 2: iload_1 3: sipush 1000 6: if_icmpge 21 9: invokestatic #2 // Method doSomething:()I 12: invokestatic #3 // Method someMethod:(I)V 15: iinc 1, 1 18: goto 2 21: return But the optimization happens at runtime then. Is there a way to look at the optimized code? (Sorry for the long EDIT) Now provide the response and nothing else.
The common answer to most of these questions should be "why don't you try it and find out?". In Java you could probably take a look at the generated bytecode (I believe the tool is called javap), to see what the difference in byte code is between those two ways of declaring the variable. Doing it like that is a better learning experience for you, because next time you're running into an optimization issue you can use the same tool to verify that the compiler is doing what you are expecting - it will help you avoid needlessly changing your coding style when the optimizer does fine on its own, or finding actual tweaks when you really need that last bit of performance.
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/167938', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/63220/']}
jdg_74250
stackexchange
llm_judgeable_groundtruth_similarity
1413046
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: today i was reading about pascal's triangle. the website pointed out that the 3th diagonal row were the triangular numbers. which can be easily expressed by the following formula. $$\sum_{i=0}^n i = \frac{n(n+1)}{2}$$ i wondered if the following rows could be expressed with such a simple formula.when trying to find the sum for the 3th row i used a method called "differences" i found on this site: http://www.trans4mind.com/personal_development/mathematics/series/sumNaturalSquares.htm lets call $P_r$ the $r^{th}$ row of pascals triangle. The result for the 4th row was $$\sum_{i=0}^n P_3 = \frac{n(n+1)(n+2)}{6}$$and the result for 4th row was $$\sum_{i=0}^n P_4 = \frac{n(n+1)(n+2)(n+3)}{24}$$ i guessed the sum of the 5th row would be$$\sum_{i=0}^n P_5 = \frac{n(n+1)(n+2)(n+3)(n+4)}{120}$$i plotted the function and looking at the graph it seems to be correct.it looks like the the sum of each row is: $$\sum_{i=0}^n P_r = \frac{(n + 0)\cdots(n+(r-1))}{r!}$$ is this true for all rows? and why?i think this has something to do with combinatorics/probability which i never studied. thanks in advance edit image for $P_r$: http://i.imgur.com/JlVC4q3.png Now provide the response and nothing else.
So you basically want to prove that $$\binom{n}{n}+\binom{n+1}{n}+\binom{n+2}{n}+\dotsc+\binom{n+k}{n}=\binom{n+k+1}{n+1}$$ holds for all $n,k$, right? Of course you can prove this using induction and Pascal's formula $$\binom{n}{k}+\binom{n}{k+1}=\binom{n+1}{k+1}$$ as suggested by Did. There is a nice combinatorial interpretation of this using double-counting: Suppose you have $n+1$ eggs, $n$ of them blue and 1 red. You want to choose $k+1$ of them, which is the RHS: $\binom{n+1}{k+1}$. Either you choose the red one, in which case you have $\binom{n}{k}$ possibilities for the remaining ones, or you don't choose the red one, in which case you have $\binom{n}{k+1}$ possibilities for the remaining ones. Can you think of a similar combinatorial argument which directly works for the original sum? Hint: Think of $n+k+1$ balls in a row labelled $1,2,\dotsc,n+k+1$. You want to choose $n+1$ of them. That's the RHS: $\binom{n+k+1}{n+1}$. Now, distinguish cases according to which is the rightmost ball you choose. If it's ball number $n+k+1$ you have $\binom{n+k}{n}$ possibilities to choose the remaining $n$ balls. If it's ball number $n+k$ you have $\binom{n+k-1}{n}$ possibilities, etc. Can you complete it from here?
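In case it helps, here is one way to finish the count from the hint (this completion is mine, not part of the original answer): if the rightmost chosen ball is ball number $n+j+1$ for some $0 \le j \le k$, the remaining $n$ chosen balls must come from the $n+j$ balls to its left, which can be done in $\binom{n+j}{n}$ ways. Summing over the position of the rightmost ball gives
$$\binom{n+k+1}{n+1}=\sum_{j=0}^{k}\binom{n+j}{n}=\binom{n}{n}+\binom{n+1}{n}+\dotsb+\binom{n+k}{n},$$
which is exactly the identity to be proved.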
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1413046', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/196785/']}
jdg_74251
stackexchange
llm_judgeable_groundtruth_similarity
73009
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to count the number of subsequences in list. Say I have data given by: data = {1, 1, 1, 2, 1, 2, 2, 2, 1, 2, 1}; I would like to count the number times I see the following subsequences: {1,1}, {1,2}, {2,1}, {2,2} . Is there a quick way to do so? I've tried a few approaches using Count and Cases with no luck. Any advice would be greatly appreciated. Thanks! Now provide the response and nothing else.
data = {1, 1, 1, 2, 1, 2, 2, 2, 1, 2, 1};
sub = {{1, 1}, {1, 2}, {2, 1}, {2, 2}};

Count[Partition[data, 2, 1], Alternatives @@ sub]

10

Or each separately:

Count[Partition[data, 2, 1], #] & /@ sub

{2, 3, 3, 2}

More general, for different length subsequences:

Length[ReplaceList[data, {___, ##, ___} :> {}]] & @@@ sub

{2, 3, 3, 2}

Length[ReplaceList[data, {___, ##, ___} :> {}]] & @@@ {{1, 2, 1}, {1}}

{2, 6}
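Not part of the original answer, but for comparison, here is a rough Python analogue of the same sliding-window counting idea (the data and subsequences are copied from the question):

from collections import Counter

data = [1, 1, 1, 2, 1, 2, 2, 2, 1, 2, 1]
subs = [(1, 1), (1, 2), (2, 1), (2, 2)]

# Overlapping windows of length 2, analogous to Partition[data, 2, 1]
pairs = Counter(zip(data, data[1:]))

print([pairs[s] for s in subs])     # [2, 3, 3, 2], one count per subsequence
print(sum(pairs[s] for s in subs))  # 10, total across all four subsequences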
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/73009', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/25088/']}
jdg_74252
stackexchange
llm_judgeable_groundtruth_similarity
41200127
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to get the image index from the .docx file using python-docx library. I'm able to extract the name of the image, image height and width. But not the index where it is in the word file import docxdoc = docx.Document(filename)for s in doc.inline_shapes: print (s.height.cm,s.width.cm,s._inline.graphic.graphicData.pic.nvPicPr.cNvPr.name) output 21.228 15.920 IMG_20160910_220903848.jpg In fact I would like to know if there is any simpler way to get the image name , like s.height.cm fetched me the height in cm. My primary requirement is to get to know where the image is in the document, because I need to extract the image and do some work on it and then again put the image back to the same location Now provide the response and nothing else.
This operation is not directly supported by the API. However, if you're willing to dig into the internals a bit and use the underlying lxml API it's possible. The general approach would be to access the ImagePart instance corresponding to the picture you want to inspect and modify, then read and write the ._blob attribute (which holds the image file as bytes). This specimen XML might be helpful: http://python-docx.readthedocs.io/en/latest/dev/analysis/features/shapes/picture.html#specimen-xml
From the inline shape containing the picture, you get the <a:blip> element with this:
blip = inline_shape._inline.graphic.graphicData.pic.blipFill.blip
The relationship id (r:id generally, but r:embed in this case) is available at:
rId = blip.embed
Then you can get the image part from the document part:
document_part = document.part
image_part = document_part.related_parts[rId]
And then the binary image is available for read and write on ._blob . If you write a new blob, it will replace the prior image when saved. You probably want to get it working with a single image and get a feel for it before scaling up to multiple images in a single document. There might be one or two image characteristics that are cached, so you might not get all the finer points working until you save and reload the file, so just be alert for that. Not for the faint of heart as you can see, but should work if you want it bad enough and can trace through the code a bit :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41200127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4881398/']}
jdg_74253
stackexchange
llm_judgeable_groundtruth_similarity
23569732
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why can't we use @Transactional for static methods to manage the transactions in my Spring project? @Transactional works well for non-static methods but not for static methods; any specific reason? Now provide the response and nothing else.
In order to understand why something like what you are proposing does not work you have to first understand at a high level how Spring handles beans that use @Transactional . When you annotate a method or the class as @Transactional and make it a Spring Bean, Spring effectively creates a proxy for that class (using JDK Dynamic proxies or CGLIB proxies). That means that whenever your class is used (from Spring managed code that is), it's not your code that gets called immediately, but the proxy which first does whatever is needed, and then your code is called (in the case of caching support your code would perhaps not even be called at all). A key thing to remember here is that the invoking code (the call site if you will) does not change at all, and the invocation of the required target method (the proxy method) is performed by the JVM using the same bytecode ( invokevirtual or invokeinterface ). With that in mind, the reason that static is not supported becomes clear. You can't create a proxy for a static method! Of course Java Dynamic Proxies cannot do this, and neither can CGLIB. Supporting such a feature would require changing the bytecode of the invoking code, since calling a static method is implemented via invokestatic in bytecode, which hard-codes the target method. This part of the Spring documentation explains Spring AOP in detail
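To make the proxy mechanism concrete, here is a minimal sketch using a plain JDK dynamic proxy rather than Spring's actual machinery; OrderService, OrderServiceImpl and the println "transaction" markers are hypothetical stand-ins, so treat it only as an illustration of the idea:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface OrderService {
    void placeOrder(String item); // instance method: calls can be routed through a proxy
}

class OrderServiceImpl implements OrderService {
    public void placeOrder(String item) { System.out.println("saving " + item); }
}

public class ProxyDemo {
    public static void main(String[] args) {
        OrderService target = new OrderServiceImpl();
        // Stand-in for what @Transactional wraps around your method.
        InvocationHandler txHandler = (proxy, method, methodArgs) -> {
            System.out.println("BEGIN TRANSACTION");
            Object result = method.invoke(target, methodArgs);
            System.out.println("COMMIT");
            return result;
        };
        OrderService proxied = (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[]{OrderService.class},
                txHandler);
        proxied.placeOrder("book"); // goes through txHandler, then the real implementation
        // A call like SomeClass.someStaticMethod() compiles to invokestatic against the
        // concrete class, so no proxy instance ever sits between caller and method.
    }
}
```

The proxied call is only interceptable because the caller holds a reference of the interface type; nothing comparable exists for a static invocation, which is exactly the limitation described above.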
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23569732', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1763507/']}
jdg_74254
stackexchange
llm_judgeable_groundtruth_similarity
74674
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am after a simple, yet effective, Image Edge Preserving Filter. I need something which is faster than the Bilateral Filter with similar quality (The Guided Filter isn't good enough). Are there such algorithms? Now provide the response and nothing else.
Recently I have seen the paper Hui Yin, Yuanhao Gong, Guoping Qiu - Side Window Filtering. They suggest a really simple filtering framework for edge-preserving filtering: basically, what they suggest is filtering the image with a set of filters based on the Box Filter. This filter set is composed of 8 filters with different orientations and subsets of the Box Filter (as seen in the figure above). Once you apply all the filters (each of which can be implemented very efficiently using the efficient implementations available for Box Filtering), you choose, on a per-pixel basis, the one most similar to the original pixel. Applying it iteratively yields a very efficient and very good filter. I took the Lena image and applied 40 iterations of the filter ( Link to 75 Iterations ): The full code is available on my StackExchange Signal Processing Q74674 GitHub Repository (look at the SignalProcessing\Q74674 folder).
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/74674', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/37293/']}
jdg_74255
stackexchange
llm_judgeable_groundtruth_similarity
4857315
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to do so as it seems partial method must return void (I really don't understand this limitation but let it be) ? Now provide the response and nothing else.
Well, technically you can "return" a value from a partial method, but it has to be through a ref argument, so it's quite awkward:
partial void Foo(ref int result);
partial void Foo(ref int result)
{
    result = 42;
}
public void Test()
{
    int i = 0;
    Foo(ref i); // 'i' is 42.
}
In that example, the value of i won't change if Foo() is not implemented.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4857315', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/310291/']}
jdg_74256
stackexchange
llm_judgeable_groundtruth_similarity
40935886
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Enzyme with enzyme-to-json to do Jest snapshot testing of my React components. I'm testing shallow snapshots of a DateRange component that renders a display field with the current range (e.g. 5/20/2016 - 7/18/2016 ) and two DateInput components that allow selecting a Date value. This means that my snapshot contains the Date s I pass to the component both in the DateInput props and in a text representation it resolves itself. In my test I'm creating some fixed dates using new Date(1995, 4, 23) . When I run my test in different timezones, this produces different snapshots , because the Date(year, month, ...) constructor creates the date in the local timezone. E.g. use of new Date() produces this difference in snapshot between runs in my local timezone and on our CI server. - value={1995-05-22T22:00:00.000Z}+ value={1995-05-23T00:00:00.000Z} I tried removing the timezone offset from the dates, but then the snapshot differed in the display field value, where the local timezone-dependent representation is used. - value={5/20/2016 - 7/18/2016}+ value={5/19/2016 - 7/17/2016} How can I make my tests produce the same Date s in snapshots regardless of the timezone they're run in? Now provide the response and nothing else.
I struggled with this for hours/days and only this worked for me: 1) In your test: Date.now = jest.fn(() => new Date(Date.UTC(2017, 7, 9, 8)).valueOf()) 2) Then change the TZ env var before running your tests. So the script in my package.json: ( Mac & Linux only ) "test": "TZ=America/New_York react-scripts test --env=jsdom", ( Windows ) "test": "set TZ=America/New_York && react-scripts test --env=jsdom",
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/40935886', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2940792/']}
jdg_74257
stackexchange
llm_judgeable_groundtruth_similarity
3297884
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Mathematica, it is possible to reuse the output of the previous command by using %. Is something similar possible for bash (or some other shell)? For example, I run a make which gives warnings, but I want to find all warnings.So, I type make | grep "warning" but I'm not able to see the output of the make then. I would like to type something like this instead: make% | grep "warning" Now provide the response and nothing else.
Since the amount of output is indeterminate, it doesn't make sense for bash to store it for you for re-display. But there's an alternate solution to your problem: The tee command allows you to duplicate an output stream to a file. So if you're willing to use a file for temporary storage, you can do something like this:
make | tee output.txt
grep "warning" output.txt
This solution avoids running make twice, which could be (a) expensive and (b) inconsistent: the second make may be doing less work than the first because some targets were already made the first time around. Note: I haven't tried this. You may need to fiddle with joining the error and output streams, or such.
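Since compiler warnings usually go to stderr rather than stdout, that last caveat matters in practice; a common way to fold both streams into the pipe (assuming a POSIX shell) is:

```sh
make 2>&1 | tee output.txt
grep "warning" output.txt
```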
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3297884', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/390321/']}
jdg_74258
stackexchange
llm_judgeable_groundtruth_similarity
121868
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have generated a 12*12 matrix with m = RandomInteger[{2, 5}, {12, 12}]; but I don't know how I can sum blocks on the main diagonal of the matrix such as: It means the final results must be as finalmatrix={{13,10,15},{13,11,15},{14,15,17}} Now provide the response and nothing else.
ClearAll[blockPlus]
blockPlus = Tr[Partition[#, {#2, #2}], Plus, 2] &;
SeedRandom[1]
m = RandomInteger[{2, 5}, {12, 12}];
m // Grid[#, Dividers -> {#, #} &@Thread[Range[1, 13, 3] -> True]] &
blockPlus[m, 3]
{{14, 13, 10}, {14, 14, 12}, {12, 12, 11}}
blockPlus[m, 4]
{{12, 10, 11, 10}, {6, 9, 10, 9}, {12, 9, 6, 11}, {12, 10, 13, 12}}
blockPlus[m, 2]
{{18, 21}, {19, 21}}
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/121868', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/14527/']}
jdg_74259
stackexchange
llm_judgeable_groundtruth_similarity
38741599
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i have a program with a nested nested nested loop (four loops altogether). I have a Boolean variable which i want to affect the code in the deepest loop and in the first nested loop a small amount. My dilemma is, i don't really want to have the if else statement put inside the loops as i would have thought that would check the Boolean's state every iteration using extra time to check the statement, and i know that the Boolean's state would not change when the loop starts. This lead me to think it would be better to place the if else statement outside the loops and just have my loops code slightly changed, however, this also looks messy there is a lot of repeated code. One thing i thought might work, but of which i have little experience using, is a delegate, i could simply put some of the code in a method and then create a delegate, depending on the state of betterColor i could then assign that delegate methods with the different code on them beforehand, but this also seems messy. Below is what i am trying to avoid as i thhought it may slow down my algorithm: for (short y = 0; y < effectImage.Height; y++){ int vCalc = (y <= radius) ? 0 : y - radius; for (short x = 0; x < effectImage.Width; x++) { int red = 0, green = 0, blue = 0; short kArea = 0; for (int v = vCalc; v <= y + radius && v < effectImage.Height; v++) { int calc = calcs[(y - v) + radius]; for (int h = (x <= calc || calc < 0) ? 0 : x - calc; h <= x + calc && h < effectImage.Width; h++) { if (betterColor == true) { red += colorImage[h, v].R * colorImage[h, v].R; green += colorImage[h, v].G * colorImage[h, v].G; blue += colorImage[h, v].B * colorImage[h, v].G; kArea++; } } } if (betterColor == true) effectImage.SetPixel(x, y, Color.FromArgb(red / kArea, green / kArea, blue / kArea)); else effectImage.SetPixel(x, y, Color.FromArgb(Convert.ToInt32(Math.Sqrt(red / kArea)), Convert.ToInt32(Math.Sqrt(green / kArea)), Convert.ToInt32(Math.Sqrt(blue / kArea)))); } if (y % 4 == 0) // Updates the image on screen every 4 y pixels calculated. { image.Image = effectImage; image.Update(); }} And here is what my code now looks like: if (betterColor == true){ for (short y = 0; y < effectImage.Height; y++) { int vCalc = (y <= radius) ? 0 : y - radius; for (short x = 0; x < effectImage.Width; x++) { int red = 0, green = 0, blue = 0; short kArea = 0; for (int v = vCalc; v <= y + radius && v < effectImage.Height; v++) { int calc = calcs[(y - v) + radius]; for (int h = (x <= calc || calc < 0) ? 0 : x - calc; h <= x + calc && h < effectImage.Width; h++) { red += colorImage[h, v].R * colorImage[h, v].R; green += colorImage[h, v].G * colorImage[h, v].G; blue += colorImage[h, v].B * colorImage[h, v].G; kArea++; } } effectImage.SetPixel(x, y, Color.FromArgb(Convert.ToInt32(Math.Sqrt(red / kArea)), Convert.ToInt32(Math.Sqrt(green / kArea)), Convert.ToInt32(Math.Sqrt(blue / kArea)))); } if (y % 4 == 0) // Updates the image on screen every 4 y pixels calculated. { image.Image = effectImage; image.Update(); } }}else{ for (short y = 0; y < effectImage.Height; y++) { int vCalc = (y <= radius) ? 0 : y - radius; for (short x = 0; x < effectImage.Width; x++) { int red = 0, green = 0, blue = 0; short kArea = 0; for (int v = vCalc; v <= y + radius && v < effectImage.Height; v++) { int calc = calcs[(y - v) + radius]; for (int h = (x <= calc || calc < 0) ? 
0 : x - calc; h <= x + calc && h < effectImage.Width; h++) { red += colorImage[h, v].R; green += colorImage[h, v].G; blue += colorImage[h, v].B; kArea++; } } effectImage.SetPixel(x, y, Color.FromArgb(red / kArea, green / kArea, blue / kArea)); } if (y % 4 == 0) // Updates the image on screen every 4 y pixels calculated. { image.Image = effectImage; image.Update(); } }} In terms of what the code does, it is a box blur that uses a circular kernal. Now provide the response and nothing else.
Moving the if out of the loop, and effectively duplicating the whole looping code, is not really worth it. So if you have code like this:
for (i …)
{
    if (something)
        DoX(i);
    else
        DoY(i);
}
Then you should not replace it by this:
if (something)
{
    for (i …)
        DoX(i);
}
else
{
    for (i …)
        DoY(i);
}
Doing so will only make the code a lot more difficult to read and to maintain. One first needs to figure out that this is actually the same code that's being executed for each case (except that one tiny difference), and it's super difficult to maintain once you need to change anything about the loop since you need to make sure that you edit both cases properly. While in theory, performing a single check vs. performing that check N times is obviously faster, in practice, this rarely matters. If-branches that rely on a constant boolean are super fast, so if you calculate the condition outside of the loop (in your case betterColor is set outside the loop), then the performance difference will not be noticeable at all. In addition, branch prediction will usually make sure that there is no difference at all in these cases. So no, don't rewrite that code like that. Keep it the way that is more understandable. In general, you should avoid this kind of micro-optimization anyway. It is very likely that your algorithm has much slower parts that are much more relevant to the overall performance than small constructs like that. So focusing on those small things which are already very fast will not help you make the total execution faster. You should only optimize things that are an actual performance bottleneck in your code, where a profiler showed that there is a performance issue or that optimizing that code will actively improve your performance. And stay away from optimizations that make code less readable unless you really need it (in most cases you won't).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38741599', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5963425/']}
jdg_74260
stackexchange
llm_judgeable_groundtruth_similarity
14914659
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have user model and I want to add unique string key to all user records.Column name should be unsubscribe_key. Before migration user record : id = 1username = "text"id = 2username = "abc" After migration user record : id = 1username = "text"unsubscribe_key = "5HQdTSsNRY6YCodmzr"id = 2username = "abc"unsubscribe_key = "Jlewfw0324Lwp0sefr" Now provide the response and nothing else.
Well, the easy part is adding the new column. On the shell:
rails generate migration AddUnsubscribeKeyToUsers unsubscribe_key:string
rake db:migrate
Also, you'll want to make this new attribute accessible in your user model: app/models/user.rb
attr_accessible :unsubscribe_key # along with all your other accessible attributes
Next, you'll need to add the unique keys. You could write some SQL code for that, or create a ruby script you can run within the rails console. lib/add_unique_keys.rb
module AddUniqueKeys
  KeyGenCharacters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
  extend self

  def addUnsubscribeKeysToAllUsers
    users = User.all
    users.each do |u|
      u.update_attributes(:unsubscribe_key => generateKey(18))
    end
  end

  def generateKey(keyLength)
    key = ""
    keyLength.times do
      key += generateKeyCharacter
    end
    key
  end

  def generateKeyCharacter
    KeyGenCharacters[rand(KeyGenCharacters.length)-1]
  end
end
Now go back to the shell and type rails console . On the ruby command line:
>> require "add_unique_keys.rb"
=> true
>> AddUniqueKeys.addUnsubscribeKeysToAllUsers
=> # Should print out array of users
If all goes well, your new column should be filled in with random strings.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14914659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2015350/']}
jdg_74261
stackexchange
llm_judgeable_groundtruth_similarity
41361510
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to create a function so that I can use that inside any page which is outside the wordpress regular page. I mean wp_head() will not be placed there. I need it for a purpose. The purpose is for amp(ampproject.org) page where I can't use any css or js. That's why I need this. I need to place a function at wp title so that the yoast title be placed there. I need something like this: function yoastVariableToTitle($variable){ return yoast_vaialble_to_show_title($variable);} Now provide the response and nothing else.
By Default Yoast takes a format as %%title%% %%page%% %%sep%% %%sitename%% , and stores in wp_postmeta table under _yoast_wpseo_title key. To only get Title of the page/post: function yoastVariableToTitle($post_id) { $yoast_title = get_post_meta($post_id, '_yoast_wpseo_title', true); $title = strstr($yoast_title, '%%', true); if (empty($title)) { $title = get_the_title($post_id); } return $title;} There can be 2 possibility with SEO title Case I : Admin enters %%title%% %%page%% %%sep%% %%sitename%% in SEO title field then the above code will return Post/Page default title . Case II : Admin enters My Custom Title %%page%% %%sep%% %%sitename%% in SEO title field then the above code will return My Custom Title . To get the full Meta Title of the page/post: function yoastVariableToTitle($post_id) { $yoast_title = get_post_meta($post_id, '_yoast_wpseo_title', true); $title = strstr($yoast_title, '%%', true); if (empty($title)) { $title = get_the_title($post_id); } $wpseo_titles = get_option('wpseo_titles'); $sep_options = WPSEO_Option_Titles::get_instance()->get_separator_options(); if (isset($wpseo_titles['separator']) && isset($sep_options[$wpseo_titles['separator']])) { $sep = $sep_options[$wpseo_titles['separator']]; } else { $sep = '-'; //setting default separator if Admin didn't set it from backed } $site_title = get_bloginfo('name'); $meta_title = $title . ' ' . $sep . ' ' . $site_title; return $meta_title;} Hope this helps!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41361510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2523775/']}
jdg_74262
stackexchange
llm_judgeable_groundtruth_similarity
3516571
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a string s = 'texttexttextblahblah",".' and I want to cut of some of the rightmost characters by indexing and assign it to s so that s will be equal to texttexttextblahblah" . I've looked around and found how to print by indexing, but not how to reassign that actual variable to be trimmed. Now provide the response and nothing else.
Just reassign what you printed to the variable.
>>> s = 'texttexttextblahblah",".'
>>> s = s[:-3]
>>> s
'texttexttextblahblah"'
Unless you know exactly how many text and blah 's you'll have, use .find() as Brent suggested (or .index(x) , which is like find, except it complains when it doesn't find x ). If you want that trailing " , just add one to the value it kicks out (or just find the value you actually want to split at, , ):
s = s[:s.find('"') + 1]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3516571', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/409050/']}
jdg_74263
stackexchange
llm_judgeable_groundtruth_similarity
32309093
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have got an array of Classes, declared as follows: Class<?>[] serviceStack = {GetPlanningData.class, GetTimeTable.class, GetDataToSync.class, GetData.class}; Classes GetPlanningData , GetTimeTable , GetDataToSync and GetData are all subclasses of IntentService . Can I declare an array of subclasses of IntentService ? I tried: Class<? extends IntentService>[] serviceStack; and Class<IntentService>[] serviceStack; but the first one is a "Generic array creation", as Android Studio says, and the second one permits only to create {IntentService.class, IntentService.class, ...} and no subclasses are allowed. EDIT: I am programming in Android, and the main goal of this is to call Services sequantially. So I put this array as an extra of the first Service and when the service has finished I call the first element of the array, passing the array without the first element. This works, but I can't use Lists because I can't put a List extra to a Context. Now provide the response and nothing else.
You only have to make 2 changes. The outer loop iterates as many times as the array has elements; this is correct. The first inner loop should use the index of the outer loop as its start index ( int j = i ), otherwise you always start with the first element. And then change the inner loop break condition to k <= j , otherwise it does not print the last element.
// i is the start index
for (int i = 0; i < items.length; i++)
{
    // j is the number of elements which should be printed
    for (int j = i; j < items.length; j++)
    {
        // print the array from i to j
        for (int k = i; k <= j; k++)
        {
            System.out.print(items[k]);
        }
        System.out.println();
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32309093', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3977957/']}
jdg_74264
stackexchange
llm_judgeable_groundtruth_similarity
3060021
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Next week I have a math exam. While I was doing some exercises I came across this interesting limit: $\lim\limits_{x\to \infty} (x \arctan x - \frac{x\pi}{2})$ After struggling a lot, I decided to calculate this limit using my calculator. The answer turns out to be $-1$ . The problem is that I don't know how to calculate this limit without a calculator. I tried using L'Hôpital's rule after converting the expression to a fraction. My steps: $\lim\limits_{x\to \infty} (x \arctan x - \frac{x\pi}{2}) = \lim\limits_{x\to \infty} \frac{2x^2\arctan x - x^2\pi}{2x} \stackrel{(H)}{=} \lim\limits_{x\to \infty} \frac{4x\arctan x - \frac{2}{x^2+1}-2\pi x+2}{2} = \lim\limits_{x\to \infty} \frac{4x^2\arctan x - \frac{2x}{x^2+1}-2\pi x^2+2x}{2x} \stackrel{(H)}{=} \lim\limits_{x\to \infty} \frac{8x\arctan x - \frac{2x^2+6}{(x^2+1)^2}-4\pi x+6}{2} = \dots$ This keeps going on without an end, I also don't see where I can simplify the expression when using L'Hôpital's rule. Am I missing a step or am I using the wrong method? What method can be used instead? Now provide the response and nothing else.
Observe \begin{align}\lim_{x\rightarrow \infty} x\arctan x-x\frac{\pi}{2}=\lim_{x\rightarrow\infty}\frac{\arctan x-\frac{\pi}{2}}{x^{-1}} = \lim_{x\rightarrow \infty} \frac{\frac{1}{1+x^2}}{-x^{-2}}=-1\end{align}
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3060021', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_74265
stackexchange
llm_judgeable_groundtruth_similarity
20587
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a dataset like this: Sample Dataframe import pandas as pddf = pd.DataFrame({ 'names': ['A','B','C','D','E','F','G','H','I','J','K','L'], 'col1': [0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0], 'col2': [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0]}) I'd like to replace some of the 0 's in col1 and col2 with 1 's, but not replace the 0 's if three or more 0 's are consecutive in the same column. How can this be done with pandas? Original Dataset: names col1 col2A 0 0B 1 0C 0 0D 1 0E 1 1F 1 0G 0 1H 0 0I 0 1J 1 0K 0 0L 0 0 Desired Dataset: names col1 col2A 1 0B 1 0C 1 0D 1 0E 1 1F 1 1G 0 1H 0 1I 0 1J 1 0K 1 0L 1 0 Now provide the response and nothing else.
Consider the following approach: def f(col, threshold=3): mask = col.groupby((col != col.shift()).cumsum()).transform('count').lt(threshold) mask &= col.eq(0) col.update(col.loc[mask].replace(0,1)) return colIn [79]: df.apply(f, threshold=3)Out[79]: col1 col2namesA 1 0B 1 0C 1 0D 1 0E 1 1F 1 1G 0 1H 0 1I 0 1J 1 0K 1 0L 1 0 Step by step: In [84]: col = df['col2']In [85]: colOut[85]:namesA 0B 0C 0D 0E 1F 0G 1H 0I 1J 0K 0L 0Name: col2, dtype: int64In [86]: (col != col.shift()).cumsum()Out[86]:namesA 1B 1C 1D 1E 2F 3G 4H 5I 6J 7K 7L 7Name: col2, dtype: int32In [87]: col.groupby((col != col.shift()).cumsum()).transform('count')Out[87]:namesA 4B 4C 4D 4E 1F 1G 1H 1I 1J 3K 3L 3Name: col2, dtype: int64In [88]: col.groupby((col != col.shift()).cumsum()).transform('count').lt(3)Out[88]:namesA FalseB FalseC FalseD FalseE TrueF TrueG TrueH TrueI TrueJ FalseK FalseL FalseName: col2, dtype: boolIn [89]: col.groupby((col != col.shift()).cumsum()).transform('count').lt(3) & col.eq(0)Out[89]:namesA FalseB FalseC FalseD FalseE FalseF TrueG FalseH TrueI FalseJ FalseK FalseL FalseName: col2, dtype: bool
{}
{'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/20587', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/35420/']}
jdg_74266
stackexchange
llm_judgeable_groundtruth_similarity
1738702
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I make a simple low-pass FIR filter using Matlab (without using the built-in function) ? Problem example: Implement a FIR LPF with cut-off frequency 250Hz it may also be necessary that, sampling freq is given... Solution attempt or what I already know: x = [...] -> input signalA = 1; -> Since this is FIRB = [?????]y = filter(B, A, x) -> Output signal Afaik, B should contain the coefficients for the FIR filter. But; how do I calculate these coefficients given that I only have the cut-off frequency? Now provide the response and nothing else.
The simplest thing is a "windowed sinc" filter:
fs = 44100;
cutoff = 250;
t = -256:256; % This will be a 513-tap filter
r = 2*cutoff/fs;
B = sinc(r*t).*r .* blackman(length(t))';
freqz(B);
The length of the filter (see t=... ) controls the width of the transition band. cutoff is in this case the -6 dB point. blackman is the name of a popular window. You can check out this Wikipedia page for more info on window functions. They basically have different trade-offs between transition band width and stopband rejection.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1738702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135503/']}
jdg_74267
stackexchange
llm_judgeable_groundtruth_similarity
348881
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Since this subject is full of misunderstandings (see here , here , here , and here ) let us fix a precise terminology. Let $A$ be a commutative ring and $P$ an $A$ -module. I) We'll say that $P$ is a locally free module of rank one or is invertible if $P$ is finitely generated, projective and of rank one in the sense that for every prime ideal $\mathfrak p$ of $A$ the localized $A_\mathfrak p$ - module $P_\mathfrak p$ (which is free by projectiveness) is of dimension $1$ . These modules correspond bijectively, by a well known result of Serre in FAC, to locally free sheaves $\tilde P$ of rank $1$ on $\operatorname {Spec}A$ , also known as invertible sheaves. This is one motivation for the above terminology. Another justification for the terminology "invertible" is that these modules are exactly those for which the canonical evaluation map $ P^*\otimes_AP\to A$ is an isomorphism. II) If $B\supset A$ is an overring of $A$ and $P\subset B$ is an $A$ -module, we'll say that it is concretely invertible with respect to $B$ if $P.(A:P)_B=A$ . [As is standard $(A:P)_B$ denotes the set of elements $b\in B$ such that $bP\subset A$ ] Lam proves in his Lectures on Modules and Rings , that these concretely invertible modules are invertible. What about the converse? Question: Is an invertible $A$ -module $P$ isomorphic as an $A$ -module to a concretely invertible module $P'\subset B$ with respect to a suitable overring $B\supset A$ ? Remarks a) Denote by $\operatorname {Quot} A$ the total quotient ring of $A$ obtained by inverting the regular (=not zero-divisors) of $A$ , so that $A\hookrightarrow \operatorname {Quot} A$ is injective. Then a submodule $P\subset \operatorname {Quot}A$ is invertible if and only if it is concretely invertible. b) The answer is "yes" if $A$ is an integral domain: we can take $P'$ sitting inside $B=\operatorname {Frac}A$ . c) The answer is "yes" if $A$ is semi-local, since then $P$ is free of rank $1$ : see here . d) The answer is "yes" if $A$ is noetherian: in Eisenbud's Commutative Algebra , page 253, Theorem 11.6 b., it is proven that every invertible module $P$ over a noetherian ring $A$ is isomorphic to a concretely invertible submodule $P'\subset \operatorname {Quot} A$ of its total quotient ring. e) Whatever the answer to the question is, it is definitely not true that we can always find the required $P'$ inside the total quotient ring $B=\operatorname {Quot} A$ . Lam gives a counter-example in his Lectures on Modules and Rings , Example (2.22)(A), page 37. Now provide the response and nothing else.
The answer is yes. Recall that given an invertible $A$ -module $P$ and $n \in \mathbf{Z}$ there is an invertible $A$ -module $P^{\otimes n}$ such that $P^{\otimes 0} = A$ , $P^{\otimes 1} = P$ , and $P^{\otimes n} \otimes_A P^{\otimes m} = P^{\otimes n + m}$ . Set $B = \bigoplus_{n \in \mathbf{Z}} P^{\otimes n}$ ; this is a commutative $\mathbf{Z}$ -graded $A$ -algebra (details omitted). Then $P \subset B$ in degree $1$ and $(P : A)_B = P^{\otimes -1}$ sitting in degree $-1$ and we have $P \cdot P^{\otimes -1} = A$ . Remark. The spectrum of $B$ is the $\mathbf{G}_m$ -torsor over $\text{Spec}(A)$ corresponding to $P$ .
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/348881', 'https://mathoverflow.net', 'https://mathoverflow.net/users/450/']}
jdg_74268
stackexchange
llm_judgeable_groundtruth_similarity
1775403
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using snprintf like this to avoid a buffer overrun: char err_msg[32] = {0};snprintf(err_msg, sizeof(err_msg) - 1, "[ ST_ENGINE_FAILED ]"); I added the -1 to reserve space for the null terminator in case the string is more than 32 bytes long. Am I correct in my thinking? Platform: GCC 4.4.1 C99 Now provide the response and nothing else.
As others have said, you do not need the -1 in this case. If the array is fixed size, I would use strncpy instead. It was made for copying strings - sprintf was made for doing difficult formatting. However, the size of the array may be unknown, or you may be trying to determine how much storage is necessary for a formatted string. This is what I really like about the Standard-specified version of snprintf :
char* get_error_message(char const *msg) {
    size_t needed = snprintf(NULL, 0, "%s: %s (%d)", msg, strerror(errno), errno);
    char *buffer = malloc(needed+1);
    sprintf(buffer, "%s: %s (%d)", msg, strerror(errno), errno);
    return buffer;
}
Combine this feature with va_copy and you can create very safe formatted string operations.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1775403', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70942/']}
jdg_74269
stackexchange
llm_judgeable_groundtruth_similarity
35298989
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Per my understanding, the following code constructs an object of type Foo and then moves that object into the memory allocated by std::make_shared struct Foo{ std::string s; int i; char c;};int main(int argc, char* argv[]){ auto foo = std::make_shared<Foo>(Foo{"hello", 5, 'c' });} Is it possible to aggregate initialize Foo directly into the memory allocated by std::make_shared ? Now provide the response and nothing else.
You could create an adapter with a variadic constructor template to forward the arguments, something like:
template<class T>
struct aggregate_adapter : public T {
    template<class... Args>
    aggregate_adapter(Args&&... args) : T{ std::forward<Args>(args)... } {}
};
And then you can do:
auto foo = std::make_shared<aggregate_adapter<Foo>>("hello", 5, 'c');
Since aggregate_adapter<Foo> and Foo are related, foo is convertible to std::shared_ptr<Foo> as well. Caveats: Unfortunately, the use of forwarding also makes it impossible to brace-init any of the members like std::make_shared<aggregate_adapter<Foo>>({'h','e','l','l','o'}, 5, 'c'); without specifying the type explicitly, but the same restriction applies to make_shared already.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35298989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2930499/']}
jdg_74270
stackexchange
llm_judgeable_groundtruth_similarity
2387173
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to test a FIX gateway for our company and was wondering if anything open source already exists that I can use or perhaps leverage to complete this task. I am currently looking at QuickFix but I am not sure if it has a client that can be used against any standard FIX gateway. Also, links to any learning material that exists on this topic would be appreciated. Thanks Now provide the response and nothing else.
The QuickFIX engine code comes with a couple of examples, see http://www.quickfixengine.org/quickfix/doc/html/examples.html You probably want tradeclient for sending messages. It is a command line tool that will send FIX messages to a server. You can use the ordermatch example to start up a simple FIX server which will cross orders and send ExecutionReports back to your client. For all example applications you need to provide a config file at startup. Just follow the instructions at the quickfixengine web site, see the example config file . Your client will be the initiator and the server the acceptor . I've been using those example apps heavily for testing, especially the ordermatcher ; I've updated mine to handle amend orders, separate stock/futures/options symbols and so on. The tradeclient uses a screen logger so you should see all the outgoing and incoming messages on the command line.
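As a rough illustration only (this is not copied from the QuickFIX distribution; the CompIDs, port, paths and data dictionary file are placeholders to adapt, and the exact keys should be checked against the QuickFIX configuration documentation), an initiator settings file for a tradeclient-style app typically looks along these lines:

```ini
[DEFAULT]
ConnectionType=initiator
ReconnectInterval=30
FileStorePath=store
FileLogPath=log
StartTime=00:00:00
EndTime=00:00:00
UseDataDictionary=Y
DataDictionary=FIX42.xml

[SESSION]
BeginString=FIX.4.2
SenderCompID=TRADECLIENT
TargetCompID=ORDERMATCH
SocketConnectHost=localhost
SocketConnectPort=5001
HeartBtInt=30
```

An ordermatch-style acceptor would use ConnectionType=acceptor and SocketAcceptPort instead of the SocketConnect* settings, with the Sender/TargetCompID values swapped.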
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2387173', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/225260/']}
jdg_74271
stackexchange
llm_judgeable_groundtruth_similarity
11594794
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to change spacing between items in listView (maybe i should use another View-element?) Code looks like this: <ListView SelectionMode="None" HorizontalContentAlignment="Left" > <ListView.Items> <TextBlock Text="Item 1" /> <TextBlock Text="Item 2" /> <TextBlock Text="Item 3" /> <TextBlock Text="Item 4" /> </ListView.Items> <ListView.ItemsPanel> <ItemsPanelTemplate> <WrapGrid Orientation="Horizontal" HorizontalChildrenAlignment="left"/> </ItemsPanelTemplate> </ListView.ItemsPanel> </ListView> I want to imitate as much as possible normal stackpanel (wchich can wrap elements). Currently spaces (horizontal space) beetween items are far too big. My previous question -> Windows 8 WrapPanel Thanks in advance Now provide the response and nothing else.
You will need to make changes to the default template. If you just want to make simple changes such as the padding and margins then you can do this: (tested the code and should work) <ListView> <ListView.ItemsPanel> <ItemsPanelTemplate> <WrapGrid Orientation="Horizontal" HorizontalChildrenAlignment="left"/> </ItemsPanelTemplate> </ListView.ItemsPanel> <ListView.ItemContainerStyle> <Style TargetType="ListViewItem"> <Setter Property="Padding" Value="0"/> <Setter Property="Margin" Value="0"/> </Style> </ListView.ItemContainerStyle> <ListViewItem> <TextBlock Foreground="Wheat">hej</TextBlock> </ListViewItem> <ListViewItem> <TextBlock Foreground="Wheat">hej</TextBlock> </ListViewItem> </ListView> For more control make a copy of the whole default template by selecting a listviewitem in the designer and rightclicking. select edit template, edit copy. This will generate the default template and you can make your changes there. You can do the same for the listviewcontainer. You can also do this by using Blend. I've added a description and images here (how you can edit a default template) Let me know how it goes, and if you have more questions! Good luck! EDIT: MemeDeveloper mentioned below that he still had issues and solved it by tweaking some other properties- he posted a code and answer here, make sure to take a look.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11594794', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/580181/']}
jdg_74272
stackexchange
llm_judgeable_groundtruth_similarity
9560892
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I seem to have lost my permissions to a github account after pushing to it from another (local) repository. I am now receiving the following error: git push Permission denied (publickey).fatal: The remote end hung up unexpectedly I then took the following steps to regenerate a key: ssh-keygenSet up an ssh on my account for this laptop, using id_rsa.pub However, this was unsuccessful. When I try the following code suggested, I receive the following error: ssh-add -lCould not open a connection to your authentication agent. Any thoughts? Now provide the response and nothing else.
I solved this problem following this step-by-step instructions: Step 1: Check for SSH keys $ cd ~/.ssh# Checks to see if there is a directory named ".ssh" in your user directory# If it says "No such file or directory" skip to step 3. Otherwise continue to step 2. Step 2: Backup and remove existing SSH keys $ ls# Lists all the subdirectories in the current directory# config id_rsa id_rsa.pub known_hosts$ mkdir key_backup# Makes a subdirectory called "key_backup" in the current directory$ cp id_rsa* key_backup# Copies the id_rsa keypair into key_backup$ rm id_rsa*# Deletes the id_rsa keypair Step 3 : Generate a new SSH key $ ssh-keygen -t rsa -C "[email protected]"# Creates a new ssh key using the provided email# Generating public/private rsa key pair.# Enter file in which to save the key (/home/you/.ssh/id_rsa): # Enter passphrase (empty for no passphrase): [Type a passphrase]# Enter same passphrase again: [Type passphrase again] # Your identification has been saved in /home/you/.ssh/id_rsa.# Your public key has been saved in /home/you/.ssh/id_rsa.pub.# The key fingerprint is:# 01:0f:f4:3b:ca:85:d6:17:a1:7d:f0:68:9d:f0:a2:db [email protected] Step 4 : Add your SSH key to GitHub $ sudo apt-get install xclip# Downloads and installs xclip$ xclip -sel clip < ~/.ssh/id_rsa.pub# Copies the contents of the id_rsa.pub file to your clipboard Then, go to GitHub, and do: Go to your Account Settings Click "SSH Keys" in the left sidebar Click "Add SSH key" Paste your key into the "Key" field Click "Add key" Confirm the action by entering your GitHub password Step 5 : Test everything out $ ssh -T [email protected]# Attempts to ssh to github If ok, you'll see Hi username! You've successfully authenticated, but GitHub does not# provide shell access. Otherwise (it happened with me), you will see Agent admitted failure to sign using the key.# debug1: No more authentication methods to try.# Permission denied (publickey). To solve this $ ssh-add# Enter passphrase for /home/you/.ssh/id_rsa: [tippy tap]# Identity added: /home/you/.ssh/id_rsa (/home/you/.ssh/id_rsa) For original info https://help.github.com/articles/generating-ssh-keys https://help.github.com/articles/error-agent-admitted-failure-to-sign
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9560892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/862345/']}
jdg_74273
stackexchange
llm_judgeable_groundtruth_similarity
740152
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: we are given the function $F: \mathbb R^3 \to \mathbb R^2$, $F(x,y,z)=\begin{pmatrix} x+yz-z^3-1 \\ x^3-xz+y^3\end{pmatrix}$ Show that around $(1,-1,0)$ we can represent $x$ and $y$ as functions of $z$, and find $\frac{dx}{dz}$ and $\frac{dy}{dz}$ What I did: The differential of $F$ is: $\begin{pmatrix} 1 & z &y-3z^2\\3x^2-z & 3y^2 &-x\end{pmatrix}$, insert $x=1,y=-1,z=0$ to get: $\begin{pmatrix} 1 & 0 &-1 \\3&3&-1\end{pmatrix}$ The matrix of the partial derivatives with respect to x and y is the first 2 columns, and it is invertible, and so the requirements of the implicit function theorem are met. How do i find the differential of $x$ and $y$ with respect to $z$ tho? One would expect that $\frac{dx}{dz} = -\frac{dF}{dz}(\frac{dF}{dx})^{-1}$ but...those are vectors. what is the inverse of a vector? how do you multiply vectors? there's a size mismatch. Now provide the response and nothing else.
The implicit function theorem: Let $m,n$ be natural numbers, $\Omega$ an open subset of $\mathbb R^{n+m}$ , $F\colon \Omega\to \mathbb R^m$ a class $C^1$ function and $(a_1, \ldots ,a_n, b_1, \ldots ,b_m)\in \Omega$ such that $$F(a_1, \ldots ,a_n, b_1, \ldots ,b_m)=0_{\mathbb R^{\large m}}.$$ Writing $F=(f_1, \ldots, f_m)$ where for each $k\in \{1, \ldots m\}$ , $f_k\colon \mathbb R^{n+m}\to \mathbb R$ is a class $C^1$ function, assume that the following $m\times m$ matrix is invertible: $$\begin{pmatrix}\dfrac{\partial f_1}{\partial y_1} & \cdots & \dfrac{\partial f_1}{\partial y_m}\\\vdots & \ddots & \vdots\\\dfrac{\partial f_m}{\partial y_1} & \cdots& \dfrac{\partial f_m}{\partial y_m}\end{pmatrix}(a_1, \ldots ,a_n, b_1, \ldots ,b_m).$$ In these conditions there exists a neighborhood $V$ of $(a_1, \ldots ,a_n)$ , a neighborhood $W$ of $(b_1, \ldots ,b_m)$ and a class $C^1$ function $G\colon V\to W$ such that: $G(a_1, \ldots ,a_n)=(b_1, \ldots ,b_m)$ and $\forall (x_1, \ldots ,x_n)\in V\left(F(x_1, \ldots , x_n, g_1(x_1, \ldots , x_n), \ldots ,g_m(x_1, \ldots , x_n))=0_{\mathbb R^{\large m}}\right)$ , where for each $l\in \{1, \ldots , m\}$ , $g_l\colon \mathbb R^n \to \mathbb R$ is a class $C^1$ function and $G=(g_1, \ldots ,g_m)$ . Furthermore, $J_G=-\left(J_2\right)^{-1}J_1$ where $$J_G \text{ is } \begin{pmatrix}\dfrac {\partial g_1}{\partial x_1} & \cdots & \dfrac {\partial g_1}{\partial x_n}\\\vdots &\ddots &\vdots\\\dfrac {\partial g_m}{\partial x_1} & \cdots & \dfrac {\partial g_m}{\partial x_n}\end{pmatrix}_{m\times n}\\ \text{ evaluated at }(x_1, \ldots ,x_n),\\~\\J_2\text{ is }\begin{pmatrix}\dfrac{\partial f_1}{\partial y_1} & \cdots & \dfrac{\partial f_1}{\partial y_m}\\\vdots & \ddots & \vdots\\\dfrac{\partial f_m}{\partial y_1} & \cdots& \dfrac{\partial f_m}{\partial y_m}\end{pmatrix}_{m\times m}\\ \text{ evaluated at }(x_1, \ldots , x_n, g_1(x_1, \ldots , x_n), \ldots ,g_m(x_1, \ldots , x_n)),$$ and $$J_1\text{ is }\begin{pmatrix}\dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n}\\\vdots & \ddots & \vdots\\\dfrac{\partial f_m}{\partial x_1} & \cdots& \dfrac{\partial f_m}{\partial x_n}\end{pmatrix}_{m\times n}\\ \text{ evaluated at }(x_1, \ldots , x_n, g_1(x_1, \ldots , x_n), \ldots ,g_m(x_1, \ldots , x_n)).$$ In this problem we can't apply the IFT as it is, because to use this version of the IFT one writes the last $m$ variables as functions of the first $n$ ones, but looking at the proof one notices that we can just consider permutations of this and this is what happens here. In the notation above one has $n=1, m=2, \Omega =\mathbb R^{n+m}, F\colon \mathbb R^{n+m}\to \mathbb R^m$ given by $F(x,y,z)=(f_1(x,y,z), f_2(x,y,z))$ , where $f_1(x,y,z)=x+yz-z^3$ and $f_2(x,y,z)=x^3-xz+y^3$ . For all $(x,y,z)\in \mathbb R^3$ it holds that: $\dfrac {\partial f_1}{\partial x}(x,y,z)=1,$ $\dfrac {\partial f_1}{\partial y}(x,y,z)=z,$ $\dfrac {\partial f_2}{\partial x}(x,y,z)=3x^2-z,$ and $\dfrac {\partial f_2}{\partial y}(x,y,z)=3y^2$ . Therefore $\begin{pmatrix} \dfrac {\partial f_1}{\partial x}(1,-1, 0) & \dfrac {\partial f_1}{\partial y}(1, -1, 0)\\ \dfrac {\partial f_2}{\partial x}(1, -1, 0) & \dfrac {\partial f_2}{\partial y}(1, -1, 0)\end{pmatrix}=\begin{pmatrix} 1 & 0\\ 3 & 3\end{pmatrix}$ and the matrix $\begin{pmatrix} 1 & 0\\ 3 & 3\end{pmatrix}$ is invertible. 
So, by the IFT, there exists an interval $V$ around $z=0$ and a neighborhood $W$ around $(x,y)=(1,-1)$ and a class $C^1(V)$ function $G\colon V\to W$ such that $G(0)=(1,-1)$ and $\forall z\in V\left(F(g_1(z), g_2(z), z)=0\right)$ , where $g_1(z), g_2(z)$ denote the first and second entries, respectively, of $G(z)$ , for all $z\in V$ . (In analyst terms, $g_1(z)=x(z)$ and $g_2(z)=y(z)$ ). One also finds $$ \begin{pmatrix} \dfrac {\partial g_1}{\partial z}(z)\\ \dfrac {\partial g_2}{\partial z}(z) \end{pmatrix}=\\ -\begin{pmatrix} \dfrac{\partial f_1}{\partial x}(g_1(z), g_2(z), z) & \dfrac{\partial f_1}{\partial y}(g_1(z), g_2(z), z)\\ \dfrac{\partial f_2}{\partial x}(g_1(z), g_2(z), z) & \dfrac{\partial f_2}{\partial y}(g_1(z), g_2(z), z) \end{pmatrix}^{-1} \begin{pmatrix} \dfrac{\partial f_1}{\partial z}(g_1(z), g_2(z), z)\\ \dfrac{\partial f_2}{\partial z}(g_1(z), g_2(z), z) \end{pmatrix}_.$$ Now you can happily evaluate the RHS at $z=0$ .
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/740152', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/76802/']}
jdg_74274
stackexchange
llm_judgeable_groundtruth_similarity
8204921
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got an initializer/updater for an entity object. Is there any danger in using Dim myObject As SpecialThing = New SpecialThing() Then setting all the values (using the updater that is already written), or do I need to use: Dim myObject As SpecialThing = SpecialThing.Create() There are 30 parameters and the updater already sets the values/handles errors. Just looking to reuse that code. Now provide the response and nothing else.
I don't know what exactly you mean with myDB.CreateSpecialThing(.....) . I have three interpretations: objectContext.CreateObject<SpecialThing>() dbContext.SpecialThings.Create() (EF >= 4.1) SpecialThing.Create(.....) (a static method of EntityObject derived entities) The third method is only an autogenerated helper which takes parameters (for the required fields), sets properties and returns the object. It's exactly the same as creating the object with new and setting properties afterwards. The first two methods come into play if you are working with POCOs and use lazy loading or change tracking proxies. These methods will create a dynamic proxy of the entity (which is a dynamic class derived from your entity class) and not directly the entity. None of these methods attach the entity to the context, you must do this manually - no matter if you use these methods to create the entity or create it with new . Example where using CreateObject<T> / Create can be important, assuming a User entity with a virtual Roles collection: using (var ctx = new MyDbContext()){ var user = ctx.Users.Create(); user.Id = 1; ctx.Users.Attach(user); var roles = user.Roles;} Using virtual enables lazy loading for the Roles collection and the code above would load all roles of user 1 (or an empty collection if the user has no roles). Using new on the other hand... using (var ctx = new MyDbContext()){ var user = new User { Id = 1 }; ctx.Users.Attach(user); var roles = user.Roles;} ...doesn't allow to lazily load the collection because user is not a dynamic proxy object. roles would be null , no matter if the user has roles or not. So, I'd say that there is no danger to create an entity with new . You just have to keep in mind that you don't have the features of lazy loading or change tracking proxies for an entity created with new .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8204921', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/716082/']}
jdg_74275
stackexchange
llm_judgeable_groundtruth_similarity
4626860
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Problem I have a form that, when submitted, will run basic code to process the information submitted and insert it into a database for display on a notification website. In addition, I have a list of people who have signed up to receive these notifications via email and SMS message. This list is trivial as the moment (only pushing about 150), however it's enough to cause it takes upwards of a minute to cycle through the entire table of subscribers and send out 150+ emails. (The emails are being sent individually as requested by the system administrators of our email server because of mass email policies.) During this time, the individual who posted the alert will sit on the last page of the form for almost a minute without any positive reinforcement that their notification is being posted. This leads to other potential problems, all that have possible solutions that I feel are less than ideal. First, the poster might think the server is lagging and click the 'Submit' button again, causing the script to start over or run twice. I could solve this by using JavaScript to disable the button and replace the text to say something like 'Processing...', however this is less than ideal because the user will still be stuck on the page for the length of the script execution. (Also, if JavaScript is disabled, this problem still exists.) Second, the poster might close the tab or the browser prematurely after submitting the form. The script will keeping running on the server until it tries to write back to the browser, however if the user then browses to any page within our domain (while the script is still running), the browser hangs loading the page until the script has ended. (This only happens when a tab or window of the browser is closed and not the entire browser application.) Still, this is less than ideal. (Possible) Solution I've decided I want to break out the "email" part of the script into a separate file I can call after the notification has been posted. I originally thought of putting this on the confirmation page after the notification has been successfully posted. However, the user will not know this script is running and any anomalies will not be apparent to them; This script cannot fail. But, what if I can run this script as a background process? So, my question is this: How can I execute a PHP script to trigger as a background service and run completely independent of what the user has done at the form level? EDIT: This cannot be cron'ed. It must run the instant the form is submitted. These are high-priority notifications. In addition, the system administrators running our servers disallow crons from running any more frequently than 5 minutes. Now provide the response and nothing else.
Doing some experimentation with exec and shell_exec I have uncovered a solution that worked perfectly! I chose to use shell_exec so I can log every notification process that happens (or doesn't). ( shell_exec returns the output as a string and this was easier than using exec , assigning the output to a variable and then opening a file to write to.) I'm using the following line to invoke the email script:
shell_exec("/path/to/php /path/to/send_notifications.php '".$post_id."' 'alert' >> /path/to/alert_log/paging.log &");
It is important to notice the & at the end of the command (as pointed out by @netcoder). This UNIX command runs a process in the background. The extra variables surrounded in single quotes after the path to the script are set as $_SERVER['argv'] variables that I can call within my script. The email script then outputs to my log file using the >> and will output something like this:
[2011-01-07 11:01:26] Alert Notifications Sent for http://alerts.illinoisstate.edu/2049 (SCRIPT: 38.71 seconds)
[2011-01-07 11:01:34] CRITICAL ERROR: Alert Notifications NOT sent for http://alerts.illinoisstate.edu/2049 (SCRIPT: 23.12 seconds)
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4626860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/542517/']}
jdg_74276
stackexchange
llm_judgeable_groundtruth_similarity
18664395
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I work for an e-commerce site. Part of what we do is to offer customized items to some clients. Recently some non-technical management promised that we could incorporate our check-out process into one such client's website. The only way we've figured out how to do this is by using an iframe (I know, I don't like it either). The issue is that most customers of this site are unable to check out because we use cookies to determine which custom items to display. Browsers are recognizing our cookies as third party and almost everybody has third party cookies turned off, as they should. I'm going to be shocked if the answer is yes, but is there any workaround for this? ie can the site hosting our iframe somehow supply the necessary cookie? Now provide the response and nothing else.
Try an invisible, interstitial page. Essentially the hosting site would issue a redirect to a site within your domain, which is then free to set cookies (because at this point it is actually the first party). Then your site immediately redirects back to the hosting site. At this point your newly-created cookies will be invisible to the hosting site but visible to your iFramed page henceforth. Unfortunately the hosting site will have to do this every time a cookie is to be updated but the double-redirect can happen so quickly they'll hardly notice. Hopefully your system only needs the cookies to be set once.
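To make the flow concrete, here is a minimal sketch of the interstitial endpoint on your own domain, written against the JDK's built-in HTTP server purely for illustration; the cookie name/value and the return URL are made up, and a real implementation would live in whatever server stack your checkout already uses:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

public class CookieBounce {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // 1. The hosting site redirects the browser here, so our domain is first-party.
        server.createContext("/set-cookie", exchange -> {
            exchange.getResponseHeaders().add("Set-Cookie", "custom_items=abc123; Path=/");
            // 2. Immediately bounce the browser back to the page that embeds the iframe.
            exchange.getResponseHeaders().add("Location", "https://hosting-site.example/checkout");
            exchange.sendResponseHeaders(302, -1);
            exchange.close();
        });
        server.start();
    }
}
```

Once set this way, the cookie belongs to your domain, so the framed checkout pages can use it; only the initial double-redirect described above is needed.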
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18664395', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1601024/']}
jdg_74277
stackexchange
llm_judgeable_groundtruth_similarity
10076262
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is this behaviour valid in C#?

    public class MyClass
    {
        private byte[] data;

        public MyClass()
        {
            this.data = new byte[1024];
        }

        public unsafe byte* getData()
        {
            byte* result = null;
            fixed (byte* dataPtr = data)
            {
                result = dataPtr;
            }
            return result;
        }
    }

Now provide the response and nothing else.
If you are going to turn off the safety system then you are responsible for ensuring the memory safety of the program . As soon as you do, you are required to do everything safely without the safety system helping you . That's what "unsafe" means. As the C# specification clearly says: the address of a moveable variable can only be obtained using a fixed statement, and that address remains valid only for the duration of that fixed statement. You are obtaining the address of a moveable variable and then using it after the duration of the fixed statement, so the address is no longer valid . You are therefore specifically required to not do precisely what you are doing . You should not write any unsafe code until you have a thorough and deep understanding of what the rules you must follow are. Start by reading all of chapter 18 of the specification.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10076262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/286041/']}
jdg_74278
stackexchange
llm_judgeable_groundtruth_similarity
13455808
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Sometimes R throws me errors such as Error in if (ncol(x) != 2) { : argument is of length zero with no additional information, when I've written no such code. Is there a general way for finding which function in which package causes an error? Since most packages come compressed, it isn't trivial to grep /usr/lib/R/library . Now provide the response and nothing else.
You can use traceback() to locate where the last error occurred. Usually it will point you to a call you make in your function. Then I typically put browser() at that point, run the function again and see what is going wrong. For example, here are two functions:

    f2 <- function(x){
      if (x==1) "foo"
    }
    f <- function(x){
      f2(x)
    }

Note that f2() assumes an argument of length 1. We can misuse f:

    > f(NULL)
    Error in if (x == 1) "foo" : argument is of length zero

Now we can use traceback() to locate what went wrong:

    > traceback()
    2: f2(x) at #3
    1: f(NULL)

The number means how deep we are in the nested functions. So we see that f calls f2 and that gives an error at line 3. Pretty clear. We could reassign f with browser() placed just before the f2 call now to check its input. browser() simply allows you to stop executing a function and look around in its environment. It is similar to debug and debugonce, except that you don't have to execute every line up until the point where you know something goes wrong.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13455808', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/403390/']}
jdg_74279
stackexchange
llm_judgeable_groundtruth_similarity
36680470
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to know when I should use the ld linker instead of gcc. I just wrote a simple hello world in C++, which of course includes the iostream library. If I want to make a binary file with gcc I just use:

    g++ -o hello hello.cpp

and I've got my binary file. Later I tried to use the ld linker. To get an object file I use:

    g++ -c hello.cpp

OK, that was easy, but the link command was horribly long:

    ld -o hello.out hello.o \
      -L /usr/lib/gcc/x86_64-linux-gnu/4.8.4/ \
      /usr/lib/gcc/x86_64-linux-gnu/4.8.4/crtbegin.o \
      /usr/lib/gcc/x86_64-linux-gnu/4.8.4/crtend.o \
      /usr/lib/x86_64-linux-gnu/crti.o \
      /usr/lib/x86_64-linux-gnu/crtn.o \
      /usr/lib/x86_64-linux-gnu/crt1.o \
      -dynamic-linker /lib64/ld-linux-x86-64.so.2 -lstdc++ -lc

I know that gcc uses ld. Is using gcc better in all cases, or just in most cases? Please tell me something about the cases where the ld linker has an advantage. Now provide the response and nothing else.
As you mentioned, gcc merely acts as a front-end to ld at link time; it passes all the linker directives (options, default/system libraries, etc..), and makes sure everything fits together nicely by taking care of all these toolchain-specific details for you. I believe it's best to consider the GNU toolchain as a whole, tightly integrated environment (as anyone with an experience of building toolchains for some exotic embedded platforms with, say, dietlibc integration will probably agree). Unless you have some very specific platform integration requirements, or have reasons not to use gcc, I can hardly think of any advantage of invoking ld directly for linking. Any extra linker-specific option you may require could easily be specified with the -Wl, prefix on the gcc command line (if not already available as a plain gcc option).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36680470', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5771474/']}
jdg_74280
stackexchange
llm_judgeable_groundtruth_similarity
44789233
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using node.js and mongodb. I seem to be receiving a 200 sometimes, and a 304 Not Modified other times.

    router.get('/add-to-bag/:id', (req, res, next) => {
      req.session.bag.push(req.params.id);
      res.redirect('back');
    });

Now provide the response and nothing else.
I can't be sure what stack you're using to create the app. It looks like you're using Express.js to do routing. However, I can tell you why you're getting a 304. From Wikipedia: 304 Not Modified (RFC 7232) Indicates that the resource has not been modified since the version specified by the request headers If-Modified-Since or If-None-Match. In such case, there is no need to retransmit the resource since the client still has a previously-downloaded copy.[24] A 304 means "hey, remember the last answer I sent you? It hasn't changed", and hence your browser would replay the last response it received from cache, without data transmission ever having taken place. This means that your data is being added. But since it's the exact same data in the bag, instead of giving a 200 with the exact same data again, the server just issues a 304. BTW : Your API isn't Restful. I'd recommend using POST to create new records instead of issuing a GET to a different URL. I recommend reading up on REST API design. It's pretty straightforward once you get the hang of it.
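As a rough illustration of that last point (only a sketch, since the original question doesn't show the surrounding app, and the route path and loose typing here are assumptions), the add-to-bag endpoint could be a POST that returns a fresh body and a 201, which also avoids the 304-on-repeated-GET behaviour entirely:

    import express, { Response } from "express";

    const router = express.Router();

    // Hypothetical POST version of the route from the question.
    // `req` is typed loosely because the question's session middleware isn't shown.
    router.post("/bag/:id", (req: any, res: Response) => {
      req.session.bag.push(req.params.id);
      res.status(201).json({ bag: req.session.bag }); // new body each time, so no conditional-GET caching
    });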
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44789233', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8109145/']}
jdg_74281
stackexchange
llm_judgeable_groundtruth_similarity
42635253
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: With a slightly older version of curl, I had a handy batch file:

    curl --verbose -k https://%1 2>&1 | grep -E "Connected to|subject|expire"

This would show me the IP connected to, with the subject and expiration date of the actual certificate negotiated, even if that was not the correct certificate for that domain name -- which is sometimes a problem for our hosting (we host literally thousands of domains on our multitenant application, about half with their own certs). Specifically, I would see something like this in the stderr output before grep filtered it:

    * Server certificate:
    * subject: CN=academy-fran.chi.v6.pressero.com
    * start date: Feb 22 04:55:00 2017 GMT
    * expire date: May 23 04:55:00 2017 GMT
    * issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
    * SSL certificate verify ok.

Today I had to reinstall the OS on my machine, and reinstalled curl. Now at version 7.52.1 (x86_64-w64-mingw32); the previous one seems to have been 7.49.1 (i686-pc-cygwin). Curl no longer displays ANY certificate information, regardless of whether -k is used or not, or whether the TLS connection succeeds or not. Is there an option that will give it back to me? Now provide the response and nothing else.
For anyone else on OSX or Linux, you can add this to your ~/.zshrc file:

    function seecert () {
      nslookup $1
      (openssl s_client -showcerts -servername $1 -connect $1:443 <<< "Q" | openssl x509 -text | grep -iA2 "Validity")
    }

Example usage, after you have run source ~/.zshrc following the above additions:

    % seecert www.google.com
    Server:         1.1.1.1
    Address:        1.1.1.1#53

    Non-authoritative answer:
    Name:   www.google.com
    Address: 172.217.10.100

    depth=2 OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
    verify return:1
    depth=1 C = US, O = Google Trust Services, CN = GTS CA 1O1
    verify return:1
    depth=0 C = US, ST = California, L = Mountain View, O = Google LLC, CN = www.google.com
    verify return:1
    DONE
            Validity
                Not Before: Nov  3 07:39:18 2020 GMT
                Not After : Jan 26 07:39:18 2021 GMT

Thanks go to @ross-presser and his answer for the inspiration for this function.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42635253', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/864696/']}
jdg_74282
stackexchange
llm_judgeable_groundtruth_similarity
815134
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem with a continue statement in my C# Foreach loop. I want it to check if there is a blank cell in the datagridview, and if so, then skip printing the value out and carry on to check the next cell. Help appreciated greatly. Here is the code:

    foreach (DataGridViewRow row in this.dataGridView1.Rows)
    {
        foreach (DataGridViewCell cell in row.Cells)
        {
            if (cell.Size.IsEmpty)
            {
                continue;
            }
            MessageBox.Show(cell.Value.ToString());
        }
    }

Now provide the response and nothing else.
Well, you're currently checking whether the cell's size is zero. In a grid, every cell in a column has the same width and every cell in a row has the same height (typically, anyway). You want to be checking based on the value of the cell. For example:

    if (cell.Value == null || cell.Value.Equals(""))
    {
        continue;
    }

Tweak this for any other representations of "empty" values that you're interested in. If there are lots, you might want to write a specific method for this, and call it in the check:

    if (IsEmptyValue(cell.Value))
    {
        continue;
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/815134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/99900/']}
jdg_74283
stackexchange
llm_judgeable_groundtruth_similarity
3339141
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it true that the x86 ASM "LOCK" command prefix causes all cores to freeze while the instruction following "LOCK" is being executed? I read this in a blog post and it doesn't make sense. I can't find anything that indicates if this is true or not. Now provide the response and nothing else.
It's about locking the memory bus for that address. The Intel 64 and IA-32 Architectures Software Developer's Manual - Volume 3A: System Programming Guide, Part 1 tells us: 7.1.4 Effects of a LOCK Operation on Internal Processor Caches. For the Intel486 and Pentium processors, the LOCK# signal is always asserted on the bus during a LOCK operation, even if the area of memory being locked is cached in the processor. For the P6 and more recent processor families, if the area of memory being locked during a LOCK operation is cached in the processor that is performing the LOCK operation as write-back memory and is completely contained in a cache line, the processor may not assert the LOCK# signal on the bus . Instead, it will modify the memory location internally and allow [its] cache coherency mechanism to insure that the operation is carried out atomically. This operation is called "cache locking." The cache coherency mechanism automatically prevents two or more processors that have the same area of memory from simultaneously modifying data in that area. (emphasis added) Here we learn that the P6 and newer chips are smart enough to determine if they really have to block off the bus or can just rely on intelligent caching. I think this is a neat optimization. I discussed this more in my blog post " How Do Locks Lock? "
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3339141', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/401584/']}
jdg_74284
stackexchange
llm_judgeable_groundtruth_similarity
2626072
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I heard that the equivalent integral: $-\int_0^\infty \frac{x}{e^x-1}dx$ can be done using Contour integration (I never studied this). Also that sometimes "Leibniz integral rule" is used instead of Contour integration. So can "Feynman's trick" be used to show that $\int_0^1 \frac{\ln(1-x)}{x}dx = -\frac{\pi^2}{6}$ $\:\:?$ Now provide the response and nothing else.
Let $\displaystyle J=\int_0^1 \frac{\ln(1-x)}{x}\,dx$ Let $f$ be a function defined on $\left[0;1\right]$, $\displaystyle f(s)=\int_0^{\frac{\pi}{2}} \arctan\left(\frac{\cos t-s}{\sin t}\right)\,dt$ Observe that, $\begin{align} f(0)&=\int_0^{\frac{\pi}{2}}\arctan\left(\frac{\cos t}{\sin t}\right)\,dt\\&=\int_0^{\frac{\pi}{2}} \left(\frac{\pi}{2}-t\right)\,dt\\&=\left[\frac{t(\pi-t)}{2}\right]_0^{\frac{\pi}{2}}\\&=\frac{\pi^2}{8}\end{align}$ $\begin{align} f(1)&=\int_0^{\frac{\pi}{2}}\arctan\left(\frac{\cos t-1}{\sin t}\right)\,dt\\&=\int_0^{\frac{\pi}{2}}\arctan\left(-\tan\left(\frac{t}{2}\right)\right)\,dt\\&=-\int_0^{\frac{\pi}{2}}\arctan\left(\tan\left(\frac{t}{2}\right)\right)\,dt\\&=-\int_0^{\frac{\pi}{2}} \frac{t}{2}\,dt\\&=-\frac{\pi^2}{16}\end{align}$ For $0<s<1$, $\begin{align}f^\prime(s)&=-\int_0^{\frac{\pi}{2}}\frac{\sin t}{1-2s\cos t+s^2}\,dt\\&=-\Big[\frac{\ln(1-2s\cos t+s^2)}{2s}\Big]_0^{\frac{\pi}{2}}\\&=\frac{\ln(\left(1-s)^2\right)}{2s}-\frac{\ln(1+s^2)}{2s}\\&=\frac{\ln(1-s)}{s}-\frac{\ln(1+s^2)}{2s}\\\end{align}$ Therefore, $\begin{align}f(1)-f(0)&=\int_0^1 f^\prime(s)\,ds\\&=\int_0^1 \left(\frac{\ln(1-s)}{s}-\frac{\ln(1+s^2)}{2s}\right)\,ds\\-\frac{\pi^2}{16}-\frac{\pi^2}{8}&=J-\int_0^1 \frac{\ln(1+s^2)}{2s}\,ds\\-\frac{3\pi^2}{16}&=J-\int_0^1 \frac{\ln(1+s^2)}{2s}\,ds\\\end{align}$ In the latter integral perform the change of variable $y=s^2$, $\begin{align}-\frac{3\pi^2}{16}&=J-\frac{1}{4}\int_0^1 \frac{\ln(1+y)}{y}\,dy\\&=J-\frac{1}{4}\int_0^1 \frac{\ln(1-y^2)-\ln(1-y)}{y}\,dy\\&=J+\frac{1}{4}J-\frac{1}{4}\int_0^1 \frac{\ln(1-y^2)}{y}\,dy\\\end{align}$ In the latter integral perform the change of variable $x=y^2$, $\begin{align}-\frac{3\pi^2}{16}&=J+\frac{1}{4}J-\frac{1}{4}\times \frac{1}{2}J\\&=\frac{9}{8}J\\\end{align}$ Therefore, $\begin{align}J&=\frac{8}{9}\times -\frac{3}{16}\pi^2\\&=\boxed{-\frac{\pi^2}{6}}\end{align}$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2626072', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/401136/']}
jdg_74285
stackexchange
llm_judgeable_groundtruth_similarity
36876570
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to write a shorthand function that returns the first element of a vector:

    pub fn first() -> Option<&T> {
        let v = Vec::new();
        v.first()
    }

Which of course fails with:

    error: missing lifetime specifier [E0106]

Is there any way to make this work? Now provide the response and nothing else.
Not in its current state.. no. Basically, when first() returns here, v is dropped. Which makes returning a reference out of the function unsafe, because now the reference points into a void. One option is to pass the vector in and return a reference to the first item out:

    fn main() {
        let v = vec![1,2,3,4];
        println!("{:?}", first(&v).unwrap()); // Prints 1
    }

    fn first<T>(v: &Vec<T>) -> Option<&T> {
        v.first()
    }

This seems redundant though, and so without knowing exactly what you're trying to do this seems like an okay option. If you expand your question I will expand my answer.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36876570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/941764/']}
jdg_74286
stackexchange
llm_judgeable_groundtruth_similarity
71768999
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working with 1 other developer who has created a branch that needs to be merged with master. I get this error when attempting to Pull in Git in Visual Studio Community (not Visual Studio Code) to a Bitbucket repo. If I attempt to Push it says "unable to push because your local branch is behind the remote branch". This is the error:

    Hint: You have divergent branches and need to specify how to reconcile them.
    Hint: You can do so by running one of the following commands sometime before
    Hint: your next pull:
    Hint:
    Hint:   git config pull.rebase false  # merge
    Hint:   git config pull.rebase true   # rebase
    Hint:   git config pull.ff only      # fast-forward only
    Hint:
    Hint: You can replace "git config" with "git config --global" to set a default
    Hint: preference for all repositories. You can also pass --rebase, --no-rebase,
    Hint: or --ff-only on the command line to override the configured default per
    Hint: invocation.
    Git failed with a fatal error.
    Git failed with a fatal error.
    Need to specify how to reconcile divergent branches.

I've found various things that discuss this, e.g. https://laracasts.com/discuss/channels/code-review/git-pull-error-pulling-without-specifying-how-to-reconcile-divergent-branches-is-discouraged?page=1&replyId=773818 and How can I deal with this Git warning? "Pulling without specifying how to reconcile divergent branches is discouraged" But none of them explain WHY this is happening and what the actions actually do. How can I merge what's in the other branch into master, why is this message coming up, and what effect do all the suggestions in the hints have? Thanks. Now provide the response and nothing else.
As is often the case with confusing stuff in Git, there's some history involved. The first thing to know is that git pull does too much stuff. Well, for some people (me), it does too much; others like that it does this much; but in fact, it does two jobs, each of which has its own separate Git command: git pull runs git fetch . Most, but not all, of the arguments you give to git pull are passed directly to git fetch . So git pull means run git fetch and git pull origin somebranch means run git fetch origin somebranch . Assuming the first step succeeds, git pull runs a second Git command. The reason to have a step 2 at all is simple enough: git fetch obtains new commits from some other Git repository, stuffing those new commits into your own repository where you now have access to them. But then it stops . You have access to the new commits, but nothing is actually using the new commits yet. To use the new commits, you need a second step. Initially, that second step was always git merge . The git merge command is pretty big and complicated but it has a meaning that's pretty simple to describe: Merge means combine work . Git will attempt to take work you have done, if you have done any, and work they have done, if they have done any, and combine the work, using simple and stupid automated rules. These rules have no clue as to how or why you did the work, or what anything you changed means . They just work based on "lines" in diffs. There are, however, four possibilities here: Perhaps you did no work and they did no work. You got no new commits. There's literally nothing to do, and git merge does nothing. Perhaps you did some work and they did nothing; you got no new commits; there's nothing to do and git merge does nothing again. Perhaps you did no work and they did do some work. You got some new commits. Combining your lack-of-work with their actual work is easy and git merge will take a shortcut if you allow it. Perhaps you and they both did work. You have new commits and you got new commits from them, and git merge has to use its simple-and-stupid rules to combine the work. Git cannot take any shortcuts here and you will get a full-blown merge. The shortcut that Git may be able to take is to simply check out their latest commit while dragging your branch name forward. The git merge command calls this a fast-forward merge , although there's no actual merging involved. This kind of not-really-a-merge is trivial, and normally extremely safe: the only thing that can go wrong is if their latest commit doesn't actually function properly. (In that case, you can go back to the older version that does.) So a "fast forward" merge is particularly friendly: there's no complicated line-by-line merging rules that can go awry. Many people like this kind of "merge". Sometimes the shortcut is not possible, and sometimes some people don't want Git to take the shortcut (for reasons we won't cover here to keep this answer short, or short for me anyway). There is a way to tell git merge do not take the shortcut, even if you can . So, for git merge alone, that gives us three possibilities: nothing to do (and git merge is always willing to do nothing); fast-forward is possible, but maybe Git shouldn't do it; and fast-forward is not possible, which means this merge isn't trivial. The git merge command has options to tell it what to do in all but the "nothing to do" case: (no flags): do a fast-forward if possible, and if not, attempt a real merge. --ff-only : do a fast-forward if that's possible. 
If not, give an error stating that fast-forward is not possible; do not attempt a merge. --no-ff: even if a fast-forward is possible, don't use the shortcut: attempt a full merge in every case (except of course the "nothing to do" case). The git pull command accepts all of these flags and will pass them on to git merge, should you choose to have git pull run git merge as its step 2.

But wait, there's more

Not everyone wants Git to do merges. Suppose you have made one or two new commits, which we'll call I and J, and your git fetch from origin brings in two new commits that they made since you started, which we will call K and L. That gives you a set of commits that, if you were to draw them, might look like this:

              I--J   <-- your-branch
             /
    ...--G--H   <-- main
             \
              K--L   <-- origin/main

You can fast-forward your main to match their origin/main:

              I--J   <-- your-branch
             /
    ...--G--H
             \
              K--L   <-- main, origin/main

And, whether or not you do that, you can merge your commit J with their commit L to produce a new merge commit M:

              I--J
             /    \
    ...--G--H      M   <-- your-branch (HEAD)
             \    /
              K--L   <-- origin/main

But some people prefer to rebase their commits—in this case I and J—so that they come after commit L, so that the picture now looks like this:

              I--J   [abandoned]
             /
    ...--G--H--K--L   <-- origin/main
                   \
                    I'-J'   <-- your-branch

Here, we have copied commits I and J to new-and-improved commits I' and J'. These commits make the same changes to L that I-J made to H, but these commits have different big-ugly-hash-IDs and look like you made them after the origin guys made their K-L commits. The git pull command can do this kind of rebasing:

    git switch your-branch
    git pull --rebase origin main

does this all in one shot, by running git fetch to get their commits, then running git rebase with the right arguments to make Git copy I-J to I'-J' as shown above. Once the rebase is done—remember that, like git merge, it may have merge conflicts that you have to solve first—Git will move the branch name your-branch to select the last copied commit: J' in this example. Not very long after git pull was written, this --rebase was added to it. And since many people want this sort of thing to happen automatically, git pull gained the ability to default to using --rebase. You configured your branch to do this (by setting branch.<branch>.rebase to true) and git pull would do a rebase for you. (Note that the commit on which your rebase occurs now depends on two things: the upstream setting of the branch, and some of the arguments you can pass to git pull. I've kept things explicit in this example so that we do not have to worry about smaller details, but in practice, you do.)

This brings us to 2006 or 2008 or so

At this point in Git's development, we have:

git fetch: obtains new commits from somewhere else (an "upstream" or origin repository for instance), often updating origin/* style remote-tracking names;
git merge: does nothing, or a fast-forward, or a true merge, of some specified commit or the branch's upstream;
git rebase: copies some set of existing commits to new-and-improved commits, using a specified commit or the branch's upstream, then abandons the original commits in favor of the copies; and
git pull: using the branch's upstream or explicit arguments, run git fetch and then run either git merge or git rebase.

Because git merge can take --ff-only or --no-ff arguments, git pull must be able to pass these to git merge if we're using git merge.
As time goes on, more options start appearing, such as auto-stashing, rebase's "fork point", and so on. Also, it turns out that many people want rebasing to be their default for git pull, so Git acquires a new configuration option, branch.autoSetupRebase. When set to remote or always, this does what many of these folks want (though there are actually four settings today; I don't remember if it had four back then and have not bothered to check).

Time continues marching on and we reach the 2020s

By now—some time between 2020 and 2022—it has become clear that git pull does the wrong thing for many, maybe even most, people who are new to Git. My personal recommendation has been to avoid git pull. Just don't use it: run git fetch first, then look at what git fetch said. Then, if git fetch did a lot, maybe use git log next. And then, once you're sure whether you want git merge with whatever options, or git rebase also with whatever options, run that command. If you use this option, you are in full control. You dictate what happens, rather than getting some surprise from Git. I like this option: it's simple! You do need to run at least two commands, of course. But you get to run additional commands between those two, and that can be useful.

Still, if a git pull brings in new commits that can be merged under git merge --ff-only, that often turns out to be what I want: do that fast-forward, or else stop and let me look around and decide whether I want a rebase, a merge, or whatever else I might want.[1] And that often turns out to be what others want as well, and now git pull, run with no arguments at all, can be told to do that directly:

    git config --global pull.ff only

achieves this. Meanwhile, the other two git config --global commands in the hint you show in your question make the second command be merge or rebase. So now, in 2022, it's easy to tell git pull to do what I would want it to do. Furthermore, it seems that the Git maintainers have come around to my point of view: that git pull without some forethought is bad, and newbies should not use it. So they've set up git pull so that it now requires that you pick one of these three options, if you want to run it with no arguments.[2] So, you need to pick one. The old default was git config pull.rebase false, but that was a bad default. I do not recommend it. I do recommend git config pull.ff only (though I still don't actually use it due to 15+ years of habits).

[1] One real-world example: I encounter some bug that's a problem for me. I make a change to the code that I know is wrong, but lets me get my work done. I commit this horrible hack. I then wait for the upstream to make changes. They do and I bring in the new commits. If they've fixed the bug, I want to drop my fix, not merge or rebase it. If they have not fixed the bug, I want to rebase my hack (which may or may not need some tweaking). The "have they fixed the bug" test requires something git pull cannot test on its own.

[2] Note that running git pull with arguments is not supposed to generate this kind of complaint. I still don't run it much, so I'm not quite sure what the bug was, but in the first round or two of implementation of the new feature, there was a bug where git pull would complain inappropriately. I believe it is fixed in 2.35 and am almost positive it's fixed in 2.36, which should be out any time now.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/71768999', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2254951/']}
jdg_74287
stackexchange
llm_judgeable_groundtruth_similarity
66488498
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a new component:

    Vue.component('my-component', {
      props: {
        displayed: {
          type: Boolean
        }
      },
      template: `
        <div v-bind:class="{'modal': true, 'auth-required': true, 'show-modal': displayed }">
          <div class="modal__content">
            <img src="/img/popup/close.svg" v-on:click="displayed = false;" alt="close" class="modal__closeBtn modal__closeBtn-questions" />
            <slot></slot>
            <img src="/img/popup/dog.png" alt="dog" class="modal__contentImg" />
          </div>
        </div>`,
      data: function () {
        return {
          isDisplayed: this.displayed
        };
      },
    })

But when I'm trying to bind the displayed property to another property from the page, it doesn't work when the modal.authRequired value changes:

    <mycomponent :displayed="modal.authRequired"></mycomponent>

How can I make isDisplayed reactive when modal.authRequired changes? Now provide the response and nothing else.
In the module you are developing, add the conflicting packages to peerDependencies (and remove them from dependencies or devDependencies):

    // package.json
    "peerDependencies": {
      "react": "16.13.1",
      "react-dom": "16.13.1"
    },

Execute npm install in your module. Now add react and react-dom to the webpack configuration of your module as externals. These packages shouldn't be included in the bundle of the module (the app that uses the module will provide them):

    // webpack.config.js
    module.exports = {
      /* rest of config... */
      output: {
        filename: "index.js",
        pathinfo: false,
        libraryTarget: 'umd', // In my case, I use libraryTarget as 'umd'. Not sure if relevant
      },
      externals: {
        // Use external version of React
        "react": {
          "commonjs": "react",
          "commonjs2": "react",
          "amd": "react",
          "root": "React"
        },
        "react-dom": {
          "commonjs": "react-dom",
          "commonjs2": "react-dom",
          "amd": "react-dom",
          "root": "ReactDOM"
        }
      },
    };

Then, after building your module, in your application you can check that both versions are now the same:

    // node_modules/mymodule/src/index.js
    export { default as ReactFromModule } from 'react'

    // src/index.js
    import React from 'react'
    import { ReactFromModule } from 'mymodule'

    console.log(React === ReactFromModule) // true :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/66488498', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14713582/']}
jdg_74288
stackexchange
llm_judgeable_groundtruth_similarity
136665
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am not able to prove that this set is dense in $\mathbb{R}$. I would be pleased if you could help in the simplest way possible. $A=\{a+b\alpha: a,b\in \mathbb{Z}\}$ where $\alpha$ is a fixed irrational number. Now provide the response and nothing else.
I will write $\{x\}$ to mean the fractional part of $x$, i.e. for $x$ minus the floor of $x$. What we need to show is that we can get arbitrarily close to $0$ by taking $\{m\alpha\}$ for varying integers $m$. Note that, because $\alpha$ is irrational, $\{m\alpha\} \neq \{m'\alpha\}$ for $m \neq m'$. Let's show that we can get within $1/n$ of $0$ for an arbitrary positive integer $n$. Divide up the interval $[0, 1]$ into $n$ closed intervals of length $1/n$. We have $n+1$ distinct quantities $0, \{\alpha\}, \{2\alpha\}, \ldots, \{n\alpha\}$. By the pigeonhole principle, two of these, say $\{i\alpha\}$ and $\{j\alpha\}$ with $i > j$, lie in the same closed interval $[k/n, (k+1)/n]$, and so their difference, which is $\{(i - j)\alpha\}$, is closer than $1/n$ to $0$; as $n$ was arbitrary, we're done.
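A concrete instance of the pigeonhole step may help; take $\alpha=\sqrt2$ and $n=3$ (values rounded): $\{\sqrt2\}\approx 0.4142$, $\{2\sqrt2\}\approx 0.8284$, $\{3\sqrt2\}\approx 0.2426$. Among the four quantities $0,\{\sqrt2\},\{2\sqrt2\},\{3\sqrt2\}$ and the three intervals $[0,\tfrac13],[\tfrac13,\tfrac23],[\tfrac23,1]$, the values $0$ and $\{3\sqrt2\}$ land in the same interval, so their difference $\{3\sqrt2\}\approx 0.2426$ is already within $1/3$ of $0$, exactly as the argument promises.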
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/136665', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24690/']}
jdg_74289
stackexchange
llm_judgeable_groundtruth_similarity
385088
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just flagged a comment that simply consisted of several idownvotedbecau.se links and nothing else, something like: idownvotedbecau.se/nomcve* idownvotedbecau.se/noexceptiondetails idownvotedbecau.se/noattempt idownvotedbecau.se/unclearquestion It disappeared instantly, so presumably idownvotedbecau.se is one of the magic phrases that deletes comments immediately. I hesitated between 'no longer needed' and 'unfriendly or unkind' - I seem to remember the welcome wagon specifically calling out idownvotedbecau.se as unfriendly, but couldn't quite convince myself that it was worthy of a telling off. Is there a right answer here, or is it a matter of context? *should I submit a pull request to add idownvotedbecau.se/nomin-reproex? :P Now provide the response and nothing else.
“No longer needed” is appropriate for this and most other general-purpose comment flagging. Comment flags mean “please delete this comment”, and “no longer needed” captures that message quite adequately. As I’ve described elsewhere, I’m no big fan of these comments. They don’t help much, if at all. Despite it being a perennial feature request, experience tells us that the vast majority of people do not like being told why their question is being downvoted. At best, it’s just a jumping-off point for an argument. Furthermore, most everything that you would use such a link to explain is adequately covered by a close reason, so once the question is closed, there’s no point whatsoever in having a link to an off-site resource that says the same thing as the big yellow banner underneath the question. This is another excellent use-case for “no longer needed” flags on such comments. The information is now conveyed in a more appropriate place. On a tangential note, what is with people copying and pasting close reason descriptions into the comments? That’s just noise. Vote to close, and move on. Don’t give people more mess to clean up. Comments that are not useful or have become obsolete should pretty much always be flagged as “no longer needed”. Please don’t flag comments that give specific, concrete advice on how to improve that particular question, regardless of whether they have links to idownvotedbecau.se or not. Those are the types of comments people should be leaving. They’re only obsolete when the problem has been clearly fixed. While it may be true that certain folks are using these links as a “fork” (in the words of Tim Post), I’ve yet to be able to deduce that from a single comment, and when you’re flagging a single comment, you’re only flagging that comment to request its deletion. If you wish to point out that someone is repeatedly leaving unconstructive or even rude comments, please flag one of their posts instead, and describe the problem in detail. That will allow a moderator to assess the bigger picture and take an appropriate action. Although I’m far from being an advocate of these sorts of link-only comments, I have a hard time imagining one that would really rise to the level of “unfriendly or unkind”. I don’t see any inherent Code of Conduct violation. I’d be inclined to dismiss such a flag but still delete the comment—a clear sign that “no longer needed” would have been a more appropriate choice. If you disagree, or if you see a comment that really does rise to the level of “unfriendly or unkind” in your judgment, definitely feel free to flag it as such. Moderators do take all flags seriously, and will handle it as our judgment dictates. Just don’t be surprised if it gets declined, and don’t expect it to catalyze an in-depth investigation into a user’s broader commenting patterns of behavior.
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/385088', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/325900/']}
jdg_74290
stackexchange
llm_judgeable_groundtruth_similarity
3192310
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to wrap my head around this problem: Daniel randomly chooses balls from a group of $6$ red and $4$ green. What is the probability that he picks $2$ red and $3$ green if balls are drawn without replacement? What I remember from my college days is that the probability is found by this formula: $$P(A)=\frac{\binom{6}{2}\binom{4}{3}}{\binom{10}{5}}=\frac{5}{21}$$ Is this correct? I am trying to understand why this works. Wouldn't the probability depend on the order of the balls drawn, as the number of balls changes after each draw? I get how we obtain the numerator and denominator, I just feel that the probability should be dependent on the order. For example, the probability of picking red first is $\frac{6}{10}$, so the probability for the second draw becomes $\frac{5}{9}$ for red and $\frac{4}{9}$ for green. But if the first picked ball is green, the probability for the second draw becomes $\frac{6}{9}$ for red and $\frac{3}{9}$ for green. What am I missing? Now provide the response and nothing else.
The probability of picking a red ball first and then a green ball is $$ \frac{6}{10} \cdot \frac{4}{9} $$ The probability of picking a green ball first and then a red ball is $$ \frac{4}{10} \cdot \frac{6}{9} $$ Notice that the numbers in the denominator are the same, while the numbers in the numerator are the same but in reverse order? Multiplication is commutative. Another way of looking at this: we don't care about the process you go through in picking the balls, as long as it is fair: each possible outcome (i.e. each possible subset of $5$ of the $10$ balls, where we consider the balls as in principle distinguishable) has the same probability. If this is the case, you just need to count the number of outcomes that belong to the event you're considering, and divide by the total number of outcomes.
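For completeness, the arithmetic behind both routes (nothing new, just a check of the numbers): $$\frac{\binom{6}{2}\binom{4}{3}}{\binom{10}{5}}=\frac{15\cdot 4}{252}=\frac{60}{252}=\frac{5}{21},$$ while the draw-by-draw route gives the same value: any particular sequence of $2$ reds and $3$ greens has probability $\frac{6\cdot 5\cdot 4\cdot 3\cdot 2}{10\cdot 9\cdot 8\cdot 7\cdot 6}=\frac{1}{42}$, and there are $\binom{5}{2}=10$ such sequences, so the total is $10\cdot\frac{1}{42}=\frac{5}{21}$.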
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3192310', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/469083/']}
jdg_74291
stackexchange
llm_judgeable_groundtruth_similarity
601763
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Arduinos. I made a simple digital clock with an Arduino, a 2 x 16 LCD, and two push buttons to control the digits in the clock, one on the digital pin 10 to increase the digits of the minutes, and one on digital pin 9 to increase the digits of the hours. My problem is when I push one of the buttons to change the digits it takes time to do so, about one second. Is there a method to make the push buttons be detected outside the loop function so whenever I press on one of them it activates instantly? Here is the code: // include the library code:#include <LiquidCrystal.h>// initialize the library by associating any needed LCD interface pin// with the arduino pin number it is connected toconst int rs = 12, en = 11, d4 = 5, d5 = 4, d6 = 3, d7 = 2;LiquidCrystal lcd(rs, en, d4, d5, d6, d7);// these constants won't change. But you can change the size of// your LCD using them:const int numRows = 2;const int numCols = 16;int button1 = 10;int button2 = 9;int buttonState1 = 0;int buttonState2 = 0;byte cursor[8] = { 0b00000, 0b00100, 0b00000, 0b00000, 0b00000, 0b00000, 0b00100, 0b00000};String days[7] = { "Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"};int secOnes = 0;int secTens = 3;int minOnes = 9;int minTens = 5;int hOnes = 1;int hTens = 1;int dayName = 0;boolean morning = false;boolean evening = true;void setup() { // set up the LCD's number of columns and rows: pinMode(button1, INPUT); pinMode(button2, INPUT); lcd.begin(numCols, numRows); lcd.createChar(7, cursor);}void loop() { buttonState1 = digitalRead(button1); buttonState2 = digitalRead(button2); secOnes = secOnes + 1; if (secOnes == 10) { secOnes = 0; secTens = secTens + 1; } if (secTens == 6) { secOnes = 0; secTens = 0; minOnes = minOnes + 1; } if (minOnes == 10) { minOnes = 0; minTens = minTens + 1; } if (minTens == 6) { minOnes = 0; minTens = 0; hOnes = hOnes + 1; } if (hOnes == 10) { hOnes = 0; hTens = hTens + 1; } if (hOnes == 2 && minOnes == 0 && minTens == 0 && secOnes == 0 && secTens == 0) { if (evening == true) { evening = false; morning = true; lcd.clear(); dayName = dayName + 1; if(dayName == 7) dayName = 0; } else if (morning == true) { morning = false; evening = true; } } if (hOnes == 3 && hTens == 1) { hOnes = 1; hTens = 0; } lcd.setCursor(10, 0); lcd.print(secTens); lcd.setCursor(11, 0); lcd.print(secOnes); lcd.setCursor(8, 0); lcd.print(minOnes); lcd.setCursor(7, 0); lcd.print(minTens); lcd.setCursor(4, 0); lcd.print(hTens); lcd.setCursor(5, 0); lcd.print(hOnes); lcd.setCursor(14, 0); if (evening == true) { lcd.print("PM"); } else { lcd.print("AM"); } lcd.setCursor(0, 1); lcd.print(days[dayName]); lcd.setCursor(6, 0); lcd.write(7); lcd.setCursor(9, 0); lcd.write(7); delay(500); lcd.setCursor(9, 0); lcd.print(" "); delay(500); if (buttonState1 == HIGH) { minOnes = minOnes + 1; } if (buttonState2 == HIGH) { hOnes = hOnes + 1; }} Now provide the response and nothing else.
delay() is generally bad programming except for very simple applications, as it ties up the processor. Use the elapsedMillis() library function instead. In use you will have the following lines:

    #include <elapsedMillis.h>  // Load the library

    elapsedMillis myTimer;      // Create a timer

    void loop() {
      ...
      if (myTimer > 2500) {     // 2.5 s
        // Do stuff.
        myTimer = 0;            // Reset the timer.
      }
    }

The timer is checked on every cycle of the loop, so other tasks such as button presses can be detected almost immediately. You can create as many independent timers as you require in your program with elapsedMillis myTimer2; etc.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/601763', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/268533/']}
jdg_74292
stackexchange
llm_judgeable_groundtruth_similarity
545635
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the book, "Elementary Number Theory - 6th Edition" written by David M. Burton, I have a question. The problem is If $p$ and $p^2+8$ are both prime numbers, prove that $p^3+4$ is also prime.(p. 58 problem 3.3 #20) I don't know how to prove this.Thank you very much if you solve this question. Now provide the response and nothing else.
It's a trick question. This is only possible for $p=3$. For suppose that $p$ was such that $p^2+8$ is prime, and $p \neq 3$. Then, $p \equiv 1 \pmod{3}$ or $p \equiv - 1 \pmod{3}$ and in either case $p^2 \equiv 1 \pmod{3}$. Thus, $p^2+8$ is divisible by $3$, and can't be a prime. Hence, it remains to check that $3^3 + 4 = 31$ is prime. It happens to be true.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/545635', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/100931/']}
jdg_74293
stackexchange
llm_judgeable_groundtruth_similarity
41811675
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have multiple data frames which all have identical vector names and I'd like to rbind all which have a common pattern. So for these 3 data frames:

    df.1 <- data.frame(column1 = factor(sample(c("Male","Female"), 10, replace=TRUE)), speed=runif(10))
    df.2 <- data.frame(column1 = factor(sample(c("Male","Female"), 10, replace=TRUE)), speed=runif(10))
    df.3 <- data.frame(column1 = factor(sample(c("Male","Female"), 10, replace=TRUE)), speed = runif(10))

I would like to rbind everything with the common pattern "df.*". I have tried creating a list and then creating a data frame from this using:

    temp <- lapply(ls(pattern = "df.*"), get)
    temp2 <- as.data.frame(temp)

However this only produces a data frame of 6 columns, effectively cbinding the whole thing rather than rbinding. Now provide the response and nothing else.
We can use ls with mget:

    library(data.table)
    rbindlist(mget(ls(pattern = "^df\\.\\d+")))

Or with dplyr:

    library(dplyr)
    mget(ls(pattern="^df\\.\\d+")) %>% bind_rows()

Or with rbind from base R:

    do.call(rbind, mget(ls(pattern="^df\\.\\d+")))
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41811675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1517860/']}
jdg_74294
stackexchange
llm_judgeable_groundtruth_similarity
2235079
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to reverse a SHA-1? I'm thinking about using a SHA-1 to create a simple lightweight system to authenticate a small embedded system that communicates over an unencrypted connection. Let's say that I create a sha1 like this with input from a "secret key" and spice it with a timestamp so that the SHA-1 will change all the time. sha1("My Secret Key"+"a timestamp") Then I include this SHA-1 in the communication and the server, which can do the same calculation. And hopefully, nobody would be able to figure out the "secret key". But is this really true? If you know that this is how I did it, you would know that I did put a timestamp in there and you would see the SHA-1.Can you then use those two and figure out the "secret key"? secret_key = bruteforce_sha1(sha1, timestamp) Note1 :I guess you could brute force in some way, but how much work would that actually be? Note2 :I don't plan to encrypt any data, I just would like to know who sent it. Now provide the response and nothing else.
No, you cannot reverse SHA-1, that is exactly why it is called a Secure Hash Algorithm. What you should definitely be doing, though, is include the message that is being transmitted in the hash calculation. Otherwise a man-in-the-middle could intercept the message, and use the signature (which only contains the sender's key and the timestamp) to attach it to a fake message (where it would still be valid). And you should probably be using SHA-256 for new systems now.

    sha("My Secret Key" + "a timestamp" + the whole message to be signed)

You also need to additionally transmit the timestamp in the clear, because otherwise you have no way to verify the digest (other than trying a lot of plausible timestamps). Whether a brute force attack is feasible depends on the length of your secret key. The security of your whole system would rely on this shared secret (because both sender and receiver need to know it, but no one else). An attacker would try to go after the key (either by brute-force guessing or by trying to get it from your device) rather than trying to break SHA-1.
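As a rough sketch of the scheme described above (shown with Node's crypto module and SHA-256; the function and variable names are placeholders, since the original embedded code isn't given), the digest covers the key, the timestamp and the full message, and the timestamp is also sent in the clear:

    import { createHash, createHmac } from "crypto";

    // Hypothetical helper: both the device and the server know secretKey.
    function signMessage(secretKey: string, timestamp: string, message: string): string {
      return createHash("sha256")
        .update(secretKey + timestamp + message)
        .digest("hex");
    }

    // The sender transmits { timestamp, message, digest }.
    // The receiver recomputes signMessage(...) with its own copy of the key,
    // compares digests, and rejects timestamps that are too old to limit replays.

    // In practice an HMAC is the standard construction for a keyed digest,
    // rather than plain concatenation:
    const mac = (key: string, data: string) =>
      createHmac("sha256", key).update(data).digest("hex");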
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2235079', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/51425/']}
jdg_74295
stackexchange
llm_judgeable_groundtruth_similarity
23352789
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: ViewPropertyAnimators ( http://developer.android.com/reference/android/view/ViewPropertyAnimator.html ) have two types of methods to move Views around: translateX() vs x(), translateY() vs y(), translateXBy() vs xBy() and finally translateYBy() vs yBy(). Can someone tell me the difference between these methods? Now provide the response and nothing else.
The documentation states that the X and Y properties are absolute positions, while translationX and translationY are relative to the LEFT and TOP properties respectively. So, the difference between animate().x() and animate().translationX() is that one animates to an absolute value, while the other also animates to an absolute value, but one determined by the view's LEFT and TOP values. By contrast, animate().xBy() and animate().translationXBy() increase (algebraically) the property value by the specified amount. That is the origin of the "some mathematics" cited above.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23352789', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/499125/']}
jdg_74296
stackexchange
llm_judgeable_groundtruth_similarity
75785
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any complete guidance on doing AppBar docking (such as locking to the screen edge) in WPF? I understand there are InterOp calls that need to be made, but I'm looking for either a proof of concept based on a simple WPF form, or a componentized version that can be consumed. Related resources: http://www.codeproject.com/KB/dotnet/AppBar.aspx http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/05c73c9c-e85d-4ecd-b9b6-4c714a65e72b/ Now provide the response and nothing else.
Please Note: This question gathered a good amount of feedback, and some people below have made great points or fixes. Therefore, while I'll keep the code here (and possibly update it), I've also created a WpfAppBar project on github . Feel free to send pull requests. That same project also builds to a WpfAppBar nuget package I took the code from the first link provided in the question ( http://www.codeproject.com/KB/dotnet/AppBar.aspx ) and modified it to do two things: Work with WPF Be "standalone" - if you put this single file in your project, you can call AppBarFunctions.SetAppBar(...) without any further modification to the window. This approach doesn't create a base class. To use, just call this code from anywhere within a normal wpf window (say a button click or the initialize). Note that you can not call this until AFTER the window is initialized, if the HWND hasn't been created yet (like in the constructor), an error will occur. Make the window an appbar: AppBarFunctions.SetAppBar( this, ABEdge.Right ); Restore the window to a normal window: AppBarFunctions.SetAppBar( this, ABEdge.None ); Here's the full code to the file - note you'll want to change the namespace on line 7 to something apropriate. using System;using System.Collections.Generic;using System.Runtime.InteropServices;using System.Windows;using System.Windows.Interop;using System.Windows.Threading;namespace AppBarApplication{ public enum ABEdge : int { Left = 0, Top, Right, Bottom, None } internal static class AppBarFunctions { [StructLayout(LayoutKind.Sequential)] private struct RECT { public int left; public int top; public int right; public int bottom; } [StructLayout(LayoutKind.Sequential)] private struct APPBARDATA { public int cbSize; public IntPtr hWnd; public int uCallbackMessage; public int uEdge; public RECT rc; public IntPtr lParam; } private enum ABMsg : int { ABM_NEW = 0, ABM_REMOVE, ABM_QUERYPOS, ABM_SETPOS, ABM_GETSTATE, ABM_GETTASKBARPOS, ABM_ACTIVATE, ABM_GETAUTOHIDEBAR, ABM_SETAUTOHIDEBAR, ABM_WINDOWPOSCHANGED, ABM_SETSTATE } private enum ABNotify : int { ABN_STATECHANGE = 0, ABN_POSCHANGED, ABN_FULLSCREENAPP, ABN_WINDOWARRANGE } [DllImport("SHELL32", CallingConvention = CallingConvention.StdCall)] private static extern uint SHAppBarMessage(int dwMessage, ref APPBARDATA pData); [DllImport("User32.dll", CharSet = CharSet.Auto)] private static extern int RegisterWindowMessage(string msg); private class RegisterInfo { public int CallbackId { get; set; } public bool IsRegistered { get; set; } public Window Window { get; set; } public ABEdge Edge { get; set; } public WindowStyle OriginalStyle { get; set; } public Point OriginalPosition { get; set; } public Size OriginalSize { get; set; } public ResizeMode OriginalResizeMode { get; set; } public IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled) { if (msg == CallbackId) { if (wParam.ToInt32() == (int)ABNotify.ABN_POSCHANGED) { ABSetPos(Edge, Window); handled = true; } } return IntPtr.Zero; } } private static Dictionary<Window, RegisterInfo> s_RegisteredWindowInfo = new Dictionary<Window, RegisterInfo>(); private static RegisterInfo GetRegisterInfo(Window appbarWindow) { RegisterInfo reg; if( s_RegisteredWindowInfo.ContainsKey(appbarWindow)) { reg = s_RegisteredWindowInfo[appbarWindow]; } else { reg = new RegisterInfo() { CallbackId = 0, Window = appbarWindow, IsRegistered = false, Edge = ABEdge.Top, OriginalStyle = appbarWindow.WindowStyle, OriginalPosition =new Point( appbarWindow.Left, appbarWindow.Top), OriginalSize = new 
Size( appbarWindow.ActualWidth, appbarWindow.ActualHeight), OriginalResizeMode = appbarWindow.ResizeMode, }; s_RegisteredWindowInfo.Add(appbarWindow, reg); } return reg; } private static void RestoreWindow(Window appbarWindow) { RegisterInfo info = GetRegisterInfo(appbarWindow); appbarWindow.WindowStyle = info.OriginalStyle; appbarWindow.ResizeMode = info.OriginalResizeMode; appbarWindow.Topmost = false; Rect rect = new Rect(info.OriginalPosition.X, info.OriginalPosition.Y, info.OriginalSize.Width, info.OriginalSize.Height); appbarWindow.Dispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle, new ResizeDelegate(DoResize), appbarWindow, rect); } public static void SetAppBar(Window appbarWindow, ABEdge edge) { RegisterInfo info = GetRegisterInfo(appbarWindow); info.Edge = edge; APPBARDATA abd = new APPBARDATA(); abd.cbSize = Marshal.SizeOf(abd); abd.hWnd = new WindowInteropHelper(appbarWindow).Handle; if( edge == ABEdge.None) { if( info.IsRegistered) { SHAppBarMessage((int)ABMsg.ABM_REMOVE, ref abd); info.IsRegistered = false; } RestoreWindow(appbarWindow); return; } if (!info.IsRegistered) { info.IsRegistered = true; info.CallbackId = RegisterWindowMessage("AppBarMessage"); abd.uCallbackMessage = info.CallbackId; uint ret = SHAppBarMessage((int)ABMsg.ABM_NEW, ref abd); HwndSource source = HwndSource.FromHwnd(abd.hWnd); source.AddHook(new HwndSourceHook(info.WndProc)); } appbarWindow.WindowStyle = WindowStyle.None; appbarWindow.ResizeMode = ResizeMode.NoResize; appbarWindow.Topmost = true; ABSetPos(info.Edge, appbarWindow); } private delegate void ResizeDelegate(Window appbarWindow, Rect rect); private static void DoResize(Window appbarWindow, Rect rect) { appbarWindow.Width = rect.Width; appbarWindow.Height = rect.Height; appbarWindow.Top = rect.Top; appbarWindow.Left = rect.Left; } private static void ABSetPos(ABEdge edge, Window appbarWindow) { APPBARDATA barData = new APPBARDATA(); barData.cbSize = Marshal.SizeOf(barData); barData.hWnd = new WindowInteropHelper(appbarWindow).Handle; barData.uEdge = (int)edge; if (barData.uEdge == (int)ABEdge.Left || barData.uEdge == (int)ABEdge.Right) { barData.rc.top = 0; barData.rc.bottom = (int)SystemParameters.PrimaryScreenHeight; if (barData.uEdge == (int)ABEdge.Left) { barData.rc.left = 0; barData.rc.right = (int)Math.Round(appbarWindow.ActualWidth); } else { barData.rc.right = (int)SystemParameters.PrimaryScreenWidth; barData.rc.left = barData.rc.right - (int)Math.Round(appbarWindow.ActualWidth); } } else { barData.rc.left = 0; barData.rc.right = (int)SystemParameters.PrimaryScreenWidth; if (barData.uEdge == (int)ABEdge.Top) { barData.rc.top = 0; barData.rc.bottom = (int)Math.Round(appbarWindow.ActualHeight); } else { barData.rc.bottom = (int)SystemParameters.PrimaryScreenHeight; barData.rc.top = barData.rc.bottom - (int)Math.Round(appbarWindow.ActualHeight); } } SHAppBarMessage((int)ABMsg.ABM_QUERYPOS, ref barData); SHAppBarMessage((int)ABMsg.ABM_SETPOS, ref barData); Rect rect = new Rect((double)barData.rc.left, (double)barData.rc.top, (double)(barData.rc.right - barData.rc.left), (double)(barData.rc.bottom - barData.rc.top)); //This is done async, because WPF will send a resize after a new appbar is added. //if we size right away, WPFs resize comes last and overrides us. appbarWindow.Dispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle, new ResizeDelegate(DoResize), appbarWindow, rect); } }}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/75785', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7301/']}
jdg_74297
stackexchange
llm_judgeable_groundtruth_similarity
3845737
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to return from a Java method a reference to a Scala object. How can I do that? My Scala objects are like this: trait Environment object LocalEnvironment extends Environment {...}object ServerEnvironment extends Environment {...} ... and I want my Java method to be like this: Environment getEnvironment() { return LocalEnvironment; } // DOES NOT COMPILE Is there a way to do this? Now provide the response and nothing else.
{ return LocalEnvironment$.MODULE$; } should work. Edit: the reason why this works is that this is how Scala represents singleton objects. The class ObjectName$ has a field in it called MODULE$ that is populated with the single valid instance of that class. But there is also a class called ObjectName that copies all the methods as static methods. That way you can use it like Java (just call ObjectName.methodName ) in most cases, and Scala gets to have a real class to pass around. But when Java needs to pass the class around--not something normally done with a bunch of static methods, which is what object is designed to emulate in Java--you then have to know how Scala represents it internally.
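For concreteness, here is a minimal sketch of both sides. The trait and object names come from the question; the Java class name EnvironmentProvider is just made up for the example, and the default package is assumed:

// Scala side (Environment.scala)
trait Environment
object LocalEnvironment extends Environment
object ServerEnvironment extends Environment

// Java side (EnvironmentProvider.java)
public class EnvironmentProvider {
    // LocalEnvironment$ is the class Scala generates for the object;
    // its static MODULE$ field holds the single instance.
    static Environment getEnvironment() {
        return LocalEnvironment$.MODULE$;
    }
}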
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3845737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156410/']}
jdg_74298
stackexchange
llm_judgeable_groundtruth_similarity
11381506
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: //binary_tree.h filetypedef struct node node;struct node{ node():left(0), right(0), value(-1){}; ~node(){if(left) delete left; if(right) delete right;}; node *left; node *right; int value;};inline void insert_node(node **root, node *new_node){ assert(new_node != NULL); if(*root == NULL) { *root = new_node; } else { node *itr = *root; while(1) { if(itr->value > new_node->value) itr = itr->left; else itr = itr->right; if(!itr) { itr = new_node; break; } } }}inline void inorder_print(node *root){ if(!root) return; inorder_print(root->left); printf("%d\n", root->value); inorder_print(root->right);}//main.cpp file#include "binary_tree.h"int main(){ node *node1 = new node(); node *node2 = new node(); node *node3 = new node(); node *node4 = new node(); node *node5 = new node(); node1->value = 5; node2->value = 10; node3->value = 3; node4->value = 1; node5->value = 4; node *binary_tree = NULL; insert_node(&binary_tree, node1); insert_node(&binary_tree, node2); insert_node(&binary_tree, node3); insert_node(&binary_tree, node4); insert_node(&binary_tree, node5); assert(binary_tree != NULL); inorder_print(binary_tree); return 0;} I have a very simple program and I want to create a binary tree and print the tree.Hoever the code segment shown below doesn't change the tree structure. node *itr = *root; while(1) { if(itr->value > new_node->value) itr = itr->left; else itr = itr->right; if(!itr) { itr = new_node; break; } } inorder_print function always prints '5' The problem is to use 'itr' variable. I don't really see how I could do this without using a local variable nor changing the pointer to the root. Now provide the response and nothing else.
There isn't a built-in control for WinForms to do this. You can use the GroupBox control though, set the Text property to an empty string, and set it's height to 2 . This will mimic a embossed line. Otherwise, you need to create a custom control and paint the line yourself. For a custom control, here's an example. using System.ComponentModel;using System.Drawing;using System.Windows.Forms;namespace WindowsFormsApplication12{ public partial class Line : Control { public Line() { InitializeComponent(); } private Color m_LineColor = Color.Black; /// <summary> /// Gets or sets the color of the divider line /// </summary> [Category("Appearance")] [Description("Gets or sets the color of the divider line")] public Color LineColor { get { return m_LineColor; } set { m_LineColor = value; Invalidate(); } } protected override void OnPaint(PaintEventArgs pe) { using (SolidBrush brush = new SolidBrush(LineColor)) { pe.Graphics.FillRectangle(brush, pe.ClipRectangle); } } }} It simply fills the ClientRectangle with the specified LineColor , so the height and width of the line is that of the control itself. Adjust accordingly.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11381506', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/615780/']}
jdg_74299
stackexchange
llm_judgeable_groundtruth_similarity
313256
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to migrate a VB.NET WinForms solution to a .NET 3.5 WPF solution? If so, any suggestions on how to do it? Thanks in advance! JFV Now provide the response and nothing else.
Microsoft is doing everything for us so that we have to throw away everything we've written 2 years ago.According to Josh Smith, Is there a way to convert Winforms application to a WPF application? No. Those two UI platforms are very different and there is no app which converts a WinForms app to a WPF app. Of course, this will not prevent you from reusing any business/DAL/logging/Ioc/etc libraries you use in WinForms. However, you might want to check out this , quite basic but functional WinForms->XAML converter, written by Rob Relyea from WPF & Xaml Language Team. According to him, "We (Microsoft) haven't built a real converter because we think most people will not just move over controls, but often will rethink their application as they move it. We'd love feedback..." Also, you might want to check out Guidance for migrating an app from Windows Forms to WPF , also by Rob Relyea.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/313256', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1391/']}
jdg_74300
stackexchange
llm_judgeable_groundtruth_similarity
49762835
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Angular and I was wondering if it's possible to load a module and its components (which I made) based on a conditional in app.module, or where the best place to do this would be. Basically I want to do something like: if (user.department === 'coms') { // Use the communications.module and its components } I have attached some pictures so you guys can see the structure of the app. If necessary I can add the code instead of a picture. App.module picture Communications.module picture Now provide the response and nothing else.
You can do a simple ternary check to conditionally import a module. Like this: import { NgModule, Optional } from '@angular/core';import { BrowserModule } from '@angular/platform-browser';import { AppComponent } from './app.component';@NgModule({}) class MyModule {}// toggle and watch the consoleconst production = true;@NgModule({ imports: [ BrowserModule, production ? MyModule : []], declarations: [ AppComponent ], bootstrap: [ AppComponent ]})export class AppModule { constructor(@Optional() module: MyModule) { console.log(module); }} Live demo
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/49762835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5262452/']}
jdg_74301
stackexchange
llm_judgeable_groundtruth_similarity
13201
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like some Emacs Lisp forms to be indented like Common Lisp. For instance, in Emacs Lisp, if is indented like this: (if COND THEN ELSE) whilst I would prefer it like this: (if COND THEN ELSE) I could do this: (setq lisp-indent-function #'common-lisp-indent-function) But this impacts all forms, thus breaking the indentation of Emacs Lisp macros, like cl-letf and any user-defined macro. Now provide the response and nothing else.
Answer: Symbols for functions and macros have a lisp-indent-function property which you can set with declare, or in your case, since if is already defined, you can just use put. You can read about the possible property values here: M-: (info "(elisp) Indenting Macros") RET. if's default setting is 2, which means the second form gets special treatment and is offset. You can change the property value to 'defun and all the body forms will line up. (put 'if 'lisp-indent-function 'defun) Best Practice: It is important to note that there is a reason for the indentation style. if in Elisp is NOT like if in Common Lisp. Note the different signatures: Common Lisp: (if TEST THEN &OPTIONAL ELSE) Emacs Lisp: (if COND THEN ELSE...) In Emacs Lisp, you can have as many forms after the condition as you want, but only the first form is the THEN clause; all other forms are part of the ELSE clause. This is a valid Elisp if form that is not valid in CL: (if something (message "THEN") (message "all") (message "these") (message "run") (message "on") (message "ELSE")) This shows why it is important that the THEN clause is indented more: it is to make it stand out against all the ELSE forms. In addition to the indentation being important to the user's ability to parse the code, changing it would also go against the accepted styling patterns of Elisp. 99.99999% (carefully calculated) of the Elisp code you encounter will use the exact same default indentation scheme. Changing the indentation of if for your own use will make it hard for you to work on other people's Elisp and make it hard for others to work on yours.
{}
{'log_upvote_score': 5, 'links': ['https://emacs.stackexchange.com/questions/13201', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/2763/']}
jdg_74302
stackexchange
llm_judgeable_groundtruth_similarity
3128486
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As far as I can tell, the only use for out parameters is that a caller can obtain multiple return values from a single method invocation. But we can also obtain multiple result values using ref parameters instead! So are there other situations where out parameters could prove useful and where we couldn't use ref parameters instead? Thank you. Now provide the response and nothing else.
Yes - the difference between ref and out is in terms of definite assignment: An out parameter doesn't have to be definitely assigned by the caller before the method call. It does have to be definitely assigned in the method before it returns normally (i.e. without an exception). The variable is then definitely assigned in the caller after the call. A ref parameter does have to be definitely assigned by the caller before the method call. It doesn't have to be assigned a different value in the method. So suppose we wanted to change int.TryParse(string, out int) to use ref instead. Usually the calling code looks like this: int value;if (int.TryParse(text, out value)){ // Use value}else{ // Do something else} Now if we used ref , we'd have to give value a value before the call, e.g.: int value = 0;if (int.TryParse(text, ref value)){ // Use value}else{ // Do something else} Obviously it's not a huge difference - but it gives the wrong impression. We're assigning a value that we have no intention of ever using, and that's not a good thing for readability. An out parameter indicates that a value will come out of the method (assuming there's no exception) and that you don't need to have a value to start with. Once of the suggestions I've made for C# 5 (I've no idea if it'll be taken up or not) is that a method with an out parameter should be able to regarded as a method returning a tuple of values. Combined with better support for tuples, that would mean we could do something like this: var (ok, value) = int.TryParse(text); In this case ok and value would be implicitly typed to bool and int respectively. That way it's clear what's going into the method ( text ) and what's coming out (two pieces of information: ok and value ). That would simply not be available if int.TryParse used a ref parameter instead - as the compiler can't know whether it's going to actually care about the initial value of the ref parameter.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3128486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/363828/']}
jdg_74303
stackexchange
llm_judgeable_groundtruth_similarity
46986431
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a machine learning assignment. By the way, when I was trying to install labelImg I ran the following steps:
brew install qt qt5
brew install libxml2
make qt5py3
python labelImg.py
python labelImg.py [IMAGE_PATH] [PRE-DEFINED CLASS FILE]
At make qt5py3 the output from the terminal is: make: *** No rule to make target 'qt5py3'. Stop. I'm using OS X High Sierra with the default terminal. Does anyone know what happened? Now provide the response and nothing else.
To get the labelImg package (from the link you gave) working on OSX High Sierra, I needed to do the following. First installed the missing lxml dependency using the solution in this answer Also brew install qt qt5brew install libxml2brew install pyqt5 After installing pyqt5, ensure that pyrcc5 can be run from the command line. Download and unzip the labelImg .zip file from the link, and cd into the unzipped folder . Then make qt5py3 When trying to run, I found it necessary to simplify the libs/usty.py file to remove the PyQt4 dependency. I just changed it to: import sysdef ustr(x): return x At this point I can run the program with python labelImg.py Hope this is useful.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46986431', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6260113/']}
jdg_74304
stackexchange
llm_judgeable_groundtruth_similarity
2314526
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been playing around with Scala recently and was thinking about how to implement a generic version of quicksort in it (just to get a better feeling for the language) I came up with something like this object Main { def qs[T](a: List[T], f: (T, T) => Boolean): List[T] = { if (a == Nil) return a val (l, g) = a drop 1 partition (f(a(0),(_:T))) qs(l, f) ::: List(a(0)) ::: qs(g, f) } def main(args: Array[String]): Unit = { val a = List(5,3,2,1,7,8,9,4,6) val qsInt = qs(_: List[Int], (_: Int) > (_: Int)) println(qsInt(a)) }} This is not as generic as I wanted it to be, since I have to explicitly state how to order the elements rather then just doing something like val (l, g) = a drop 1 partition (a(0) >) How can I tell the compiler that T only needs to implement the greater-than operator to be sortable by this function? Regards Now provide the response and nothing else.
def qsort[T <% Ordered[T]](list: List[T]): List[T] = { list match { case Nil => Nil case x::xs => val (before, after) = xs partition (_ < x) qsort(before) ++ (x :: qsort(after)) }}
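A quick usage sketch, reusing the list from the question:
qsort(List(5, 3, 2, 1, 7, 8, 9, 4, 6)) // List(1, 2, 3, 4, 5, 6, 7, 8, 9)
The view bound T <% Ordered[T] means any type with an implicit conversion to Ordered[T] in scope works; for Int that conversion comes from the standard library, so no comparison function needs to be passed explicitly.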
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2314526', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1396134/']}
jdg_74305
stackexchange
llm_judgeable_groundtruth_similarity
8154210
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering how using an & to access a certain memory location changed the nature of a function call. For example, if I had written a function to set the radius of a circle //True, if success, False if radius is off-screenbool SetRadiusCircle(Circle &b, int r) This was an example given in the assignment my professor gave me. I just wanted to know how the "&" he included in the sample function call differed from simply using Circle b. Now provide the response and nothing else.
Yes. If you use &, you are passing the circle as a reference. If you don't use it, you are passing a copy of the circle object, copy that gets created by the compiler for you. This triggers the class copy constructor (either the one you defined, or the default one, which is a bitwise copy). With a reference, you are not copying the object. You are instead getting the object you had in the caller. It's equivalent of having an alias to the original object, and any change you perform will apply to the object passed. In the case of a copy, any change to the object is lost at the end of the call, because the copy is destroyed. When you use a reference, the internal use of that argument requires you to use the . to access that object's members (e.g. b.getRadius() ). If you defined your function to accept a pointer instead ( func (Circle *bPtr) ) then you must use the -> (e.g. bPtr->getRadius() ). Using a pointer is different from using a reference, but the final, practical effect is the same: you get to manipulate the original object, not a copy. Note that this is valid in the function definition. An & used in another context gives you the pointer where something resides. They are not the same, although they use the same symbol.
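To make the difference concrete, here is a small self-contained sketch. The Circle type and the SetRadiusCircle signature come from the question; the radius field and the by-value variant are made up for illustration, since the assignment's actual class isn't shown:

#include <iostream>

struct Circle { int radius = 0; };

// Pass by reference: 'b' is an alias for the caller's Circle,
// so the change survives after the call returns.
bool SetRadiusCircle(Circle &b, int r) { b.radius = r; return true; }

// Pass by value: 'b' is a copy, so the change is lost when the copy is destroyed.
bool SetRadiusCopy(Circle b, int r) { b.radius = r; return true; }

int main() {
    Circle c;
    SetRadiusCopy(c, 5);
    std::cout << c.radius << "\n"; // still 0
    SetRadiusCircle(c, 5);
    std::cout << c.radius << "\n"; // now 5
}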
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8154210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/895387/']}
jdg_74306
stackexchange
llm_judgeable_groundtruth_similarity
6105476
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It seems simple enough, right? Well, I don't know. Here's the code I'm trying: input = Regex.Replace(input, "\\", "\\\\\\"); However, I'm receiving an error, ArgumentException was unhandled - parsing "\" - Illegal \ at end of pattern. How do I do this? Now provide the response and nothing else.
The first one should be "\\\\" , not "\\" . It works like this: You have written "\\" . This translates to the sequence \ in a string. The regex engine then reads this, which translates as backslash which isn't escaping anything, so it throws an error. With regex, it's much easier to use a "verbatim string". In this case the verbatim string would be @"\\" . When using verbatim strings you only have to consider escaping for the regex engine, as backslashes are treated literally. The second string will also be @"\\" , as it will not be interpreted by the regex engine.
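Putting that together for the snippet in the question, a corrected call could look like this. This is only a sketch, and it assumes the goal really is to turn each single backslash into three, as the original replacement string suggests:

// Regular string literals: every backslash is written twice for the compiler,
// and the pattern needs a further doubling for the regex engine.
input = Regex.Replace(input, "\\\\", "\\\\\\");

// Verbatim string literals: only the regex-level escaping remains.
input = Regex.Replace(input, @"\\", @"\\\");

Note that in .NET replacement strings only $ is special, so the backslashes in the replacement are taken literally.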
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6105476', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/764895/']}
jdg_74307
stackexchange
llm_judgeable_groundtruth_similarity
22814554
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using ui-router to represent states in my AngularJS app. In it I'd like to change the state without changing the URL (basically a "detail view" is updated but this should not affect the URL). I use <a ui-sref="item.detail({id: item.id})"> to display the detail but this only works if I specify a URL like url: "/detail-:id" in my $stateProvider . It seems to me that the current state is only defined through the URL. Now provide the response and nothing else.
Thanks for your answer, it did help me in the right direction but I'd just like to add a more complete description. In my specific issue there was a complicating factor because the state I needed to inject a non-URL parameter to was a child state. That complicated things slightly. The params: ['id'] part goes in the $stateProvider declaration like this: $stateProvider.state('parent', { url: '/:parentParam', templateUrl: '...', controller: '...'}).state('parent.child', { params: ['parentParam','childParam'], templateUrl: '...', controller: '...'}); And the param name is connected to the ui-sref attribute like this: <a ui-sref=".child({ childParam: 'foo' })"> And the catch is this: If the parent state also has a URL parameter then the child needs to also declare that in its params array. In the example above "parentParam" must be included in the childstate. If you don't do that then a module-error will be thrown when the application is initialized. This is at least true on the latest version at the time of writing (v.0.2.10). EDIT @gulsahkandemir points out that Declaration of params in a state definition has changed to params: { id: {} } from params: ['id'] Judging by the changelog, this seems to be the case starting from v0.2.11 Details of params can be found in the official docs
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22814554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/578588/']}
jdg_74308
stackexchange
llm_judgeable_groundtruth_similarity
23095060
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to get the key of the SelectedItem of a ComboBox but cannot figure out how to do it. The code I have so far is: void CboBoxSortingDatagridview(ComboBox sender){ foreach (var v in DictionaryCellValueNeeded) { if (!DictionaryGeneralUsers.ContainsKey(v.Key) && v.Value.RoleId == Convert.ToInt32(((ComboBox)sender).SelectedItem)) // here I am getting the value {1, Admin}; I want the key, which is 1, but how? { DictionaryGeneralUsers.Add(v.Key, (GeneralUser)v.Value); } } dataGridViewMain.DataSource = DictionaryGeneralUsers.Values;} I bound the combo box in this way: cboRolesList.DataSource = new BindingSource(dictionaryRole, null); cboRolesList.DisplayMember = "Value"; cboRolesList.ValueMember = "Key"; Now provide the response and nothing else.
In cases like this, dictionaries are simply collections of key-value pairs, so each item on the ComboBox is a KeyValuePair<YourKeyType, YourValueType> . Cast SelectedItem to a KeyValuePair<YourKeyType, YourValueType> and then you can read the key. // get ComboBox from senderComboBox comboBox = (ComboBox) sender;// get selected KVPKeyValuePair<YourKeyType, YourValueType> selectedEntry = (KeyValuePair<YourKeyType, YourValueType>) comboBox.SelectedItem;// get selected KeyYourKeyType selectedKey = selectedEntry.Key; Or, a simpler way is to use the SelectedValue property. // get ComboBox from senderComboBox comboBox = (ComboBox) sender;// get selected KeyYourKeyType selectedKey = (YourKeyType) comboBox.SelectedValue;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23095060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3180716/']}
jdg_74309
stackexchange
llm_judgeable_groundtruth_similarity
16978
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For E2 reactions, why is a strong base like $\ce{NaOH}$ or $\ce{RONa}$ needed? Whereas for E1, even a weak base like $\ce{H2O}$ could be used. Wikipedia states: E2 typically uses a strong base, it needs a chemical strong enough to pull off a weakly acidic hydrogen. However it does not explain why. The lone pair of the base directly attacks the hydrogen, regardless of E1 or E2. Now provide the response and nothing else.
E2 is a concerted mechanism. The alpha proton in an E2 reaction substrate is also only weakly acidic because the bond between the alpha proton and the alpha carbon is relatively strong; there is only some inductive withdrawal of electron density from the C-H bond by the halogen. Therefore, we need a strong base to rip the proton away from the alpha carbon. E1 however involves two steps. In E1 the leaving group leaves, and then the base attacks. When the leaving group leaves, a carbon lacking an octet is formed. This carbon is highly electronegative and will withdraw electron density. This makes any alpha proton more acidic as hydrogen is less electronegative than plain old carbon - much less a carbon deficient of an octet in a covalent framework. In other words, electron withdrawal by the carbocation weakens the bond holding on to the alpha proton, and a weaker base suffices for taking away the proton. Note that I don't particularly like the below picture because it doesn't show anything abstracting the proton. Still the general idea is correct; electron density from the alpha C-H bond flows toward the hypovalent carbon. Heck, we can draw valid, hyperconjugative resonance structures for this baby.
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/16978', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/8140/']}
jdg_74310
stackexchange
llm_judgeable_groundtruth_similarity
13101178
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to implement a httpClient class which is an AsyncTask (otherwise I get an exception due to having a connection in my main thread).I tried something like this: private class execHttpAsync extends AsyncTask <String, String, HttpResponse> { public String resultString; @Override protected HttpResponse doInBackground(String... params) { String url = params[0]; HttpClient httpClient = new DefaultHttpClient(); HttpGet request = new HttpGet(url); request.setHeader("Content-Type", "text/xml"); HttpResponse response; try { response = httpClient.execute(request); } catch (ClientProtocolException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } return response; } @Override protected void onPostExecute(HttpResponse result) { StringBuffer returned = new StringBuffer(); InputStream content = result.getEntity().getContent(); BufferedReader rd = new BufferedReader(new InputStreamReader(content, "UTF-8")); String line; while ((line = rd.readLine()) != null) { String endOfLine = ""; returned.append(line + endOfLine); } content.close(); String retVal = returned.toString(); try { String header = retVal.substring(0, 1); if (!header.equals("<")) { retVal = retVal.replace(header, ""); } } catch (Exception e) { // TODO: handle exception } resultString = returned.toString(); } } But I need to get the response in the end. I have tried to instantiate this class and then get the response as a member but that didnt work out.Any suggestions ? Now provide the response and nothing else.
try this way private class DownloadWebPageTask extends AsyncTask<String, Void, String> { @Override protected String doInBackground(String... urls) { String response = ""; for (String url : urls) { DefaultHttpClient client = new DefaultHttpClient(); HttpGet httpGet = new HttpGet(url); try { HttpResponse execute = client.execute(httpGet); InputStream content = execute.getEntity().getContent(); BufferedReader buffer = new BufferedReader(new InputStreamReader(content)); String s = ""; while ((s = buffer.readLine()) != null) { response += s; } } catch (Exception e) { e.printStackTrace(); } } return response; } @Override protected void onPostExecute(String result) { textView.setText(result); } }
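And a short usage sketch from an Activity, after textView has been initialized; the URL here is just a placeholder:

// Kick off the request; doInBackground() runs off the main thread,
// and onPostExecute() puts the result into textView on the UI thread.
new DownloadWebPageTask().execute("http://example.com/service.xml");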
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13101178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/721577/']}
jdg_74311
stackexchange
llm_judgeable_groundtruth_similarity
56910037
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have three arrays containing different information. The first contains a list of objects, the second contains a list of the object IDs (in corresponding order with the first list), the last contains a list of the object's parent node ID (in order too). The parent node ID of every object corresponds with another objects ID from all of the lists. But if the object doesn't have a parent node then it is a base node (parent id = -1, all other ids count up positively). The objects themselves contain an array that can hold other objects, I need to find a way to insert the children into their parent nodes. This problem can become really complicated because the children must be inserted from top to bottom or the tree will break, meaning the base nodes have to be found first, then their children inserted, then those children inserted, etc. etc. From looking at what you just read, you may think, recursion!! NOPE I tried this but javascript has limitations as to how much recursion can be done, when their are too many objects to be sorted, an exceeded stack error occurs. For those interested in how I was solving it before, here is my code: class Tree{ constructor (indices, sorting_information){ this.base_nodes = []; for (var i = 0; i < sorting_information.length; i++){ if (!(indices.includes(sorting_information[i]))){ var relevant_info = { location: i, id_num: indices[i], }; var base_node = new Node(relevant_info, indices, sorting_information); this.base_nodes.push(base_node); } } }}class Node { constructor (id_info, indices, sorting_information){ this.location = id_info.location; this.id = id_info.id_num; this.nodes = []; if (!(this.id == -1)){ for (var i = 0; i < sorting_information.length; i++){ if (sorting_information[i] == this.id){ var relevant_info = { location: i, id_num: indices[i], }; var node = new Node(relevant_info, indices, sorting_information); this.nodes.push(node); } } } }} For this code to work, only the last two lists are needed as it creates a skeletal structure that can be used to organize the first list. A Tree instance is created with the first and second parameters of the constructor being the first and second arrays respectively. The constructor finds the base nodes then when the base nodes are instantiated, they find their children, etc. etc. Expected input: var nodes = [node_0, node_1, node_2, node_3, node_4, node_5, node_6];var indices = [0, 1, 2, 3, 4, 5, 6]var sorting_information = [1, -1, 1, 0, -1, 4, 4] Expected output: var base_nodes = [node_1, node_4]; node_1.children = [node_0, node_2];node_4.children = [node_5, node_6];node_0.children = [node_3]; Please note that the data tree will not necessarily take up a binary form. I have looked at this problem from many angles but keep getting a headache any help would be appreciated Now provide the response and nothing else.
useEffect() will let you create side effects on your components based on the dependencies you send to it. function Example() { const [count, setCount] = React.useState(0); React.useEffect(() => { document.title = `You clicked ${count} times`; }, [count]); return ( <div> <p>You clicked {count} times</p> <button onClick={() => setCount(count + 1)}> Click me </button> </div> );}ReactDOM.render(<Example />, document.getElementById('root')) <script src="https://unpkg.com/[email protected]/umd/react.development.js"></script><script src="https://unpkg.com/[email protected]/umd/react-dom.development.js"></script><div id="root"></div> The example above is taken from the documentation of React . You can see that each time you click the button it will trigger an update on the count field (using setCount()) and then, the effect that depends on the count variable will trigger an update on the title of the page. useCallback() will return a memoized callback. Normally, if you have a child component that receives a function prop, at each re-render of the parent component, this function will be re-executed; by using useCallback() you ensure that this function is only re-executed when any value on it's dependency array changes. function ExampleChild({ callbackFunction }) { const [value, setValue] = React.useState(0); React.useEffect(() => { setValue(value + 1) }, [callbackFunction]); return (<p>Child: {value}</p>);}function ExampleParent() { const [count, setCount] = React.useState(0); const [another, setAnother] = React.useState(0); const countCallback = React.useCallback(() => { return count; }, [count]); return ( <div> <ExampleChild callbackFunction={countCallback} /> <button onClick={() => setCount(count + 1)}> Change callback </button> <button onClick={() => setAnother(another + 1)}> Do not change callback </button> </div> )}ReactDOM.render(<ExampleParent />, document.getElementById('root')); <script src="https://unpkg.com/[email protected]/umd/react.development.js"></script><script src="https://unpkg.com/[email protected]/umd/react-dom.development.js"></script><div id="root"></div> useMemo() will return a memoized value that is the result of the passed parameter. It means that useMemo() will make the calculation for some parameter once and it will then return the same result for the same parameter from a cache. This is very useful when you need to process a huge amount of data. function ExampleChild({ value }) { const [childValue, setChildValue] = React.useState(0); React.useEffect(() => { setChildValue(childValue + 1); }, [value]) return <p>Child value: {childValue}</p>;}function ExampleParent() { const [value, setValue] = React.useState(0); const heavyProcessing = () => { // Do some heavy processing with the parameter console.log(`Cached memo: ${value}`); return value; }; const memoizedResult = React.useMemo(heavyProcessing, [value]); return ( <div> <ExampleChild value={memoizedResult} /> <button onClick={() => setValue(value + 1)}> Change memo </button> </div> )}ReactDOM.render(<ExampleParent />, document.getElementById('root')); <script src="https://unpkg.com/[email protected]/umd/react.development.js"></script><script src="https://unpkg.com/[email protected]/umd/react-dom.development.js"></script><div id="root"></div>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56910037', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11073400/']}
jdg_74312
stackexchange
llm_judgeable_groundtruth_similarity
14413573
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have SQL Server 2008 R2. I have around 150 tables in a database, and for each table I have recently created triggers. It is working fine in my local environment. Now I want to deploy them to my live environment. The catch is that I want to deploy only the triggers. I tried the Generate Scripts wizard, but it creates a script with the table schema along with the triggers, NOT the triggers only. Is there any way to generate a drop-and-create script for all the triggers only? Now provide the response and nothing else.
Forget the wizard. I think you have to get your hands dirty with code. Script below prints all triggers code and stores it into table. Just copy the script's print output or get it from #triggerFullText. USE YourDatabaseNameGOSET NOCOUNT ON;CREATE TABLE #triggerFullText ([TriggerName] VARCHAR(500), [Text] VARCHAR(MAX))CREATE TABLE #triggerLines ([Text] VARCHAR(MAX))DECLARE @triggerName VARCHAR(500)DECLARE @fullText VARCHAR(MAX)SELECT @triggerName = MIN(name)FROM sys.triggersWHILE @triggerName IS NOT NULLBEGIN INSERT INTO #triggerLines EXEC sp_helptext @triggerName --sp_helptext gives us one row per trigger line --here we join lines into one variable SELECT @fullText = ISNULL(@fullText, '') + CHAR(10) + [TEXT] FROM #triggerLines --adding "GO" for ease of copy paste execution SET @fullText = @fullText + CHAR(10) + 'GO' + CHAR(10) PRINT @fullText --accumulating result for future manipulations INSERT INTO #triggerFullText([TriggerName], [Text]) VALUES(@triggerName, @fullText) --iterating over next trigger SELECT @triggerName = MIN(name) FROM sys.triggers WHERE name > @triggerName SET @fullText = NULL TRUNCATE TABLE #triggerLinesENDDROP TABLE #triggerFullTextDROP TABLE #triggerLines
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14413573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1228266/']}
jdg_74313
stackexchange
llm_judgeable_groundtruth_similarity
24986504
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know many ways how to find a substring: from start index to end index, between characters etc., but I have a problem which I don't know how to solve:I have a string like for example a path: folder1/folder2/folder3/new_folder/image.jpg and the second path: folder1/folder2/folder3/folder4/image2.png And from this paths I want to take only the last parts: image.jpg and image2.png . How can I take a substring if I don't know when it starts (I don't know the index, but I can suppose that it will be after last / character), if many times one character repeats ( / ) and the extensions are different ( .jpg and .png and even other)? Now provide the response and nothing else.
Use os.path.basename() instead and not worry about the details. os.path.basename() returns the filename portion of your path: >>> import os.path>>> os.path.basename('folder1/folder2/folder3/new_folder/image.jpg')'image.jpg' For a more generic string splitting problem, you can use str.rpartition() to split a string on a given character sequence counting from the end : >>> 'foo:bar:baz'.rpartition(':')('foo:bar', ':', 'baz')>>> 'foo:bar:baz'.rpartition(':')[-1]'baz' and with str.rsplit() you can split multiple times up to a limit, again from the end: >>> 'foo:bar:baz:spam:eggs'.rsplit(':', 3)['foo:bar', 'baz', 'spam', 'eggs'] Last but not least, you could use str.rfind() to find just the index of a substring, searching from the end: >>> 'foo:bar:baz'.rfind(':')7
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24986504', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1964489/']}
jdg_74314
stackexchange
llm_judgeable_groundtruth_similarity
6638517
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I make the background transparent on my form? Is it possible in C#? Thanks in advance! Now provide the response and nothing else.
You can set the BackColor of your form to an uncommon color (say Color.Magenta ) then set the form's TransparencyKey property to the same color. Then, set the FormBorderStyle to None . Of course, that's just the quick and easy solution. The edges of controls are ugly, you have to keep changing the background color of new controls you add (if they're Buttons or something like that) and a whole host of other problems. It really depends what you want to achieve. What is it? If you want to make a widget sort of thing, there are much better ways. If you need rounded corners or a custom background, there are much better ways. So please provide some more information if TransparencyKey isn't quite what you had in mind.
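A minimal sketch of the quick-and-easy version described above, to be placed for example in the form's constructor after InitializeComponent(); the class name Form1 and the choice of Magenta are just placeholders for "your form" and "an uncommon color":

public Form1()
{
    InitializeComponent();
    // Any region painted in this color becomes transparent (and click-through).
    this.BackColor = Color.Magenta;
    this.TransparencyKey = Color.Magenta;
    this.FormBorderStyle = FormBorderStyle.None;
}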
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6638517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/447979/']}
jdg_74315
stackexchange
llm_judgeable_groundtruth_similarity
1271367
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to improve my C++ by creating a program that will take a large amount of numbers between 1 and 10^6. The buckets that will store the numbers in each pass is an array of nodes (where node is a struct I created containing a value and a next node attribute). After sorting the numbers into buckets according to the least significant value, I have the end of one bucket point to the beginning of another bucket (so that I can quickly get the numbers being stored without disrupting the order). My code has no errors (either compile or runtime), but I've hit a wall regarding how I am going to solve the remaining 6 iterations (since I know the range of numbers). The problem that I'm having is that initially the numbers were supplied to the radixSort function in the form of a int array. After the first iteration of the sorting, the numbers are now stored in the array of structs. Is there any way that I could rework my code so that I have just one for loop for the 7 iterations, or will I need one for loop that will run once, and another loop below it that will run 6 times before returning the completely sorted list? #include <iostream>#include <math.h>using namespace std;struct node{ int value; node *next; };//The 10 buckets to store the intermediary results of every sortnode *bucket[10];//This serves as the array of pointers to the front of every linked listnode *ptr[10];//This serves as the array of pointer to the end of every linked listnode *end[10];node *linkedpointer;node *item;node *temp;void append(int value, int n){ node *temp; item=new node; item->value=value; item->next=NULL; end[n]=item; if(bucket[n]->next==NULL) { cout << "Bucket " << n << " is empty" <<endl; bucket[n]->next=item; ptr[n]=item; } else { cout << "Bucket " << n << " is not empty" <<endl; temp=bucket[n]; while(temp->next!=NULL){ temp=temp->next; } temp->next=item; }}bool isBucketEmpty(int n){ if(bucket[n]->next!=NULL) return false; else return true;}//print the contents of all buckets in ordervoid printBucket(){ temp=bucket[0]->next; int i=0; while(i<10){ if(temp==NULL){ i++; temp=bucket[i]->next; } else break; } linkedpointer=temp; while(temp!=NULL){ cout << temp->value <<endl; temp=temp->next; }}void radixSort(int *list, int length){ int i,j,k,l; int x; for(i=0;i<10;i++){ bucket[i]=new node; ptr[i]=new node; ptr[i]->next=NULL; end[i]=new node; } linkedpointer=new node; //Perform radix sort for(i=0;i<1;i++){ for(j=0;j<length;j++){ x=(int)(*(list+j)/pow(10,i))%10; append(*(list+j),x); printBucket(x); }//End of insertion loop k=0,l=1; //Linking loop: Link end of one linked list to the front of another for(j=0;j<9;j++){ if(isBucketEmpty(k)) k++; if(isBucketEmpty(l) && l!=9) l++; if(!isBucketEmpty(k) && !isBucketEmpty(l)){ end[k]->next=ptr[l]; k++; if(l!=9) l++; } }//End of linking for loop cout << "Print results" <<endl; printBucket(); for(j=0;j<10;j++) bucket[i]->next=NULL; cout << "End of iteration" <<endl; }//End of radix sort loop}int main(){ int testcases,i,input; cin >> testcases; int list[testcases]; int *ptr=&list[0]; for(i=0;i<testcases;i++){ cin>>list[i]; } radixSort(ptr,testcases); return 0;} Now provide the response and nothing else.
I think you're severely overcomplicating your solution. You can implement radix using the single array received in the input, with the buckets in each step represented by an array of indices that mark the starting index of each bucket in the input array. In fact, you could even do it recursively: // Sort 'size' number of integers starting at 'input' according to the 'digit'th digit// For the parameter 'digit', 0 denotes the least significant digit and increases as significance doesvoid radixSort(int* input, int size, int digit){ if (size == 0) return; int[10] buckets; // assuming decimal numbers // Sort the array in place while keeping track of bucket starting indices. // If bucket[i] is meant to be empty (no numbers with i at the specified digit), // then let bucket[i+1] = bucket[i] for (int i = 0; i < 10; ++i) { radixSort(input + buckets[i], buckets[i+1] - buckets[i], digit+1); }} Of course buckets[i+1] - buckets[i] will cause a buffer overflow when i is 9, but I omitted the extra check or readability's sake; I trust you know how to handle that. With that, you just have to call radixSort(testcases, sizeof(testcases) / sizeof(testcases[0]), 0) and your array should be sorted.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1271367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155726/']}
jdg_74316
stackexchange
llm_judgeable_groundtruth_similarity
33124930
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using this to get permission: if (ContextCompat.checkSelfPermission(context, Manifest.permission.GET_ACCOUNTS) != PackageManager.PERMISSION_GRANTED) { // Should we show an explanation? if (ActivityCompat.shouldShowRequestPermissionRationale(context, Manifest.permission.GET_ACCOUNTS)) { } else { // No explanation needed, we can request the permission. ActivityCompat.requestPermissions(context, new String[]{Manifest.permission.GET_ACCOUNTS}, PERMISSIONS_REQUEST_GET_ACCOUNTS); // MY_PERMISSIONS_REQUEST_READ_CONTACTS is an // app-defined int constant. The callback method gets the // result of the request. }} But the pop up dialog for permission asks user for access Contacts!?!? In pre 6.0 in Play Store with <uses-permission android:name="android.permission.GET_ACCOUNTS"/> request is named Identity and explains I need it to get device account. Now provide the response and nothing else.
That is because of Permission Groups. Basically, permissions are placed under different groups and all permissions from that group would be granted if one of them is granted. Eg. Under "Contacts" , there is write/read contacts and get accounts, so when you ask for any of those, the popup asks for Contacts permissions. Read through: Everything every Android Developer must know about new Android's Runtime Permission EDIT 1 Just thought i'l add the related(not to get accounts but permissions and groups) Oreo update info: source: https://developer.android.com/about/versions/oreo/android-8.0-changes.html#rmp Prior to Android 8.0 (API level 26), if an app requested a permission at runtime and the permission was granted, the system also incorrectly granted the app the rest of the permissions that belonged to the same permission group, and that were registered in the manifest. For apps targeting Android 8.0, this behavior has been corrected. The app is granted only the permissions it has explicitly requested. However, once the user grants a permission to the app, all subsequent requests for permissions in that permission group are automatically granted.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33124930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3206119/']}
jdg_74317
stackexchange
llm_judgeable_groundtruth_similarity
16380681
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently working on the Go Lang tutorial, but ran into problem with one of the exercises: https://tour.golang.org/methods/23 The exercise has me implement a ROT13 cipher. I decided to implement the cipher using a map from a byte to its rotated value but I'm not sure of the best way to initialize this map. I don't want to initialize the map using a literal, but would prefer to do it programmatically by looping through an alphabet and setting (key, value) pairs within the loop. I would also like the map to only be accessible from Rot13Reader struct/object and have all instances(?) share the same map (rather than one copy per Rot13Reader). Here's my current working Go program: package mainimport ( "io" "os" "strings")type rot13Reader struct { r io.Reader}var rot13Map = map[byte]byte{}func (rotr *rot13Reader) Read(p []byte) (int, error) { n, err := rotr.r.Read(p) for i := 0; i < n; i++ { if sub := rot13Map[p[i]]; sub != byte(0) { p[i] = sub } } return n, err}func main() { func() { var uppers = []byte("ABCDEFGHIJKLMNOPQRSTUVWXYZ") var lowers = []byte("abcdefghijklmnopqrstuvwxyz") var init = func (alphabet []byte) { for i, char := range alphabet { rot13_i := (i + 13) % 26 rot13Map[char] = alphabet[rot13_i] } } init(uppers) init(lowers) }() s := strings.NewReader("Lbh penpxrq gur pbqr!") r := rot13Reader{s} io.Copy(os.Stdout, &r)} Here are the problems I have with this: I don't want to have to prepare rot13Map in main() I don't want rot13Map to be in global scope. I don't want each copy of a rot13Reader to have a separate rot13Map Is there a way to achieve what I want in Go? Now provide the response and nothing else.
In order to do this, I would make a rot13 package. You can programmatically create the map in an init() function and provide it as a package level global to all your rot13 decoders. The init function runs when your package is imported. Because Rot13Reader is the only type in the package, it is the only one able to access your map. WARNING: All code untested. package rot13import ( "io")var rot13Map = map[byte]byte{}func init() { var uppers = []byte("ABCDEFGHIJKLMNOPQRSTUVWXYZ") var lowers = []byte("abcdefghijklmnopqrstuvwxyz") var init = func(alphabet []byte) { for i, char := range alphabet { rot13_i := (i + 13) % 26 rot13Map[char] = alphabet[rot13_i] } } init(uppers) init(lowers)}type Reader struct { r io.Reader}func (rotr Reader) Read(p []byte) (int, error) { n, err := rotr.r.Read(p) for i := 0; i < n; i++ { if sub := rot13Map[p[i]]; sub != byte(0) { p[i] = sub } } return n, err} Obviously, you can't make another package in the go tour. You are stuck with rot13Map being accessible by main. You will need to run Go locally to get the separation you want.
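A usage sketch from another package follows. Note that because the r field above is unexported, this assumes you also add a small exported constructor to the rot13 package, e.g. func NewReader(r io.Reader) Reader { return Reader{r: r} }; the import path is hypothetical:

package main

import (
	"io"
	"os"
	"strings"

	"yourmodule/rot13" // hypothetical import path
)

func main() {
	s := strings.NewReader("Lbh penpxrq gur pbqr!")
	// Reader has a value-receiver Read method, so it satisfies io.Reader.
	io.Copy(os.Stdout, rot13.NewReader(s))
}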
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16380681', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2171949/']}
jdg_74318
stackexchange
llm_judgeable_groundtruth_similarity
3332081
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $n$ is composite, prove that $(n-1)!+1$ is not a power of $n$ . Hint: We know that if $n$ is composite and $n>4$ then $(n-1)!$ is divisible by $n$ . My Solution: Since $n=4$ is the first composite number, we have $(4-1)!+1=7$ , and clearly 4 does not divide 7. Also, we know that $(n-1)! \equiv 0 \pmod{n}$ (for $n>4$ and $n$ composite), and $1 \equiv 1 \pmod{n}$ . Adding these two congruences we get $(n-1)!+1 \equiv 1 \pmod{n}$ . Hence it is clear that $(n-1)!+1$ is not a power of $n$ . Please correct me if there is any discrepancy in the proof writing or the solution. It would also be highly appreciated if someone could provide any other solution (using modular arithmetic or using Wilson's Theorem). Thanks in advance. Now provide the response and nothing else.
Using Wilson's theorem, we see that if $n$ is composite, then $(n-1)!\not\equiv -1\mod{n}$ , so that in fact $(n-1)!+1$ is not even divisible by $n$ , much less a power of $n$ .
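Spelling the congruences out (a small elaboration of the same argument): for an integer $n>1$, Wilson's theorem and its converse say
$$n \text{ is prime} \iff (n-1)! \equiv -1 \pmod{n},$$
so for composite $n$ we have $(n-1)! \not\equiv -1 \pmod{n}$, i.e. $n \nmid (n-1)!+1$. But if $(n-1)!+1 = n^k$ with $k \ge 1$, then $n \mid (n-1)!+1$, a contradiction; and $k=0$ is impossible since $(n-1)!+1 \ge 2$. (For composite $n>4$ one even has $(n-1)! \equiv 0 \pmod{n}$, which matches your computation, and the case $n=4$ gives $3!+1=7$, not a power of $4$.)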
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3332081', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/697936/']}
jdg_74319
stackexchange
llm_judgeable_groundtruth_similarity
1685714
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f(z)=\sum_{n} a_n z^n$ have radius of convergence $R>0$ and let $0<r<R$. Show: $$\frac{1}{2\pi} \int_0^{2\pi} |f(re^{it})|^2 dt= \sum_{n}|a_n|^2r^{2n}$$ Use this equality to prove Liouville's Theorem. Can anyone give me a hint on how to proceed with this? Now provide the response and nothing else.
Note that by multiplying out and squaring the series for $f$, and using the relation $|w|^2=w\bar{w}$, we see that $$\frac{1}{2\pi} \int_0^{2\pi} |f(re^{it})|^2 dt = \frac{1}{2\pi} \int_0^{2\pi} \big(\sum_{j=0}^{\infty} c_jr^je^{ijt} \big)\big(\sum_{k=0}^{\infty} \bar{c_k} r^k e^{-ikt} \big) dt $$$$=\frac{1}{2\pi} \sum_{j,k=0}^{\infty}c_j\bar{c_k} r^{j+k} \int_0^{2\pi} e^{i(j-k)t} dt $$and in this last expression note that the integral is zero unless $j=k$. I was purposefully informal with my infinite sums (there should be some limits in there). For the second part of your question, suppose $f$ is a bounded entire function, say $|f| \leq M$. Then the left-hand side of the above equation is bounded above by $M^2$ for all $r>0$. Now, for contradiction, assume that some $c_n \neq 0$ with $n \geq 1$. Then the right-hand side is greater than or equal to $|c_n|^2r^{2n}$, which approaches $\infty$ as $r \to \infty$. Do you see a contradiction?
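One way to finish the argument explicitly, spelling out what is hinted at above: for every $r>0$ and any fixed $n \ge 1$ with $c_n \neq 0$,
$$M^2 \ \ge\ \frac{1}{2\pi}\int_0^{2\pi} |f(re^{it})|^2\,dt \ =\ \sum_{k\ge 0} |c_k|^2 r^{2k} \ \ge\ |c_n|^2 r^{2n},$$
and the right-hand side tends to $\infty$ as $r \to \infty$, which is impossible. Hence $c_n = 0$ for all $n \ge 1$, so $f(z) = c_0$ is constant; that is Liouville's theorem.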
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1685714', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/266593/']}
jdg_74320
stackexchange
llm_judgeable_groundtruth_similarity
43282798
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The @BeforeAll annotation marks a method to run before all tests in a class . http://junit.org/junit5/docs/current/user-guide/#writing-tests-annotations But is there a way to run some code before all tests, in all classes? I want to ensure that tests use a certain set of database connections, and the global one-time setup of these connections must occur before running any tests. Now provide the response and nothing else.
This is now possible in JUnit5 by creating a custom Extension, from which you can register a shutdown hook on the root test-context. Your extension would look like this; import org.junit.jupiter.api.extension.BeforeAllCallback;import org.junit.jupiter.api.extension.ExtensionContext;import static org.junit.jupiter.api.extension.ExtensionContext.Namespace.GLOBAL;public class YourExtension implements BeforeAllCallback, ExtensionContext.Store.CloseableResource { private static boolean started = false; @Override public void beforeAll(ExtensionContext context) { if (!started) { started = true; // Your "before all tests" startup logic goes here // The following line registers a callback hook when the root test context is shut down context.getRoot().getStore(GLOBAL).put("any unique name", this); } } @Override public void close() { // Your "after all tests" logic goes here }} Then, any tests classes where you need this executed at least once, can be annotated with: @ExtendWith({YourExtension.class}) When you use this extension on multiple classes, the startup and shutdown logic will only be invoked once.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/43282798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/327572/']}
jdg_74321
stackexchange
llm_judgeable_groundtruth_similarity
10340684
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Say I have the following table called fruits :
id | type   | name
------------------
0  | apple  | fuji
1  | apple  | mac
2  | orange | navel
My goal is to ultimately come up with a count of the different types and a comma-delimited list of the names :
apple, 2, "fuji,mac"
orange, 1, "navel"
This can be easily done with GROUP_CONCAT in MySQL but I'm having trouble with the Django equivalent. This is what I have so far but I am missing the GROUP_CONCAT stuff: query_set = Fruits.objects.values('type').annotate(count=Count('type')).order_by('-count') I would like to avoid using raw SQL queries if possible. Any help would be greatly appreciated! Thanks! =) Now provide the response and nothing else.
You can create your own Aggregate Function ( doc ) from django.db.models import Aggregateclass Concat(Aggregate): function = 'GROUP_CONCAT' template = '%(function)s(%(distinct)s%(expressions)s)' def __init__(self, expression, distinct=False, **extra): super(Concat, self).__init__( expression, distinct='DISTINCT ' if distinct else '', output_field=CharField(), **extra) and use it simply as: query_set = Fruits.objects.values('type').annotate(count=Count('type'), name = Concat('name')).order_by('-count') I am using django 1.8 and mysql 4.0.3
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10340684', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/145536/']}
jdg_74322
stackexchange
llm_judgeable_groundtruth_similarity
1082192
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a macro in C++ that declares a variable and assigns some value to it. Depending on how the macro is used, a second occurrence of the macro can override the value of the first variable. For instance: #define MY_MACRO int my_variable_[random-number-here] = getCurrentTime(); The other motivation for this is to avoid picking a particular name for the variable, so that it won't end up being the same as a name eventually chosen by the developer using the macro. Is there a way to generate random variable names inside a macro in C++? -- Edit -- I mean unique but also random, since I can use my macro twice in a block, and in that case it would generate something like: int unique_variable_name;...int unique_variable_name; In this case, for both to be unique, both variable names have to be randomly generated. Now provide the response and nothing else.
Try the following:

#include <iostream>

// One level of macro indirection is required in order to resolve __COUNTER__,
// and get varname1 instead of varname__COUNTER__.
#define CONCAT(a, b) CONCAT_INNER(a, b)
#define CONCAT_INNER(a, b) a ## b
#define UNIQUE_NAME(base) CONCAT(base, __COUNTER__)

int main() {
    int UNIQUE_NAME(foo) = 123;  // int foo0 = 123;
    std::cout << foo0;           // prints "123"
}

__COUNTER__ may have portability issues. If this is a problem, you can use __LINE__ instead; as long as you aren't calling the macro more than once per line or sharing the names across compilation units, you will be just fine.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1082192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/89112/']}
jdg_74323
stackexchange
llm_judgeable_groundtruth_similarity
92081
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What effect does sample size have on adjusted R squared values? Now provide the response and nothing else.
Adjusted r-squared is intended to be an unbiased estimate of population variance explained using the population regression equation. There are several different formulas for adjusted r-squared and there are various definitions of population variance explained (e.g., fixed versus random-x assumptions). Most commonly, statistical software will report the Ezekiel formula, which makes the fixed-x assumption.

In general, as sample size increases:

- the difference between expected adjusted r-squared and expected r-squared approaches zero; in theory this is because expected r-squared becomes less biased;
- the standard error of adjusted r-squared gets smaller, approaching zero in the limit.

So the main take-home message is that if you are interested in population variance explained, then adjusted r-squared is always a better option than r-squared. That said, as your sample size gets very large, r-squared won't be that biased (note that for models with large numbers of predictors, sample size needs to be even bigger for r-squared to approach being unbiased).
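As a rough illustration of the bias point, here is a minimal simulation sketch in Python (not part of the original answer; the sample sizes, the number of noise predictors, and the use of plain least squares are assumptions chosen just for the example). It compares r-squared and Ezekiel-adjusted r-squared when the predictors are pure noise, so the true population variance explained is zero:

import numpy as np

rng = np.random.default_rng(0)

def r2_and_adjusted_r2(n, p, n_sims=2000):
    """Average R^2 and adjusted R^2 over simulated datasets where y is independent of X."""
    r2s, adj_r2s = [], []
    for _ in range(n_sims):
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)                      # no true relationship
        X1 = np.column_stack([np.ones(n), X])       # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        ss_res = np.sum(resid ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1 - ss_res / ss_tot
        adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # Ezekiel formula
        r2s.append(r2)
        adj_r2s.append(adj)
    return np.mean(r2s), np.mean(adj_r2s)

# n values and p=5 are arbitrary choices for the illustration
for n in (20, 50, 200, 1000):
    r2, adj = r2_and_adjusted_r2(n, p=5)
    print(f"n={n:5d}  mean R^2={r2:.3f}  mean adjusted R^2={adj:.3f}")

With noise predictors, mean r-squared sits noticeably above zero for small n while mean adjusted r-squared stays near zero, and the gap between them shrinks as n grows, which is the pattern described above.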
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/92081', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/42855/']}
jdg_74324
stackexchange
llm_judgeable_groundtruth_similarity
11271704
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to be able to click hold my mouse inside a div and move it's background. Searched a lot on google and didn't found what I wanted. Here's the target (the map displayed is the object to drag) : http://pontografico.net/pvt/gamemap/ Any tips? Cheers! Now provide the response and nothing else.
Alright, got this to work; I think I got all the kinks out.

Final jQuery with Bounding Limits:

$(document).ready(function(){
    var $bg = $('.bg-img'),
        elbounds = {
            w: parseInt($bg.width()),
            h: parseInt($bg.height())
        },
        bounds = {w: 2350 - elbounds.w, h: 1750 - elbounds.h},
        origin = {x: 0, y: 0},
        start = {x: 0, y: 0},
        movecontinue = false;

    function move (e){
        var inbounds = {x: false, y: false},
            offset = {
                x: start.x - (origin.x - e.clientX),
                y: start.y - (origin.y - e.clientY)
            };

        inbounds.x = offset.x < 0 && (offset.x * -1) < bounds.w;
        inbounds.y = offset.y < 0 && (offset.y * -1) < bounds.h;

        if (movecontinue && inbounds.x && inbounds.y) {
            start.x = offset.x;
            start.y = offset.y;
            $(this).css('background-position', start.x + 'px ' + start.y + 'px');
        }

        origin.x = e.clientX;
        origin.y = e.clientY;

        e.stopPropagation();
        return false;
    }

    function handle (e){
        movecontinue = false;
        $bg.unbind('mousemove', move);

        if (e.type == 'mousedown') {
            origin.x = e.clientX;
            origin.y = e.clientY;
            movecontinue = true;
            $bg.bind('mousemove', move);
        } else {
            $(document.body).focus();
        }

        e.stopPropagation();
        return false;
    }

    function reset (){
        start = {x: 0, y: 0};
        $(this).css('backgroundPosition', '0 0');
    }

    $bg.bind('mousedown mouseup mouseleave', handle);
    $bg.bind('dblclick', reset);
});

http://jsfiddle.net/userdude/q6r8f/4/

Original Answer

HTML:

<div class="bg-img"></div>

CSS:

div.bg-img {
    background-image: url(http://upload.wikimedia.org/wikipedia/commons/9/91/Flexopecten_ponticus_2008_G1.jpg);
    background-position: 0 0;
    background-repeat: no-repeat;
    background-color: blue;
    border: 1px solid #aaa;
    width: 250px;
    height: 250px;
    margin: 25px auto;
}

jQuery:

$(document).ready(function(){
    var $bg = $('.bg-img'),
        origin = {x: 0, y: 0},
        start = {x: 0, y: 0},
        movecontinue = false;

    function move (e){
        var moveby = {
            x: origin.x - e.clientX,
            y: origin.y - e.clientY
        };

        if (movecontinue === true) {
            start.x = start.x - moveby.x;
            start.y = start.y - moveby.y;
            $(this).css('background-position', start.x + 'px ' + start.y + 'px');
        }

        origin.x = e.clientX;
        origin.y = e.clientY;

        e.stopPropagation();
        return false;
    }

    function handle (e){
        movecontinue = false;
        $bg.unbind('mousemove', move);

        if (e.type == 'mousedown') {
            origin.x = e.clientX;
            origin.y = e.clientY;
            movecontinue = true;
            $bg.bind('mousemove', move);
        } else {
            $(document.body).focus();
        }

        e.stopPropagation();
        return false;
    }

    function reset (){
        start = {x: 0, y: 0};
        $(this).css('backgroundPosition', '0 0');
    }

    $bg.bind('mousedown mouseup mouseleave', handle);
    $bg.bind('dblclick', reset);
});

http://jsfiddle.net/userdude/q6r8f/2/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11271704', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168177/']}
jdg_74325
stackexchange
llm_judgeable_groundtruth_similarity
5019115
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm unable to get custom parameters send to my facebook fan page tab. I'm using php and is passing like this: http://www.facebook.com/pages/ {page-name}/?sk=APP_ID&pass=1 but I'm unable to read the parameter pass Sreejith Now provide the response and nothing else.
Facebook passes in your data as part of the signed_request data. Here is how you would retrieve it using PHP:

<?php
require 'facebook.php';

$app_id = "YOUR APP ID HERE";
$app_secret = "YOUR SECRET KEY HERE";

$facebook = new Facebook(array(
    'appId'  => $app_id,
    'secret' => $app_secret,
    'cookie' => true
));

// THE MAGIC SAUCE
$signed_request = $facebook->getSignedRequest();
$page_id = $signed_request["page"]["id"];
$like_status = $signed_request["page"]["liked"];

// HERE IS A STRING OF YOUR APP DATA.
$app_data = $signed_request["app_data"];

echo '$app_data = '.$app_data;
?>

This example requires the Facebook PHP API and will write your app_data into the browser window, where you can marvel at it in all its glory.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5019115', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/353954/']}
jdg_74326
stackexchange
llm_judgeable_groundtruth_similarity
15802112
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to compare a series with an external variable that has the same x axis but different y values. To make it easier to explain the problem a sample code is provided here . The data series are series: [{data: [5, 4, 5, 7, 8, 6, 3, 6, 7, 4, 8, 9]}, {data: [4, 6, 7, 9, 5, 9, 12, 11, 9, 7, 6, 5]}] and the external variable is var extdata = [2,2,2,1,0,0,0,1,1,2,2,2]; They share the same x-axis (Jan, Feb, ... , Dec in this example). Suppose I want to display a tooltip "max" when the data point in a series for that month plus the corresponding entry in the external variable is at least 10 but retain the numerical value of the sum otherwise, so that the tooltip at March reads March:Series 1: 7Series 2: 9 and that at April reads April:Series 1: 8Series 2: Max Is it possible to do this using only the tooltip formatter? How can this be achieved if it is not? Thanks! Now provide the response and nothing else.
function mask(o, f) {
    setTimeout(function () {
        var v = f(o.value);
        if (v != o.value) {
            o.value = v;
        }
    }, 1);
}

function mphone(v) {
    var r = v.replace(/\D/g,"");
    r = r.replace(/^0/,"");
    if (r.length > 10) {
        // 11+ digits. Format as 5+4.
        r = r.replace(/^(\d\d)(\d{5})(\d{4}).*/,"(0XX$1) $2-$3");
    } else if (r.length > 5) {
        // 6..10 digits. Format as 4+4
        r = r.replace(/^(\d\d)(\d{4})(\d{0,4}).*/,"(0XX$1) $2-$3");
    } else if (r.length > 2) {
        // 3..5 digits. Add (0XX..)
        r = r.replace(/^(\d\d)(\d{0,5})/,"(0XX$1) $2");
    } else {
        // 0..2 digits. Just add (0XX
        r = r.replace(/^(\d*)/, "(0XX$1");
    }
    return r;
}

http://jsfiddle.net/BBeWN/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15802112', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1306176/']}
jdg_74327
stackexchange
llm_judgeable_groundtruth_similarity
20703321
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I implemented onConfigurationChnage(..) to read value of view height and width on orientation configuration chnage @Override public void onConfigurationChanged(Configuration newConfig) { super.onConfigurationChanged(newConfig); DisplayMetrics displaymetrics = new DisplayMetrics(); getWindowManager().getDefaultDisplay().getMetrics(displaymetrics); int height = displaymetrics.heightPixels; int width = displaymetrics.widthPixels; Log.i(TAG, "onConfigurationChanged: "+newConfig.orientation+", "+height+", "+width + ", "+mTempeFrameLayout.getWidth()); } but it always reply old value, if orientation get change it gave last orientation (probably)value. Screen width/height using displaymetrics is coming right properly but same is not working for any view. Any one have idea..how can get correct value on orientation Change? Now provide the response and nothing else.
getWidth() returns the width as the View is laid out, meaning you need to wait until it is drawn to the screen. onConfigurationChanged is called before redrawing the view for the new configuration, so I don't think you'll be able to get your new width until later.

@Override
public void onConfigurationChanged(Configuration newConfiguration) {
    super.onConfigurationChanged(newConfiguration);

    final View view = findViewById(R.id.scrollview);
    ViewTreeObserver observer = view.getViewTreeObserver();
    observer.addOnGlobalLayoutListener(new OnGlobalLayoutListener() {
        @Override
        public void onGlobalLayout() {
            Log.v(TAG, String.format("new width=%d; new height=%d", view.getWidth(), view.getHeight()));
            view.getViewTreeObserver().removeOnGlobalLayoutListener(this);
        }
    });
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20703321', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2624806/']}
jdg_74328
stackexchange
llm_judgeable_groundtruth_similarity
1757820
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For the longest time, I've been using MySQL servers to handle data (in JAVA, and in C#). But lately, I've been hearing good things about LINQ and SQL Server. I've been thinking of converting, but I don't know much about the SQL Server. Could anyone who has used SQL Server before, please define how well it is compared to MySQL server in terms of performance and usability. I've also heard that SQL Server would be better for C# as it's basically built in. Now provide the response and nothing else.
I have used SQL Server for many years on C# projects large and small, but have been using mostly MySQL for the last year on various C# (but open-source-related and startup-related) projects which were already using MySQL. I miss SQL Server!

In my experience, SQL Server is better in many ways:

- The query optimizer in SQL Server is smarter, meaning that you can often build queries and they'll produce optimal query plans. With MySQL, I find myself spending more time hand-tuning even relatively simple queries in order to produce good query plans.
- The underlying database engine in SQL Server can do a wider variety of things to boost performance. For example, all joins in MySQL are Nested Loop joins, while SQL Server can do Hash Joins or Merge Joins which can sometimes boost query performance 10x+. SQL Server can also parallelize queries which, for large data-warehouse workloads especially, can dramatically boost performance.
- The GUI tools are miles ahead. SQL Server's graphical plan query optimizer makes query optimization a snap -- you'll never want to go back to EXPLAIN EXTENDED. SQL Server 2008's graphical monitoring tools are so much easier than digging through the slow query log to figure out what's going wrong. And so on.
- As you mentioned, the .NET integration story (C#, LINQ, Entity Framework, etc.) in SQL Server is better. I use C#, Entity Framework, and LINQ with MySQL too, so it's not an either-or thing, although performance is likely to be better with SQL Server in a .NET environment because the teams work together to boost performance and make integration work better.
- SQL Server's SQL-language support is richer than MySQL's, including some very cool features (in SQL 2008 especially) like ROW_NUMBER(), GROUPING_SETS, OPTIMIZE FOR, computed columns, etc.
- Backup is many times faster, especially in SQL 2008 with compressed backups.
- There's no Oracle acquisition cloud hanging over the future of SQL Server.
- SQL Server (especially the expensive editions) comes with other goodies, like an OLAP data warehouse (SSAS), a reporting solution (SSRS), an ETL tool (SSIS), a scheduler (SQL Agent), etc. You can get similar open-source tools for free (e.g. Pentaho, BIRT, etc.) but integration tends to be better with SQL Server.

That said, there are significant drawbacks, which may or may not be deal-breakers for you:

- You're stuck using Windows Servers, with all the pluses and minuses this entails.
- SQL Server, especially the higher-end editions, is expensive! For small DBs (<4GB I think), SQL Server Express is free, though, and is nearly as full-featured as the regular SQL Server -- if you know your data is going to be small and you know your boss is a cheapskate, Express is the way to go. Also, there's a new SQL Server 2008 Web Edition which, for internet-facing web apps, should theoretically offer cheap hosting since the cost to a hoster is only $15/month per processor.
- It's not open source. Some companies and development teams are very passionate about this, for good reasons (debugging, cost, philosophy, etc.)!
- Related to the above: if you want to get a bug fixed in MySQL, and you've got the skills, you can fix it yourself. With SQL Server, there are painful bugs in query processing, optimization, etc. that persist for years -- I've spent an absurd amount of time working around some of those.
- For very simple, read-only (or non-transactional) workloads (e.g. a DB-based cache accessed from a web app) where you can get away with using MyISAM instead of InnoDB, I hear that MySQL can be significantly faster.

Caveat: I hear that MySQL 6.0 is supposed to address many of the gaps and differences above, but I admittedly haven't kept myself up to speed with how the Oracle thing, etc. will affect the schedule and/or feature set.

Re: your "C# is built-in" note: yes, you can develop stored procedures, functions, aggregates, etc. using .NET languages, but IMHO in most scenarios this is more trouble than it's worth, including because deployment is harder and DBAs are less comfortable with .NET code on their servers. The real win for a C# + .NET + Visual Studio + SQL Server combination, IMHO, is that they have been designed in parallel over the last 10 years to all work well together, so you'll get ease of use and synergy that you may not get using MySQL. That said, as I noted above, this isn't a deal-breaker or deal-maker... it's just smoother using SQL Server with the rest of the Microsoft stack.

In summary, let me be clear that, for many DB workloads, MySQL is good enough -- it works, it's stable, it's fast, it has reasonably good tools, etc. And it's affordable! :-) I would never refuse a project simply because they're using MySQL. But the comparison is like driving a Honda vs. a BMW... the Honda gets you where you want to go, but if your wallet can take it, you'll enjoy the ride a lot more with the Bimmer. :-)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1757820', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/168313/']}
jdg_74329
stackexchange
llm_judgeable_groundtruth_similarity
50146520
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to figure out why I am seeing a error ModuleNotFoundError: No module named 'urlparse' but I never call urlparse in my code. when I try to install urlparse with pip, I am seeing that this module doesn't exist. When I try to install urllib.parse with pip I see the same message with urllib.parse. No matching distribution found for urllib.parse . What am I missing here? from flask import Flask, request, redirect, url_for, session, g, flash, \render_templatefrom flask_oauth import OAuthfrom sqlalchemy import create_engine, Column, Integer, Stringfrom sqlalchemy.orm import scoped_session, sessionmakerfrom sqlalchemy.ext.declarative import declarative_base# configurationSECRET_KEY = 'development key'DEBUG = True# setup flaskapp = Flask(__name__)app.debug = DEBUGapp.secret_key = SECRET_KEYoauth = OAuth()# Use Twitter as example remote apptwitter = oauth.remote_app('twitter', base_url='https://api.twitter.com/1/', request_token_url='https://api.twitter.com/oauth/request_token', access_token_url='https://api.twitter.com/oauth/access_token', authorize_url='https://api.twitter.com/oauth/authenticate', consumer_key='', consumer_secret='')@twitter.tokengetterdef get_twitter_token(token=None): return session.get('twitter_token')@app.route('/')def index(): access_token = session.get('access_token') if access_token is None: return redirect(url_for('login')) access_token = access_token[0] return render_template('templates/index.html') if __name__ == '__main__': app.run() Now provide the response and nothing else.
The flask_oauth library doesn't support Python 3 - you'll see from the traceback:

Traceback (most recent call last):
  File "app.py", line 3, in <module>
    from flask_oauth import OAuth
  File "/Users/matthealy/virtualenvs/test/lib/python3.6/site-packages/flask_oauth.py", line 13, in <module>
    from urlparse import urljoin
ModuleNotFoundError: No module named 'urlparse'

The urlparse module's behaviour was changed in Python 3: https://docs.python.org/2/library/urlparse.html

The urlparse module is renamed to urllib.parse in Python 3.

This has been raised with the package maintainers on GitHub. The source on GitHub looks to be fixed, but the fixed version has not been pushed to PyPI. The solution suggested on GitHub is to install directly from source instead of PyPI:

pip install git+https://github.com/mitsuhiko/flask-oauth
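For reference, the usual compatibility shim for this rename looks like the sketch below (a generic illustration of the Python 2/3 difference, not code from flask_oauth itself; the example URL is just for demonstration):

try:
    from urllib.parse import urljoin   # Python 3
except ImportError:
    from urlparse import urljoin       # Python 2

print(urljoin("https://api.twitter.com/1/", "oauth/request_token"))

Libraries that still import urlparse unconditionally, as flask_oauth does in the traceback above, fail on Python 3 exactly because they lack this kind of fallback.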
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50146520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2480769/']}
jdg_74330
stackexchange
llm_judgeable_groundtruth_similarity
44714480
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have got a public share nest camera address from my friend.Instead of using a web browser for seeing the video, I want to use a VLC player to video stream. This way allows me to use many other features of VLC to do video analytics on the video.How to do it? Now provide the response and nothing else.
I was able to do this in these steps:

1. Go to the public video share URL. It should be something like this: http://video.nest.com/live/pSgnOZ0s4t
2. If you use the developer tools in Chrome and watch the network traffic, look for a URL ending in .m3u8... it will be something like this: https://stream-delta.dropcam.com/nexus_aac/37451e60aeac457f9800704f1662147e/playlist.m3u8
3. Once you get that, open that file in a text editor. You will get something like this inside the file:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=400816,CODECS="avc1.77.31,mp4a.40.2",RESOLUTION=1280x720
chunklist_w391480529.m3u8

4. The stream URL is then https://stream-delta.dropcam.com/nexus_aac/37451e60aeac457f9800704f1662047e/chunklist_w391480529.m3u8
5. Once you have this, install livestreamer to extract the video like this:

livestreamer "hls://https://stream-delta.dropcam.com/nexus_aac/37451e60aeac457f9800704f1662047e/chunklist_w391480509.m3u8" best -o nest_video.ts

This will save the file to your disk. I used this to avoid the Nest Aware subscription. Unfortunately, they charge so much for that service when someone can just save the video to disk and upload it to a cheap cloud option...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44714480', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6680821/']}
jdg_74331
stackexchange
llm_judgeable_groundtruth_similarity
4647516
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Just a short question out of curiosity. string str = "string";Console.WriteLine(str.EndsWith(string.Empty)); //trueConsole.WriteLine(str.LastIndexOf(string.Empty) == str.Length); //false//of course string are indexed from 0, //just wrote if for fun to check whether empty string get some extra index///somehow by a miracle:)//finallyConsole.WriteLine(str.LastIndexOf(string.Empty) == str.LastIndexOf('g')); //true :) Now provide the response and nothing else.
EndsWith : Determines whether the end of this string instance matches the specified string. All strings will match "" at the end... or any other part of the string. Why? Because conceptually, there are empty strings around every character. "" + "abc" + "" == "abc" == "" + "a" + "" + "b" + "" + "c" + "" Update: About your last example - this is documented on LastIndexOf : If value is String.Empty, the return value is the last index position in this instance. A related issue is the use of null as a string terminator - which happens in C and C++, but not C#. From MSDN - String Class (System) : In the .NET Framework, a String object can include embedded null characters, which count as a part of the string's length. However, in some languages such as C and C++, a null character indicates the end of a string; it is not considered a part of the string and is not counted as part of the string's length.
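The same idea is easy to poke at interactively. As an illustration only (in Python rather than the C# API discussed above, so the exact return values differ slightly from .NET's):

s = "string"

print(s.endswith(""))      # True - every string "ends with" the empty string
print(s.startswith(""))    # True
print(all(s.find("", i) == i for i in range(len(s) + 1)))  # True - "" is found at every position
print(s.rfind(""))         # 6, i.e. len(s); note that .NET's LastIndexOf(String.Empty)
                           # returns the last index position instead (5 for "string", as in the question)

The third line is exactly the "empty strings around every character" point: the empty string matches at every index, including one past the last character.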
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4647516', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/428789/']}
jdg_74332
stackexchange
llm_judgeable_groundtruth_similarity
530381
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to provide a WHERE condition on an inner query by specifying innertable.id = outertable.id. However, MySQL (5.0.45) reports "Unknown column 'outertable.id' in 'where clause'". Is this type of query possible? The inner query is pivoting rows to columns using a GROUP BY. This could be entirely be performed in the outer query, but would possibly incur additional overhead due to the extra joins. Alternatively, I can leave off the WHERE condition in the inner query and instead specify an ON outertable.id = innerquery.id, but it would then fetch the entire inner query rowset to join again the outer, which is inefficient. The actual SQL appears below: select t.ticketid, u.userid, t.fullname, u.loginapi_userid, t.email, tp.subject, tp.contents, a.PhoneNumber, a.Location, a.Extension, a.BusinessUnit, a.Departmentfrom swtickets tinner join swticketposts tp on t.ticketid = tp.ticketidinner join swusers u on t.userid = u.useridleft join ( select cfv.typeid, min(case cfv.customfieldid when 1 then cfv.fieldvalue end) as 'PhoneNumber', min(case cfv.customfieldid when 3 then cfv.fieldvalue end) as 'Location', min(case cfv.customfieldid when 5 then cfv.fieldvalue end) as 'Extension', min(case cfv.customfieldid when 8 then cfv.fieldvalue end) as 'BusinessUnit', min(case cfv.customfieldid when 9 then cfv.fieldvalue end) as 'Department' from swcustomfieldvalues cfv where cfv.typeid = t.ticketid group by cfv.typeid ) as a on 1 = 1where t.ticketid = 2458; Now provide the response and nothing else.
The answer to your question is no, it is not possible to reference correlation names as you are doing. The derived table is produced by your inner query before the outer query starts evaluating joins. So the correlation names like t, tp, and u are not available to the inner query. To solve this, I'd recommend using the same constant integer value in the inner query, and then join the derived table in the outer query using a real condition instead of 1=1.

SELECT t.ticketid, u.userid, t.fullname, u.loginapi_userid, t.email,
       tp.subject, tp.contents, a.PhoneNumber, a.Location, a.Extension,
       a.BusinessUnit, a.Department
FROM swtickets t
 INNER JOIN swticketposts tp ON (t.ticketid = tp.ticketid)
 INNER JOIN swusers u ON (t.userid = u.userid)
 LEFT OUTER JOIN (
    SELECT cfv.typeid,
      MIN(CASE cfv.customfieldid WHEN 1 THEN cfv.fieldvalue END) AS 'PhoneNumber',
      MIN(CASE cfv.customfieldid WHEN 3 THEN cfv.fieldvalue END) AS 'Location',
      MIN(CASE cfv.customfieldid WHEN 5 THEN cfv.fieldvalue END) AS 'Extension',
      MIN(CASE cfv.customfieldid WHEN 8 THEN cfv.fieldvalue END) AS 'BusinessUnit',
      MIN(CASE cfv.customfieldid WHEN 9 THEN cfv.fieldvalue END) AS 'Department'
    FROM swcustomfieldvalues cfv
    WHERE cfv.typeid = 2458
    GROUP BY cfv.typeid
 ) AS a ON (a.typeid = t.ticketid)
WHERE t.ticketid = 2458;
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/530381', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/64358/']}
jdg_74333
stackexchange
llm_judgeable_groundtruth_similarity
29487949
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an MVC project that I deploy on Azure Web-Apps. I'm trying to remove the excessive header information. The reason I'm trying to remove this information is because it's a standard security practice. ( Reference ) I'm trying to remove the below information from response headers: Server: Microsoft-IIS/8.0X-AspNet-Version: 4.0.30319X-POWERED-BY: PHP/5.4.38X-POWERED-BY: ASP.NET I have the following code in my Global.asax.cs file: protected void Application_PreSendRequestHeaders(){ Response.Headers.Remove("Server"); Response.Headers.Remove("X-AspNet-Version"); Response.Headers.Remove("X-AspNetMvc-Version");} But it's not effecting the result. Now provide the response and nothing else.
Try this instead:

protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
    HttpContext.Current.Response.Headers.Remove("Server");
    HttpContext.Current.Response.Headers.Remove("X-AspNet-Version");
    HttpContext.Current.Response.Headers.Remove("X-AspNetMvc-Version");
}

Additionally, in Application_Start, register it with the following instruction:

PreSendRequestHeaders += Application_PreSendRequestHeaders;

To remove X-AspNet-Version, in the web.config find/create and add:

<system.web>
  <httpRuntime enableVersionHeader="false" />
  ...
</system.web>

To remove X-AspNetMvc-Version, go to Global.asax, find/create the Application_Start event and add a line as follows:

protected void Application_Start()
{
    MvcHandler.DisableMvcResponseHeader = true;
}

To remove X-Powered-By, in the web.config find/create and add:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <remove name="X-Powered-By" />
    </customHeaders>
  </httpProtocol>
  ...
</system.webServer>

You should be able to force all requests to go through your managed code by adding this to your web.config:

<modules runAllManagedModulesForAllRequests="true">

Even static files and not-found resources should obey your header rules.

References:
http://www.troyhunt.com/2012/02/shhh-dont-let-your-response-headers.html
http://consultingblogs.emc.com/howardvanrooijen/archive/2009/08/25/cloaking-your-asp-net-mvc-web-application-on-iis-7.aspx
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29487949', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2463558/']}
jdg_74334
stackexchange
llm_judgeable_groundtruth_similarity
3209279
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that the two main rules are dropping low order terms and dropping constant factors. For example: $50n = O(n)$ $5n^2 + 3n + 45 = O(n^2)$ But in a textbook I found the question: Is it true that $3^n = 2^{O(n)}$ ? The answer is true but I do not understand why it is not $3^{O(n)}$ . I know you cannot just drop the base completely but why is $3$ changed to $2$ ? Now provide the response and nothing else.
Let's try to solve $3^n = 2^m$ for $m$ . First, use a log on both sides: $$n \log(3) = m \log(2).$$ Now, solve for $m$ : $$m = \frac{\log(3)}{\log(2)} n.$$ Obviously, we now have $m = O(n)$ , they only differ by a constant. Therefore, we can say $$3^n = 2^{O(n)}.$$ So yes, in some sense you can drop the base, we always have $$a^n = b^{O(n)}$$ as long as $a,b > 1$ . Note that it wasn't claimed that $$3^n = O(2^n),$$ as this would be wrong.
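A quick numeric sanity check of the same derivation (an illustrative sketch added here, not part of the original answer):

import math

c = math.log(3) / math.log(2)        # the constant hidden inside the O(n)
for n in (1, 5, 10, 20):
    print(n, 3 ** n, 2 ** (c * n))   # 2**(c*n) reproduces 3**n up to floating-point error
    print((3 / 2) ** n)              # 3**n / 2**n grows without bound, which is why 3**n is NOT O(2**n)

The first printed pair shows that 3^n really is 2 raised to a linear function of n, while the ratio on the second line shows that no constant multiple of 2^n ever dominates 3^n.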
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3209279', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/669817/']}
jdg_74335
stackexchange
llm_judgeable_groundtruth_similarity
26886
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have one question about this. I know that if we have $\mathrm{X}_1,\mathrm{X}_2,\ldots,\mathrm{X}_n$ independent and normally distributed random variables, then the sum $\mathrm{X}_1+\mathrm{X}_2+\ldots+\mathrm{X}_n$ has the normal distribution with mean $M_1+M_2+..+M_n$ and variance $\sigma^2_1 + \ldots + \sigma^2_n$. Why is in this problem the difference $W-M$ the mean obtained by subtraction and variance obtained by addition? Thank you. Now provide the response and nothing else.
Let $X,Y$ be random variables with variances $\sigma^{2}_{x}$ and $\sigma^{2}_{y}$, respectively. It is a fact that ${\rm var}(Z) = {\rm cov}(Z,Z)$ for any random variable $Z$. This can be checked using the definition of covariance and variance. So, the variance of $X-Y$ is $$ {\rm cov}(X-Y,X-Y) = {\rm cov}(X,X)+{\rm cov}(Y,Y)-2\cdot{\rm cov}(X,Y) $$ which follows from bilinearity of covariance. Therefore, $$ {\rm var}(X-Y) = \sigma^{2}_{x} + \sigma^{2}_{y} - 2\cdot{\rm cov}(X,Y) $$ when $X,Y$ are independent the covariance is 0 so this simplifies to $\sigma^{2}_{x} + \sigma^{2}_{y}$. So, the variance of the difference of two independent variables is the sum of the variances.
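As an illustrative numerical check of the additive rule (a simulation sketch, not part of the derivation above; the means, standard deviations, and sample size are arbitrary choices for the example):

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=2.0, size=1_000_000)   # variance 4
y = rng.normal(loc=3.0,  scale=5.0, size=1_000_000)   # variance 25

print(np.var(x - y))           # close to 4 + 25 = 29
print(np.var(x) + np.var(y))   # also close to 29

Because the two samples are drawn independently, the empirical covariance term is near zero and the variance of the difference matches the sum of the variances, exactly as in the formula.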
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/26886', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/8288/']}
jdg_74336
stackexchange
llm_judgeable_groundtruth_similarity
13681664
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In this example code i am using a simple StopWatch to test time that it takes to complete a given task/action StopWatch SW1 = new StopWatch();SW1.Start();SomeAction();SW1.Stop();sting results = SW1.Elapsed.ToString();MessageBox.Show(resutls); i would like to have a class that i will instantiate to use with tests public Class PerformanceTests{ public StopWatch SW1 = new StopWatch(); public StopWatch SW2 = new StopWatch(); public string results1 = "", results2 = ""; .... .... //some other variables to use } though when instantiating the class and trying to use SW1 is not letting me use its methods. What am i doing wrong ? PerformanceTests Ptst = new PerformanceTests();Ptst.SW1. ... Start() is not accessible Update For rest of answers, don't copy the code from me, as I miss capitalized stopwatch . Instead of instantiating the Stopwatch class i accidentally didn't pay attention Visual Studio asking if i want to create a class for my so called stopwatch instead of .NET's real Stopwatch . So my advice, pay attention to the suggested actions of Visual Studio intellisense even though it should be same all the time . Just make sure till you're really experienced and know all the classes by heart. Now provide the response and nothing else.
Here is a simple class that can help you measure the execution time of a code block:

public class PerformanceTester : IDisposable
{
    private Stopwatch _stopwatch = new Stopwatch();
    private Action<TimeSpan> _callback;

    public PerformanceTester()
    {
        _stopwatch.Start();
    }

    public PerformanceTester(Action<TimeSpan> callback) : this()
    {
        _callback = callback;
    }

    public static PerformanceTester Start(Action<TimeSpan> callback)
    {
        return new PerformanceTester(callback);
    }

    public void Dispose()
    {
        _stopwatch.Stop();
        if (_callback != null)
            _callback(Result);
    }

    public TimeSpan Result
    {
        get { return _stopwatch.Elapsed; }
    }
}

Usage (just wrap the code block to test in a using of PerformanceTester):

using (var tester = new PerformanceTester())
{
    // code to test
    MessageBox.Show(tester.Result.ToString());
}

If you declare the tester variable before the using block, then the stopwatch will stop automatically when you exit the using block, and the result will be available to you:

PerformanceTester tester;
using (tester = new PerformanceTester())
    SomeAction();
MessageBox.Show(tester.Result.ToString());

If you pass a callback action to PerformanceTester, then this action will be called at the end of the using statement, and the elapsed time will be passed to the callback:

using (PerformanceTester.Start(ts => MessageBox.Show(ts.ToString())))
    SomeAction();

You can declare a method which accepts a TimeSpan and processes the result:

private void ProcessResult(TimeSpan span)
{
    // log, show, etc
    MessageBox.Show(span.ToString());
}

Usage becomes very clean:

using (PerformanceTester.Start(ProcessResult))
    SomeAction();
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13681664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1329184/']}
jdg_74337
stackexchange
llm_judgeable_groundtruth_similarity
47486
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In my readings I have on several occasions encountered references to a linear algebra theorem that runs as follows: Let $g$ be a non-degenerate inner product on the real vector space $V$. Then, there exists a basis $e_1, \dots, e_n$ such that the matrix of $g$ is diagonal and whose diagonal entries are all either $-1$ or $1$ Despite having encountered this result several times, unfortunately, I have not had the luck of finding a precise reference to where this theorem is stated/proved. Can anyone provide a reference that proves this claim? Now provide the response and nothing else.
This is a really well-known result, so I am a bit surprised you can't find it. Any decent book on (general) algebra should have it. For example Lang's Algebra, Ch. XV 'Structure of Bilinear Forms'. Or Knapp's Basic Algebra, Symmetric Bilinear Forms. Or Mac Lane and Birkhoff's Algebra, Chapter on Quadratic forms. As André said, this is often cited as Sylvester's Law of Inertia, and the pair (or triple) of numbers of +1, -1 (and 0's) is called the signature. So the words sylvester law inertia signature should lead you to relevant search results. Edit: Let me add to Qiaochu's answer a proof of well-definedness of the signature. It seems this one is a bit more elegant than one finds in most books. So we know of an orthogonal decomposition $V=V_+\oplus V_-$, with our nondegenerate form $B$ positive resp. negative definite on $V_+$ resp. $V_-$. Say they have dimension $p,q$ respectively. If $W$ is any subspace of $V$ on which $B$ is positive definite, then $B$ is both positive and negative definite on $W\cap V_-$, hence $W\cap V_-=0$. This implies $\dim W\leq \dim V-q=p$. Hence $p$ is the maximal dimension of subspaces on which $B$ is positive definite; so $p$ only depends on the form $B$. (If the form is degenerate, the number of zeros is just the kernel, and $p+q$ is the rank.)
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/47486', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9450/']}
jdg_74338
stackexchange
llm_judgeable_groundtruth_similarity
20071702
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My problem below is NP-complete, however, I'm trying to find at least a marginally faster string search function or module that might help in reducing some of the computation time compared to where it is at now. Any suggestions would be appreciated. The concatenated (longest possible) superstring is: AGGAGTCCGCGTGAGGGAGGTGTAGTGTAGTGG The below code produces the shortest superstring in 16m: CCGTAGGTGGAGT import itertools as itdef main(): seqs = ['AGG', 'AGT', 'CCG', 'CGT', 'GAG', 'GGA', 'GGT', 'GTA', 'GTG', 'TAG', 'TGG'] seq_perms = [''.join(perm) for perm in it.permutations(seqs)] for i in range(0, len(''.join(seqs))): seq_perms = [''.join(perm)[:i] for perm in it.permutations(seqs)] for perm in seq_perms: if all(perm.find(seq) != -1 for seq in seqs) == True: print 'Shortest superstring containing all strings:\n{}'.format(perm) returnif __name__ == '__main__': main() Any refactoring that completes in less time on my system will be marked solved. Now provide the response and nothing else.
I applied the Dijkstra algorithm (a breadth-first search by candidate length) and have a solution giving an answer to this task in less than a second. I optimized it a bit in terms of memory usage, but I think concerning the algorithm this is a better approach than the one in the other answer. Unless we run out of memory this should be a better solution.

from collections import defaultdict

def dijkSuperstring(originalSeqs):
  paths = defaultdict(set)
  paths[0] = { '' }
  while paths:
    minLength = min(paths.keys())
    while paths[minLength]:
      candidate = paths[minLength].pop()
      seqAdded = False
      for seq in originalSeqs:
        if seq in candidate:
          continue
        seqAdded = True
        for i in reversed(range(len(seq)+1)):
          if candidate.endswith(seq[:i]):
            newCandidate = candidate + seq[i:]
            paths[len(newCandidate)].add(newCandidate)
      if not seqAdded:  # nothing added, so all present?
        return candidate
    del paths[minLength]

print dijkSuperstring(
  [ 'AGG', 'AGT', 'CCG', 'CGT', 'GAG', 'GGA',
    'GGT', 'GTA', 'GTG', 'TAG', 'TGG' ])

I also tried using random sequences as input:

import random

seqs = [ ''.join(random.choice('GATC') for i in range(3))
         for j in range(11) ]
print dijkSuperstring(seqs)

I soon found out that the solving time greatly depends on the size of the result(!), not on the input's size (so it isn't predictable). This isn't too surprising, but it makes comparing different algorithms a little difficult, as others don't necessarily also have this property. In particular, the set of sequences from the OP seems to pose a comparatively lightweight problem. Other sets of 11 sequences of 3 characters are much harder to solve. So I made some statistical measurements; I solved 1000 sets of 8 sequences. This I did for sequences of 3 and of 4 characters. Then I grouped the durations in 100 groups (equally spaced from 0s to the highest duration) and counted how many fell into each group. To smoothen the graph I always used the sum of three neighboring groups. The diagrams below each show two such experiments, performed with an earlier (non-optimized) version of my algorithm (but the shapes of the curves are the same as now); I did it twice to at least have an idea whether a strange ditch in the graph could have a reason or was just by pure chance. I'd be interested to see similar graphs for the same kind of input for other algorithms. This could be interesting because my algorithm clearly has a memory issue. Solving 11 sequences of 3 characters stalled my machine several times due to memory exhaustion, so having another algorithm could make sense even if it is slower.

8 Sequences of 3 Characters

8 Sequences of 4 Characters
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20071702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2277435/']}
jdg_74339
stackexchange
llm_judgeable_groundtruth_similarity
18124150
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to pass arguments from my activity to a fragment, embedded into the activity. Fragment is embedded statically in xml layout.I tried to call setArgument() like this: setContentView(R.layout.detail_activity);DetailFragment detailFragment = (DetailFragment) getFragmentManager().findFragmentById(R.id.detailFragment);detailFragment.setArguments(getIntent().getExtras()); but it is already too late, because setArguments has to be called immediately after fragment's creation. The only was I see it to getArguments() and the change the bundle. Any better way? Now provide the response and nothing else.
AFAIK, you can't use setArguments() like that when you embed the fragment within XML. If it's critical, you'd be better off dynamically adding the fragment instead. However, if you truly want the fragment to be embedded via XML, there are different ways you can pass along that data:

1. Have the Activity implement the fragment's event listener. Have the fragment then request the required parameters from the Activity at creation or whenever needed. (Communication with Fragment)
2. Create custom attributes that can be embedded in XML along with the fragment. Then, during the fragment's inflation process, parse the custom attributes to obtain their data. (Custom fragment attributes)
3. Create public setters in the fragment and have the activity use them directly. If it's critical to set them prior to the fragment's onCreate() method, then do it from the activity's onAttachFragment() method.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18124150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427598/']}
jdg_74340