# prohibitively

## Definitions

• WordNet 3.6
• adv prohibitively to a prohibitive degree "it is prohibitively expensive"
• ***
• Century Dictionary and Cyclopedia
• prohibitively In a prohibitive manner; with prohibition; so as to prohibit: as, prices were prohibitively high.
• ***

## Quotations

• W. C. Fields “Once, during Prohibition, I was forced to live for days on nothing but food and water.”
• Thomas L. Masson “Prohibition may be a disputed theory, but none can say that it doesn't hold water.”
• Will Rogers “Communism is like prohibition, it is a good idea, but it won't work.” “Stripped of ethical rationalizations and philosophical pretensions, a crime is anything that a group in power chooses to prohibit.”
• William F. Buckley “Idealism is fine, but as it approaches reality the cost becomes prohibitive.”
• Bernard Joseph Saurin “The law often permits what honor prohibits.”

## Usage

### In literature:

What does the clause prohibit?
"The Anti-Slavery Examiner, Part 2 of 4" by American Anti-Slavery Society

The importation of slaves was prohibited after January 1, 1808.
"The World's Greatest Books, Vol XII." by Arthur Mee

This was resisted by some of the Southern delegates, who feared that the importation of slaves might thereby be prohibited.
"Our Government: Local, State, and National: Idaho Edition" by J.A. James

And so we kept up the agitation, and demanded that the saloon should be prohibited throughout the State.
"Personal Recollections of Pardee Butler" by Pardee Butler

Through their office they are committed to prohibition.
"The Art Of The Moving Picture" by Vachel Lindsay

The question of prohibition, as we have just seen, is one of those cases; the slavery question was a still more striking one.
"The Making of Arguments" by J. H. Gardiner

Silencers are prohibited, and firearms in forests may be prohibited by the Governor during droughts.
"Our Vanishing Wild Life" by William T. Hornaday

Adultery was prohibited for men as well as for women.
"Our Legal Heritage, 5th Ed." by S. A. Reilly

Prohibitions are almost useless.
"Study of Child Life" by Marion Foster Washburne

The use of tobacco, if not prohibited, should be discouraged.
"Grappling with the Monster" by T. S. Arthur

***

### In poetry:

Z was a zealous old Zibet,
Toboggans he tried to prohibit.
If any one tried
To take a sly slide,
He ordered him hanged on a gibbet.
"An Alphabet Zoo" by Carolyn Wells

She may dream of some horrible brute,
Of some genii, or fairy-built spot;
Or perhaps the prohibited fruit,
Or perhaps of—I cannot tell what.
"To A Fly: On The Bosom of Chloe, While Sleeping" by Thomas Gent

"The Fourth prohibits trespassing
Where other Ghosts are quartered:
And those convicted of the thing
(Unless when pardoned by the King)
Must instantly be slaughtered.
"Phantasmagoria Canto II ( Hys Fyve Rules )" by Lewis Carroll

Fellow men! why should the lords try to despise
And prohibit women from having the benefit of the parliamentary Franchise?
When they pay the same taxes as you and me,
I consider they ought to have the same liberty.
"Women's Suffrage" by William Topaz McGonagall

Did we prohibit swillin’ tea clean out of common-sense
Or legislate on gossipin’ across a backyard fence?
Did we prohibit bustles—or the hoops when they was here?
The wimin never think of this—they want to stop our beer.
"Here's Luck" by Henry Lawson

### In news:

The report states that Tehran is prohibiting inspections at sites that may be housing a nuclear weapons program.
A longstanding controversy over nudity on Cape Cod National Seashore beaches has flared anew with the arrest of a woman who was sunbathing topless in a challenge to a Federal regulation prohibiting nude bathing.
As violence soars, so do voices of dissent against drug prohibition.
Beer-maker growth rate fastest since Prohibition.
Such images are prohibited in Islam.
Gov. Rick Snyder, here appearing in Grand Rapids this month, recently signed a bill prohibiting university research assistants from joining a union.
Manitoba's conservation minister has all but confirmed there will be a ban on lawn pesticides to control dandelions and other weeds, although the extent of the prohibition is still being worked out.
The first licensed whiskey maker in Albany since Prohibition will open its tasting room and officially introduce its first product with a ribbon-cutting event at 11 am Friday (10/5).
Dress Rehearsal, a steadily improving Irish-bred filly trained by Bill Mott, turned the tables on the prohibitive favorite.
Thirty-six states have statutes prohibiting use by motorists of cellular phones or other devices to send messages.
There is every reason for Kentucky to take the advice and become the 18th state to prohibit capital punishment.
A massive radio telescope prohibits any mainstream electronics from being used nearby.
Newton prohibits parking on front lawns.
It was a saloon before Prohibition, and then it was converted into a restaurant.
***

### In science:

Each evaluation of the marginal likelihood is computationally costly, making MCMC approaches prohibitive.
"The prevalence of dust on the exoplanet HD 189733b from Hubble and Spitzer observations"

This is generally prohibitive computationally, as the model must be trained anew for each data point that is excluded.
"The prevalence of dust on the exoplanet HD 189733b from Hubble and Spitzer observations"

In this technique the ionization and affinity parts are analytically decoupled from the beginning, prohibiting the logarithmic divergence of the static part of the self-energy as shown in Ref. [DSC95].
"Faddeev Random Phase Approximation applied to molecules"

Moving away from equilibrium, there is always an RPA instability at some point that prohibits the calculation of FRPA values.
"Faddeev Random Phase Approximation applied to molecules"

However, the complexity of an IPM iteration, in general, grows rapidly (as $n^3$) with the design dimension of the problem, which in numerous applications (like LPs with dense constraint matrices arising in Signal Processing) makes IPMs prohibitively time-consuming in the large-scale case.
"Solving large scale polynomial convex problems on $\ell_1$/nuclear norm balls by randomized first-order algorithms"

***
# How to use ExternalEvaluate in ResourceFunction?

Are there any working examples or best practices for writing a ResourceFunction that packages or depends on external language code (and package dependencies) through ExternalEvaluate?

Details:

• I can't think of any good way to do this well across platforms.
• The primary use cases are for Python and JavaScript.
• Specifically, my Python functions depend on both conda and pip libraries.
• Any ideas and/or working examples would be appreciated.

• Without a proper packaging system this isn't likely to work. ResourceFunction is basically a "weak" form of packaging that only works for Mathematica functions and only for some tiny subset of them anyway. You'd want a proper paclet to use other language capabilities. This is what happens with JLink and LibraryLink and all of these other, better developed hooks; they distribute their dependencies via paclets. – b3m2a1 Apr 12 '20 at 1:39
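For reference, a minimal sketch (my own illustration, not an established pattern) of the kind of Python-backed function one might try to wrap in a ResourceFunction; the packaging problem described above is precisely that nothing here ships the required Python environment along with the function:

    (* pySqrt is a hypothetical name; this assumes a working Python
       installation discoverable by ExternalEvaluate. *)
    pySqrt[x_?NumericQ] := ExternalEvaluate["Python",
      "import math; math.sqrt(" <> ToString[x] <> ")"]

    pySqrt[2]  (* -> 1.4142135623730951, if Python is configured *)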
# glBeginTransformFeedback

## Name

glBeginTransformFeedback — start transform feedback operation

## C Specification

void glBeginTransformFeedback( GLenum primitiveMode);

void glEndTransformFeedback( void);

## Parameters for glBeginTransformFeedback

primitiveMode
Specify the output type of the primitives that will be recorded into the buffer objects that are bound for transform feedback.

## Description

Transform feedback mode captures the values of varying variables written by the vertex shader. Transform feedback is said to be active after a call to glBeginTransformFeedback until a subsequent call to glEndTransformFeedback. Transform feedback commands must be paired. An implicit glResumeTransformFeedback is performed by glEndTransformFeedback if the transform feedback is paused. Transform feedback is restricted to non-indexed GL_POINTS, GL_LINES, and GL_TRIANGLES. While transform feedback is active the mode parameter to glDrawArrays must exactly match the primitiveMode specified by glBeginTransformFeedback.

## Errors

GL_INVALID_OPERATION is generated if glBeginTransformFeedback is executed while transform feedback is active.

GL_INVALID_ENUM is generated by glBeginTransformFeedback if primitiveMode is not one of GL_POINTS, GL_LINES, or GL_TRIANGLES.

GL_INVALID_OPERATION is generated if glEndTransformFeedback is executed while transform feedback is not active.

GL_INVALID_OPERATION is generated by glDrawArrays and glDrawArraysInstanced if transform feedback is active and mode does not exactly match primitiveMode.

GL_INVALID_OPERATION is generated by glDrawElements, glDrawElementsInstanced, and glDrawRangeElements if transform feedback is active and not paused.

GL_INVALID_OPERATION is generated by glBeginTransformFeedback if any binding point used in transform feedback mode does not have a buffer object bound. In interleaved mode, only the first buffer object binding point is ever written to.

GL_INVALID_OPERATION is generated by glBeginTransformFeedback if no binding points would be used, either because no program object is active or because the active program object has specified no varying variables to record.

## API Version Support

Function Name | OpenGL ES 2.0 | OpenGL ES 3.0 | OpenGL ES 3.1
--- | --- | --- | ---
glBeginTransformFeedback | - | ✔ | ✔
glEndTransformFeedback | - | ✔ | ✔
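## Example

A minimal usage sketch (illustrative, not part of the specification): capture vertices processed by the vertex shader into a buffer, assuming `program` was linked after a call to glTransformFeedbackVaryings and `tfBuffer` has already been created and sized.

    /* Bind the capture buffer to transform feedback binding point 0. */
    glUseProgram(program);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

    glBeginTransformFeedback(GL_POINTS);
    /* mode must exactly match primitiveMode while feedback is active. */
    glDrawArrays(GL_POINTS, 0, vertexCount);
    glEndTransformFeedback();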
# What does this equation mean?

I'm really bad at reading these, so if somebody explained it, I would really appreciate it. $f_i(x, y) \ge 0 \quad \forall i \in \{1, 2, 3\}$

Hi everyone, I have a friend in my homeschool group who accidentally discovered that her son can read a book better upside down than right side up. Not only do PI kids read upside down, but many actually write upside down, too. Without PI this young man was unable to read or write on even a Kindergarten level, which was very frustrating for him, his parents and his teachers. By learning to read and write upside down, what did I do to myself? There is even research specifically on reading and writing upside down and back to front.
# Tutorial: Test for etiologic heterogeneity in a case-control study

## Introduction

In epidemiologic studies, polytomous logistic regression is commonly used in the study of etiologic heterogeneity when data are from a case-control study, and the method has good statistical properties. Although polytomous logistic regression can be implemented using available software, the additional calculations needed to perform a thorough analysis of etiologic heterogeneity are cumbersome. To facilitate use of this method we provide the functions eh_test_subtype() and eh_test_marker() to address two key questions regarding etiologic heterogeneity:

1. Do risk factor effects differ according to disease subtypes?
2. Do risk factor effects differ according to individual disease markers that combine to form disease subtypes?

Whether disease subtypes are pre-specified or formed by cross-classification of individual disease markers, the resulting polytomous logistic regression model is the same. Let $$i$$ index study subjects, $$i = 1, \ldots, N$$, let $$m$$ index disease subtypes, $$m = 0, \ldots, M$$, where $$m=0$$ denotes control subjects, and let $$p$$ index risk factors, $$p = 1, \ldots, P$$. The polytomous logistic regression model is specified as

$\Pr(Y = m \mid \mathbf{X}) = \frac{\exp(\mathbf{X}^T \boldsymbol{\beta}_{\boldsymbol{\cdot} m})}{1 + \exp(\mathbf{X}^T \boldsymbol{\beta})\, \mathbf{1}}$

where $$\mathbf{X}$$ is the $$(P+1) \times N$$ matrix of risk factor values, with the first row all ones for the intercept, and $$\boldsymbol{\beta}$$ is the $$(P+1) \times M$$ matrix of regression coefficients. $$\boldsymbol{\beta}_{\boldsymbol{\cdot} m}$$ indicates the $$m$$th column of the matrix $$\boldsymbol{\beta}$$ and $$\mathbf{1}$$ represents a vector of ones of length $$M$$, so that $$\exp(\mathbf{X}^T \boldsymbol{\beta})\, \mathbf{1}$$ sums the $$M$$ subtype-specific terms in the denominator.

## Pre-specified subtypes

If disease subtypes are pre-specified, either based on clustering high-dimensional disease marker data or based on a single disease marker or combinations of disease markers, then statistical tests for etiologic heterogeneity according to each risk factor can be conducted using the eh_test_subtype() function. Estimates of the parameters of interest related to the question of whether risk factor effects differ across subtypes of disease, $$\hat{\boldsymbol{\beta}}$$, and the associated estimated variance-covariance matrix, $$\widehat{cov}(\hat{\boldsymbol{\beta}})$$, are obtained directly from the resulting polytomous logistic regression model. Each $$\beta_{pm}$$ parameter represents the log odds ratio for a one-unit change in risk factor $$p$$ for subtype $$m$$ disease versus controls. Hypothesis tests for the question of whether a specific risk factor effect differs across subtypes of disease can be conducted separately for each risk factor $$p$$ using a Wald test of the hypothesis

$H_{0_{\beta_{p.}}}: \beta_{p1} = \dots = \beta_{pM}$

Using the subtype_data simulated dataset, we can examine the influence of risk factors x1, x2, and x3 on the 4 pre-specified disease subtypes in variable subtype using the following code:

library(riskclustr)

mod1 <- eh_test_subtype(
  label = "subtype",
  M = 4,
  factors = list("x1", "x2", "x3"),
  data = subtype_data)

See the function documentation for details of function arguments.
The resulting estimates $$\hat{\boldsymbol{\beta}}$$ can be accessed with

mod1$beta
           1         2         3         4
x1 1.5555082 0.8232515 0.2410591 0.1086845
x2 0.3031594 0.4335048 0.3518870 0.3714092
x3 0.8000998 1.9909315 3.0115985 1.5594139

the associated standard deviation estimates $$\sqrt{\widehat{var}(\hat{\boldsymbol{\beta}})}$$ with

mod1$beta_se
           1         2         3         4
x1 0.0875330 0.0749353 0.0758686 0.0693273
x2 0.0783898 0.0732283 0.0759600 0.0697852
x3 0.2246070 0.1833106 0.1783101 0.1823138

and the heterogeneity p-values with

mod1$eh_pval
       p_het
x1 0.0000000
x2 0.4778092
x3 0.0000000

An overall formatted dataframe containing $$\hat{\boldsymbol{\beta}} \Big(\sqrt{\widehat{var}(\hat{\boldsymbol{\beta}})}\Big)$$ and heterogeneity p-values p_het to test the null hypotheses $$H_{0_{\beta_{p.}}}$$ can be obtained as

mod1$beta_se_p
             1           2           3           4 p_het
x1 1.56 (0.09) 0.82 (0.07) 0.24 (0.08) 0.11 (0.07) <.001
x2  0.3 (0.08) 0.43 (0.07) 0.35 (0.08) 0.37 (0.07) 0.478
x3  0.8 (0.22) 1.99 (0.18) 3.01 (0.18) 1.56 (0.18) <.001

Because it is often of interest to examine associations in a case-control study on the odds ratio (OR) scale rather than the original parameter estimate scale, it is also possible to obtain a matrix containing $$OR=\exp(\hat{\boldsymbol{\beta}})$$, along with 95% confidence intervals and heterogeneity p-values p_het to test the null hypotheses $$H_{0_{\beta_{p.}}}$$ using

mod1$or_ci_p
                  1                 2                   3                4 p_het
x1 4.74 (3.99-5.62)  2.28 (1.97-2.64)    1.27 (1.1-1.48) 1.11 (0.97-1.28) <.001
x2 1.35 (1.16-1.58)  1.54 (1.34-1.78)   1.42 (1.23-1.65) 1.45 (1.26-1.66) 0.478
x3 2.23 (1.43-3.46) 7.32 (5.11-10.49) 20.32 (14.33-28.82)  4.76 (3.33-6.8) <.001

## Subtypes formed by cross-classification of disease markers

If disease subtypes are formed by cross-classifying individual binary disease markers, then statistical tests for associations between risk factors and individual disease markers can be conducted using the eh_test_marker() function. Let $$k$$ index disease markers, $$k = 1, \ldots, K$$. Here the $$M$$ disease subtypes are formed by cross-classification of the $$K$$ binary disease markers, so that we have $$M = 2^K$$ disease subtypes.

To evaluate the independent influences of individual disease markers, it is convenient to transform the parameters in $$\boldsymbol{\beta}$$ using the one-to-one linear transformation

$\hat{\boldsymbol{\gamma}} = \frac{\hat{\boldsymbol{\beta}} \mathbf{L}}{M/2}.$

Here $$\mathbf{L}$$ is an $$M \times K$$ contrast matrix such that the entries are -1 if disease marker $$k$$ is absent for disease subtype $$m$$ and 1 if disease marker $$k$$ is present for disease subtype $$m$$. $$\boldsymbol{\gamma}$$ is then the $$(P+1) \times K$$ matrix of parameters that reflect the independent effects of distinct disease markers. Each element of the $$\boldsymbol{\gamma}$$ parameters represents the average of differences in log odds ratios between disease subtypes defined by different levels of the $$k$$th disease marker with respect to the $$p$$th risk factor when the other disease markers are held constant.
Variance estimates corresponding to each $$\hat{\gamma}_{pk}$$ are obtained using $\widehat{var}(\hat{\gamma}_{pk}) = \left(\frac{M}{2}\right)^{-2} \mathbf{L}_{\boldsymbol{\cdot} k}^T \widehat{cov}(\hat{\boldsymbol{\beta}}_{p \boldsymbol{\cdot}}^T) \mathbf{L}_{\boldsymbol{\cdot} k}$ where $$\mathbf{L}_{\boldsymbol{\cdot} k}$$ is the $$k$$th column of the $$\mathbf{L}$$ matrix and the estimated variance-covariance matrix $$\widehat{cov}(\hat{\boldsymbol{\beta}}_{p \boldsymbol{\cdot}})$$ for each risk factor $$p$$ is obtained directly from the polytomous logistic regression model. Hypothesis tests for the question of whether a risk factor effect differs across levels of each individual disease marker of which the disease subtypes are comprised can be conducted separately for each combination of risk factor $$p$$ and disease marker $$k$$ using a Wald test of the hypothesis $H_{0_{{\gamma_{pk}}}}: \gamma_{pk} = 0.$ Using the subtype_data simulated dataset, we can examine the influence of risk factors x1, x2, and x3 on the two individual disease markers marker1 and marker2. These two binary disease markers will be cross-classified to form four disease subtypes that will be used as the outcome in the polytomous logistic regression model to obtain the $$\hat{\boldsymbol{\beta}}$$ estimates, which are then transformed in order to obtain estimates and hypothesis tests related to the individual disease markers. library(riskclustr) mod2 <- eh_test_marker( markers = list("marker1", "marker2"), factors = list("x1", "x2", "x3"), case = "case", data = subtype_data) See the function documentation for details of function arguments. The resulting estimates $$\hat{\boldsymbol{\gamma}}$$ can be accessed with mod2$gamma marker1 marker2 x1 -1.0145081 -0.4323157 x2 -0.0066840 0.0749338 x3 0.8899905 -0.1306765 the associated standard deviation estimates $$\sqrt{\widehat{var}(\hat{\boldsymbol{\gamma}})}$$ with mod2$gamma_se marker1 marker2 x1 0.0681025 0.0601803 x2 0.0631465 0.0588423 x3 0.1450606 0.1348479 and the associated p-values with mod2$gamma_pval marker1 marker2 x1 0.0000000 0.0000000 x2 0.9157016 0.2028521 x3 0.0000000 0.3325126 An overall formatted dataframe containing the $$\hat{\boldsymbol{\gamma}} \Big(\sqrt{\widehat{var}(\hat{\boldsymbol{\gamma}})}\Big)$$ and p-values to test the null hypotheses $$H_{0_{\gamma_{pk}}}$$ can be obtained as mod2\$gamma_se_p marker1 est marker1 pval marker2 est marker2 pval x1 -1.01 (0.07) <.001 -0.43 (0.06) <.001 x2 -0.01 (0.06) 0.916 0.07 (0.06) 0.203 x3 0.89 (0.15) <.001 -0.13 (0.13) 0.333 The estimates and heterogeneity p-values for disease subtypes formed by cross-classifying these individual disease markers can also be accessed in objects beta_se_p and or_ci_p, as described in the section on Pre-specified subtypes.
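To make the transformation concrete, here is a small illustrative sketch of $$\mathbf{L}$$ for $$K = 2$$ markers (my own illustration, not part of riskclustr; it assumes the four subtypes are ordered as (marker1, marker2) = (0,0), (0,1), (1,0), (1,1)):

# Contrast matrix L (M x K): entry is -1 if marker k is absent for
# subtype m, and +1 if present. The subtype ordering is an assumption.
L <- cbind(marker1 = c(-1, -1,  1, 1),
           marker2 = c(-1,  1, -1, 1))
M <- 2^2

# gamma = beta L / (M/2), applied to a (P+1) x M coefficient matrix beta
gamma_from_beta <- function(beta) (beta %*% L) / (M / 2)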
# inner product and adjoint operator This is a problem I found in Schaum's Outlines: Linear Algebra, and I was wondering if someone knew how to solve it. I began using integration by parts, but that approach did not lead to any conclusions. Let V be the space of all infinitely-differentiable functions on R which are periodic of period h>0 [i.e., f(x+h) = f(x) for all x in R]. Define an inner product on V by $$\langle f,g\rangle =\int_{-h}^hf(x)g(x)dx$$ Let $\alpha(f)=f'$. Find $\alpha^*$. I know that the adjoint implies the relationship $\langle\alpha(f),g\rangle= \langle f,\alpha^*(g)\rangle$ . Thank you. - Could you define $\alpha^{*}$? – Jebruho Nov 4 '12 at 22:29 Here is a related problem. – Mhenni Benghorbal Nov 4 '12 at 22:36 This result is very interesting -- a special case of the fact that the adjoint of $\nabla$ is $-\text{div}$. It also implies that $\frac{d}{dt}$ is a "normal" operator, so one would hope that the spectral theorem should apply: there should be an orthonormal basis of eigenvectors of $\frac{d}{dt}$. This idea leads us to discover Fourier series. – littleO Nov 4 '12 at 22:51 We have $$\langle \alpha(f),g\rangle=\int_{-h}^hf'(t)g(t)dt=\left[f(t)g(t)\right]_{-h}^h-\int_{-h}^hf(t)g'(t)dt.$$ As $f\cdot g$ is periodic of period $h$, $\left[f(t)g(t)\right]_{-h}^h=0$, so $\alpha^*(f)=-\alpha(f)$. Yes, $\left.f(x)g(x)\right|_{-h}^{+h}=0$ because $f(x+h)=f(x) \forall x \in \Re$, i.e. with $x=0$, $f(h)=f(0)$; and with $x=-h$, $f(0)=f(-h)$. The same occurs with $g(x)$, so $f(h)g(h)=f(0)g(0)=f(-h)g(-h)$, and $\left.f(x)g(x)\right|_{-h}^{+h}=0$. QED.
# Adjacency matrix on simple graphs

It is known that one of the eigenvalues of a $k$-regular graph is $k$. I have to prove that a connected graph with eigenvalue $\Delta$, where $\Delta$ is the maximum degree in $G$, is regular. For a generic graph, all its eigenvalues are less than or equal to $\Delta$; my question is thus the following: is there a graph which has eigenvalue $\Delta$ and is not regular? If not, then the statement is easy to prove.
# Postgresql – Why is this query with WHERE, ORDER BY and LIMIT so slow

Tags: database-design, index-tuning, order-by, performance, postgresql, postgresql-performance

Given this table posts_lists:

Table "public.posts_lists"
Column | Type | Collation | Nullable | Default
------------+------------------------+-----------+----------+---------
id | character varying(20) | | not null |
user_id | character varying(20) | | |
tags | jsonb | | |
score | integer | | |
created_at | integer | | |
Indexes:
"tmp_posts_lists_pkey1" PRIMARY KEY, btree (id)
"tmp_posts_lists_idx_create_at1532588309" btree (created_at)
"tmp_posts_lists_idx_score_desc1532588309" btree (score_rank(score, id::text) DESC)
"tmp_posts_lists_idx_tags1532588309" gin (jsonb_array_lower(tags))
"tmp_posts_lists_idx_user_id1532588309" btree (user_id)

Getting a list by tag is fast:

EXPLAIN ANALYSE SELECT * FROM posts_lists WHERE jsonb_array_lower(tags) ? lower('Qui');

Bitmap Heap Scan on posts_lists (cost=1397.50..33991.24 rows=10000 width=56) (actual time=0.110..0.132 rows=2 loops=1)
Recheck Cond: (jsonb_array_lower(tags) ? 'qui'::text)
Heap Blocks: exact=2
-> Bitmap Index Scan on tmp_posts_lists_idx_tags1532588309 (cost=0.00..1395.00 rows=10000 width=0) (actual time=0.010..0.010 rows=2 loops=1)
Index Cond: (jsonb_array_lower(tags) ? 'qui'::text)
Planning time: 0.297 ms
Execution time: 0.157 ms

Getting a list ordered by score, limit 100 – also fast:

EXPLAIN ANALYSE SELECT * FROM posts_lists ORDER BY score_rank(score, id) DESC LIMIT 100;

Limit (cost=0.56..12.03 rows=100 width=88) (actual time=0.074..0.559 rows=100 loops=1)
-> Index Scan using tmp_posts_lists_idx_score_desc1532588309 on posts_lists (cost=0.56..1146999.15 rows=10000473 width=88) (actual time=0.072..0.535 rows=100 loops=1)
Planning time: 0.586 ms
Execution time: 0.714 ms

But combining the above two queries is very slow:

EXPLAIN ANALYSE SELECT * FROM posts_lists WHERE jsonb_array_lower(tags) ? lower('Qui') ORDER BY score_rank(score, id) DESC LIMIT 100;

Limit (cost=0.56..33724.60 rows=100 width=88) (actual time=2696.965..493476.142 rows=2 loops=1)
-> Index Scan using tmp_posts_lists_idx_score_desc1532588309 on posts_lists (cost=0.56..3372404.39 rows=10000 width=88) (actual time=2696.964..493476.139 rows=2 loops=1)
Filter: (jsonb_array_lower(tags) ? 'qui'::text)
Rows Removed by Filter: 9999998
Planning time: 0.426 ms
Execution time: 493476.190 ms

Why? How to improve the efficiency of the query?

Definition of the two functions used above:

create or replace function score_rank(score integer, id text) returns text as $$
  select case
    when score < 0 then '0' || lpad((100000000 + score) :: text, 8, '0') || id
    else '1' || lpad(score :: text, 8, '0') || id
  end
$$ language sql immutable;

create or replace function jsonb_array_lower(arr jsonb) returns jsonb as $$
  SELECT jsonb_agg(lower(elem)) FROM jsonb_array_elements_text(arr) elem
$$ language sql immutable;

### Sorting and paging

Your function score_rank() produces a text from an integer score and the appended PK id. That's not helpful for sorting. Replace it completely; I suspect you do not need it at all. Instead use the two columns score and id directly for sorting:

SELECT * FROM posts_lists ORDER BY score DESC, id DESC LIMIT 100;

Replace your index tmp_posts_lists_idx_score_desc1532588309 with a smaller, faster, cheaper-to-maintain, more versatile index on (score DESC, id DESC). You can also base pagination on this multicolumn index efficiently with row value comparison (see the sketch at the end of this answer).

You later mentioned a new function concatenating strings with base256 etc.
All that smart trickery is not going to increase performance. Sorting on an integer is faster than sorting on strings in Postgres. Using integer (or bigint) instead of varchar(20) would actually help in multiple ways.

### Statistics and query plan (a.k.a.: Why?)

The main issue is the lack of statistics for values nested in the jsonb column. Postgres consequently sometimes misjudges the selectivity of the predicate jsonb_array_lower(tags) ? lower('Qui') and chooses a bad query plan.

In your example with LIMIT 2 the logic of the query planner can be illustrated like this - let's call this "Plan 1": Only two rows with the highest scores? Let's scan the index posts_lists_idx_score_desc starting with the highest scores. With any luck we'll have the result in no time!

It's a reasonable plan for most cases with at least moderately common tags. But the tag 'qui' turns out to be very rare, and with low scores, too. The worst case. Postgres ends up scanning close to 4 million rows, just to keep 2. A colossal waste of time:

Rows Removed by Filter: 3847383

If the query planner had any idea how rare that tag actually is, it would start with the other index posts_lists_idx_tags, like we see in your second example with LIMIT 100 - let's call this "Plan 2": Find matching rows, then sort by score and take the top N.

Plan 1 is more favorable the smaller the LIMIT and the more frequent the tag. (And if qualifying rows happen to sort on top.) Plan 2 is more favorable the bigger the LIMIT and the less frequent the tag.

Postgres currently has no statistics about nested values in document types like jsonb. And no combined frequencies at all. See:

Update: "Combined statistics" became possible with CREATE STATISTICS in Postgres 10.

Whatever else you do, be sure to run the latest version of Postgres. The planner gets smarter with every release.

### Alternatives

1. One idea might be to use a Postgres array (text[]) and array operators instead of the jsonb column to have some statistics for most common elements. Columns most_common_elems, most_common_elem_freqs, and elem_count_histogram in the system view pg_stats. Helps Postgres to generate better query plans for some constellations, but it's no silver bullet. For starters, only the most common elements are stored. Postgres still doesn't know about the rarest elements.

2. Normalize your db design and move tags to a separate 1:n table with a single tag per row. That increases the disk footprint because of the added row overhead per tag. (But changing tags becomes much cheaper, with less table bloat.) If your tags are stable, consider a full n:m relationship between posts_lists and a new table tags. That's a bit smaller for lots of common tags, too. And it's the "clean" way. You have more detailed statistics and should see fewer bad query plans.

3. Since Postgres 10 there is a variant of to_tsvector() that processes json(b) values. So it's simple to create a text search index and work with text search operators now.

Index:

CREATE INDEX posts_lists_idx_tags_fts ON posts_lists USING gin (to_tsvector('simple', tags));

Query:

SELECT * FROM posts_lists
WHERE to_tsvector('simple', tags) @@ to_tsquery('simple', 'qui') -- text search is case insensitive
ORDER BY score DESC, id DESC
LIMIT 2;

Be sure to use the simple dictionary. You don't want stemming, which is built into most other dictionaries. Text search functions produce lower-case output; it all works case-insensitively by design. No need for processing like in your original function jsonb_array_lower().

4.
While sticking with a jsonb index, try the more specialized jsonb_path_ops operator class: CREATE INDEX ON posts_lists USING gin (jsonb_array_lower(tags) jsonb_path_ops); Query with: WHERE jsonb_array_lower(tags) @> '["qui"]' The manual: Although the jsonb_path_ops operator class supports only queries with the @> operator, it has notable performance advantages over the default operator class jsonb_ops. A jsonb_path_ops index is usually much smaller than a jsonb_ops index over the same data, and the specificity of searches is better, particularly when queries contain keys that appear frequently in the data. Therefore search operations typically perform better than with the default operator class. But I do not expect much for your particular case. 5. Use a regime of "granulated" indexes, combined with a procedural solution. See: db<>fiddle here - with a number of tests ...
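As for the keyset pagination mentioned under "Sorting and paging", here is a sketch (my own illustration, assuming the suggested index on (score DESC, id DESC); :last_score and :last_id are placeholders holding the values from the final row of the previous page):

SELECT *
FROM   posts_lists
WHERE  (score, id) < (:last_score, :last_id)  -- row value comparison
ORDER  BY score DESC, id DESC
LIMIT  100;

Because the row value comparison matches the sort order of the index, Postgres can start the index scan exactly where the previous page left off instead of counting past an OFFSET.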
# Prove: The set of all polynomials p with p(2) = p(3) is a vector space

Prove that this set is a vector space (by proving that it is a subspace of a known vector space). The set of all polynomials p with p(2) = p(3).

I understand I need to verify closure under vector addition and scalar multiplication, and show that the set is non-empty. I'm new to this concept so I'm not even sure how to start. Do I maybe use p(2) - p(3) = 0 instead? My concern is that I'm not sure what two polynomials I need to add to prove closure under vector addition; proving scalar multiplication seems okay though. Thank you. Also, a follow-up question: if I prove something is a subspace of a known vector space, does this imply the subspace is a vector space? Or does the subspace have to span the entire vector space first? How would I prove this in this case?

-

Non-emptiness is easy to check. Clearly $p(x)=(x-2)(x-3)$ is a member of the set. – rsg Sep 24 '12 at 5:08
@rsg: Even more clearly(?) the zero polynomial is. – Hagen von Eitzen Sep 24 '12 at 5:16
so even though we can't sub in '2' or '3' as in P(2) or P(3), the zero polynomial is still sufficient to prove non-emptiness? – student101 Sep 24 '12 at 5:21
@HagenvonEitzen lol. Of course yes. Wanted to give a relatively (more?) nontrivial example. – rsg Sep 24 '12 at 5:54

Let $P$ be the vector space of all polynomials, and let $V=\{p\in P:p(2)=p(3)\}$; we want to prove that $V$ is a vector space, and the easiest way to do this is to prove that it's a subspace of the known vector space $P$. This requires that you prove three things:

1. $V\ne\varnothing$. ($V$ is non-empty.)
2. If $p,q\in V$, then $p+q\in V$. ($V$ is closed under vector addition.)
3. If $p\in V$ and $\alpha\in\Bbb R$, then $\alpha p\in V$. ($V$ is closed under scalar multiplication.)

Proving (1) is easy: just exhibit a polynomial $p$ such that $p(2)=p(3)$. The simplest one is the constant polynomial $p(x)=0$, which also happens to be the zero vector in $P$ and in $V$.

To prove (2), you must start with arbitrary polynomials $p$ and $q$ in $V$. In other words, you have polynomials $p(x)$ and $q(x)$ such that $p(2)=p(3)$ and $q(2)=q(3)$. (Note that you don't know what $p(2)$ and $q(2)$ actually are.) To help keep the notation straight, let $t=p+q$; $t$ is a polynomial, and for every $x\in\Bbb R$ it satisfies $t(x)=p(x)+q(x)$. In particular, \begin{align*} t(2)&=p(2)+q(2)\\ &=p(3)+q(3)\qquad\text{ because }p,q\in V\\ &=t(3)\;, \end{align*} so $t\in V$. This shows that $V$ is closed under vector addition. You prove (3) in very much the same way. Let $p$ be any polynomial in $V$, let $\alpha$ be any real number, let $q=\alpha p$ (i.e., $q(x)=\alpha p(x)$ for all $x\in\Bbb R$), and show that $q\in V$.

-

Besides direct proof (as in Makoto's answer), one can simply note that the evaluation maps $\rm\:E_{\,r}\!:\, p(x)\to p(r)\:$ are $\,\Bbb R$-linear, hence so too is the difference $\rm\:D = E_{\,3} - E_{\,2}\!:\ p(x)\to p(3)-p(2).\:$ Your set is the kernel of $\rm\,D,\,$ hence it is a subspace of the vector space of real polynomials.

-

Let $V$ be the space of all polynomials over $\mathbb{R}$. Let $W = \{p \in V\colon p(2) = p(3)\}$. Since $0 \in W$, $W$ is not empty. Let $f, g \in W$ and let $a, b \in \mathbb{R}$. Then $af(2) + bg(2) = af(3) + bg(3)$. Hence $af + bg \in W$. Thus $W$ is a subspace of $V$.

-

ah okay thanks. – student101 Sep 24 '12 at 5:14
PIMS-UCalgary Operations Research and Analytics Seminar Series: Cynthia Vinzant • Date: 03/24/2017 • Time: 11:00 Lecturer(s): Cynthia Vinzant, North Carolina State University Location: University of Calgary Topic: Real stable polynomials, determinants, and combinatorics Description: Real stable polynomials define real hypersurfaces with special topological structure. These polynomials bound the feasible regions of semidefinite programs and appear in many areas of mathematics, including optimization, combinatorics and differential equations. Recently, tight connections have been developed between these polynomials and combinatorial objects called matroids. This led to a counterexample to the generalized Lax conjecture, which concerned high-dimensional feasible regions of semidefinite programs. I will give an introduction to some of these objects and the fascinating connections between them. Other Information: Location: SH202
# Pandoc update broke exportation #260

Open
opened this issue Nov 20, 2017 · 18 comments
Assignees
Labels

### sallirom commented Nov 20, 2017 • edited

Hi, pandoc was updated 8 days ago. https://github.com/jgm/pandoc/releases/tag/2.0.2

It seems there has been a slight change in the grammar of pandoc, or at least, I get this error message when I try to export the document:

Export error: --smart/-S has been removed. Use +smart or -smart extension instead. For example: pandoc -f markdown+smart -t markdown-smart. Try pandoc --help for more information.

Incidentally, multimarkdown was also updated on the 13th of October. https://github.com/fletcher/MultiMarkdown-6/releases

When I try to export using multimarkdown I get:

MultiMarkdown: invalid option "--smart" Try 'multimarkdown --help' for more information.

It is strange that nobody else has said anything about this. I am no expert, maybe I just messed it all up. Excuse me if that is the case. Thank you,

added the label Nov 25, 2017

### wereturtle commented Nov 25, 2017

Thanks for the heads up! Until I get to fixing it, you can turn off the smart extension when exporting by unchecking the option in the Export dialog. Unfortunately, this won't help you with HTML previews, as that is baked in.

added the label Nov 25, 2017

### sallirom commented Nov 28, 2017

Thank you very much! You are the best :)

### Seraphli commented Dec 20, 2017

I'm wondering when the new version will be released. It has been almost a month.

### jkruppa commented Jan 2, 2018

Unfortunately, I am forced to work on a Windows machine... I really liked your program, but I mainly use the Pandoc preview, which is now not available... Can you make a new version available?

### wereturtle commented Jan 17, 2018

Hi @jkruppa, the next version will be releasing Soon (tm). In the meantime, try downloading an older version of the Pandoc zip from the Pandoc website and putting the Pandoc executables where the ghostwriter.exe is. The app will find that version of Pandoc first, as it would be in the same directory. (I assume you are using the portable version of ghostwriter, so you should not need to be an admin to do this.)

referenced this issue Jan 17, 2018 Closed

added a commit that referenced this issue Feb 9, 2018
Fixes #260. Add compatibility with Pandoc version 2 and MultiMarkdown version 6. (f1a55e6)

closed this in c08482c Feb 10, 2018

referenced this issue Feb 15, 2018
referenced this issue Feb 25, 2018
referenced this issue Mar 7, 2018

### andi-blafasl commented Mar 28, 2018

Is there a release date on the road-map for the next ghostwriter version? It looks like many people are facing this issue. The statement from January, "Releasing Soon (tm)", is not really satisfactory ;-)

### wzhangbeibei commented Jul 20, 2018

Hi, I got this reply:

--smart/-S has been removed. Use +smart or -smart extension instead. For example: pandoc -f markdown+smart -t markdown-smart. --normalize has been removed. Normalization is now automatic. Try pandoc --help for more information.

Then I used the recommended "pandoc -f markdown+smart -t markdown-smart" and got this: "pandoc: pandoc: openBinaryFile: does not exist (No such file or directory)"

Help me please---

### abenson commented Jul 20, 2018

Are you running the latest pandoc and latest ghostwriter?

### tecosaur commented Dec 29, 2018 • edited

I seem to be getting this issue with ghostwriter version 1.7.4, release 13 (quoting my package manager) and pandoc 2.2.1. This is on the Linux distro Solus.

Export error: --smart/-S has been removed.
Use +smart or -smart extension instead. For example: pandoc -f markdown+smart -t markdown-smart. Try pandoc --help for more information.

### wereturtle commented Dec 29, 2018

@tecosaur I'm not seeing that issue with Pandoc 2.5 and ghostwriter 1.7.4. The source code that figures this all out looks like so:

The question is, does that particular version of Pandoc, as packaged for Solus, print out its version in the expected manner? ghostwriter parses for a specific version format from the output of pandoc --version. If it changed for any reason on Solus or in that particular version, you could end up in a case where ghostwriter thinks it's using Pandoc 1.x instead of 2.x. If you can, please provide the exact output of pandoc --version, and I can see if something went wrong with the version parsing.

### tecosaur commented Dec 29, 2018

Here's my full output from pandoc --version.

pandoc 2.2.1
Compiled with pandoc-types 1.17.4.2, texmath 0.11.0.1, skylighting 0.7.1
Default user data directory: /home/tec/.pandoc
Copyright (C) 2006-2018 John MacFarlane
Web: http://pandoc.org
This is free software; see the source for copying conditions. There is no warranty, not even for merchantability or fitness for a particular purpose.

### tecosaur commented Jan 20, 2019

@wereturtle It'd be great to hear back from you. I'd really like to use ghostwriter to integrate with my LaTeX tools, and not being able to use Pandoc is somewhat ruining this for me. If you'd be able to help me figure this out I would really appreciate it, as it is absolutely crucial for my use case; it's killing me given how much I like using the app.

### wereturtle commented Feb 12, 2019

@tecosaur I've installed Solus in a VM with ghostwriter and pandoc on it, all up to date. I could not replicate your issue. Pandoc is working fine in the preview and for exporting. Do you have any more specific steps I could follow? You've used the normal Solus Software Center tool to install both ghostwriter and pandoc, correct? Regardless, since things work on a fresh install of Solus, you might want to try reinstalling your OS to get things working.

### wereturtle commented Feb 14, 2019

@tecosaur Actually, were you trying to export to PDF with wkhtmltopdf? I ask because the error message you got looks suspiciously like the one in issue #412. If so, you can work around the issue by using the latex backend, or even export to ODT and then use LibreOffice to convert that to PDF.

### tecosaur commented Feb 17, 2019 • edited

@wereturtle Just got a fresh install of Solus and it worked. Thanks for going to all the trouble of setting up a VM to test it 😃 (fyi - not using wkhtmltopdf)

### wereturtle commented Feb 23, 2019

@tecosaur, that's great to hear! Thanks for the update!

### Korsani commented Oct 1, 2019

Hi, I'm facing the problem on Windows 10 with pandoc 2.7.3 and ghostwriter 1.8.0. Disabling Smart on export is OK, but preview is broken...

reopened this Oct 4, 2019
removed the label Oct 4, 2019
self-assigned this Oct 4, 2019

### SobhanMP commented Oct 15, 2019 • edited

Hi, using mathjax instead of mathml fixes the preview issue. Should this be added as an option? Or is there some other reason to prefer mathml?
# Homework 35: Linked list in C++

Name: _____________________________________________ Alpha: ___________________

Describe help received: ________________________________________________________

• Due before class on Friday, April 21
• This homework contains code to be submitted electronically. Put your code in a folder called hw35 and submit using the 204sub command.
• This is a written homework - be sure to turn in a hard-copy of your completed assignment before the deadline. Use the codeprint command to print out your code and turn that in as well.

# Assignment

1. Circle one to indicate how you did for the reading assignment from Homework 33 before class on Monday:
How carefully did you complete the reading? (Circle one) Not at all / Skimmed it

2. Circle one to indicate how you did for the reading assignment from Homework 34 before class on Wednesday:
How carefully did you complete the reading? (Circle one) Not at all / Skimmed it

3. Required reading for this homework: Section 5 (structs and classes) from the Unit 10 notes. Circle below to indicate how you did for this reading assignment:
How carefully did you complete the reading? (Circle one) Not at all / Skimmed it

4. Given the following C++ declarations:

void wombat(int x, double y);
int wombat(int x);
int wombat(char* s);
int a;
double b;
char c;
char d[128];

fill in the following table with the type (only) of each expression. Write ERROR if the expression would be a compiler or run-time error.

expression | type (or ERROR)
a |
wombat(a) |
wombat(a,b) |
wombat(b) |
a < b |
wombat(d) == 10 |
wombat(a,b) == 10 |
wombat(wombat(&c)) |

5. Note: This is the exact same assignment as Homework 32 from last week, except that now you need to do it in C++, using new and delete for example.

Pierce the Painter likes to paint many layers of colors on top of each other, then strip them off and remember what was underneath. Write a program paint.cpp to help Pierce keep track of these colors. Your program will read a series of two kinds of commands:

• paint color
Adds a layer of color on top of the current color.
• strip
Removes the topmost layer of color, revealing whatever was just underneath.

Before each command, your program should report what color is on the wall by saying "The top color is X." When there are no colors, like at the beginning of the program, say "The canvas is blank." When Pierce tries to strip from a blank canvas, quit the program.

Of course, you will want to create a linked list to help you! Each node in your linked list should store a single string, for the name of a color. When Pierce paints, that means adding to the front of the linked list. When Pierce wants to strip, that means removing the first node in the list. (A sketch of one possible approach appears after the examples.)

Examples

roche@ubuntu$ ./paint
The canvas is blank.
strip
roche@ubuntu$ ./paint
The canvas is blank.
paint red
The top color is red.
strip
The canvas is blank.
strip
roche@ubuntu$ ./paint
The canvas is blank.
paint blue
The top color is blue.
paint blue
The top color is blue.
paint green
The top color is green.
strip
The top color is blue.
paint orange
The top color is orange.
strip
The top color is blue.
strip
The top color is blue.
paint purple
The top color is purple.
paint purple
The top color is purple.
strip
The top color is purple.
strip
The top color is blue.
strip
The canvas is blank.
paint white
The top color is white.
strip
The canvas is blank.
strip
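For reference, one possible shape of the linked-list core (an illustrative sketch, not the official solution):

#include <iostream>
#include <string>

// Each node stores a single color name, as the assignment requires.
struct Node {
    std::string color;
    Node* next;
};

int main() {
    Node* top = nullptr;              // empty list = blank canvas
    std::string cmd;
    while (true) {
        // Report the state before reading each command.
        if (top) std::cout << "The top color is " << top->color << ".\n";
        else     std::cout << "The canvas is blank.\n";
        if (!(std::cin >> cmd)) break;
        if (cmd == "paint") {
            Node* n = new Node;       // add to the front of the list
            std::cin >> n->color;
            n->next = top;
            top = n;
        } else if (cmd == "strip") {
            if (!top) break;          // strip on a blank canvas quits
            Node* old = top;          // remove the first node
            top = top->next;
            delete old;
        }
    }
    while (top) {                     // free any remaining nodes
        Node* old = top;
        top = top->next;
        delete old;
    }
    return 0;
}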
Windows batch file executes some commands that return output over and over

I tried to put commands that return output into a batch file. When I run the batch file it executes the command over and over until I cancel with CTRL+C. I observed this behavior in Windows CE, Windows XP, Windows 7 and Server 2003. At first I thought I made a mistake with LDIFDE, but the same thing goes for PING. Is there something I missed with batch scripting? The file contains one line:

``````ping google.com
``````

-

Without knowing the contents of the batch script, this question is going to be impossible to answer :-) – Chris J Sep 2 '11 at 14:53
Surely I isolated it to one command only. – Dean Sep 2 '11 at 14:58
I am sorry I left out this detail, it seemed obvious - I placed only one command in the batch file to isolate the issue. – Dean Sep 2 '11 at 15:03
That's odd, how are you executing the batch script? Without some sort of looping control structure it shouldn't be doing this ... – Zypher Sep 2 '11 at 15:12
I've seen this before a long time ago, but I can't recall what it was. I think it had something to do with it not seeing ping as a command. I also see you're running this from `D:\Desktop`. Post your batch file so we can help. – Nixphoe Sep 2 '11 at 15:28

I think you named your script `ping.bat` or `ping.cmd` and it is calling itself. This happens because of a design decision that was introduced in DOS 2.0. On MS-DOS, Windows, and MS-DOS clones and derivatives, the current directory is first in the search path. When DOS is searching for a command it first checks to see if it is an internal command, built into command.com (e.g. echo, copy); then it searches the filesystem. It always starts with the current directory, and then it looks in directories defined in the PATH variable. You have a couple options:

• Rename the script.
• Simply include the file extension in your script so it is `ping.exe google.com`
• Use the full path to ping `%SystemRoot%\system32\ping.exe`

-

+1 Nice catch. Worked for me because I specifically DIDN'T do that. – squillman Sep 2 '11 at 16:04
Wow, I would have never guessed that. Nice one! – Brad Sep 2 '11 at 16:18
THAT'S IT! Good observation! I've been naming the batch files the same as the command. Thanks a lot! – Dean Sep 2 '11 at 17:24
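For concreteness, here is what the fixed script might look like if it keeps the name `ping.bat` (a sketch of the third option above):

``````@echo off
rem ping.bat - the current directory is searched before PATH, so a bare
rem "ping" would resolve to this very script. Qualify the executable:
%SystemRoot%\system32\ping.exe google.com
``````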
# Please help me

We have triangle $\triangle ABC$ where $AB = AC$ and $AD$ is an altitude. Meanwhile, $E$ is a point on $AC$ such that $AB \parallel DE.$ If $BC = 12$ and the area of $\triangle ABC$ is $180,$ what is the area of $ABDE$? Aug 4, 2019

### 1 Answer

#1

$$\text{Let AB = AC } \\ \text{Let DB = CD =\dfrac{BC}{2} = 6 }$$

$$\begin{array}{|rcll|} \hline \text{area of } \triangle ABC =180 &=& \dfrac{BC*AD}{2} \\ 180 &=& \dfrac{12*AD}{2} \\ 180 &=& 6AD \\ \mathbf{ AD } &=& \mathbf{30} \\ \hline \end{array}$$

$$\begin{array}{|rcll|} \hline \dfrac{CD}{ED} &=& \dfrac{BC}{AB} \\\\ \dfrac{6}{ED} &=& \dfrac{12}{AB} \\\\ \dfrac{ED}{6} &=& \dfrac{AB}{12} \\\\ \mathbf{ED }&=& \mathbf{\dfrac{AB}{2}} \\ \hline \end{array}$$

$$\begin{array}{|rcll|} \hline \text{area of } ABDE &=& \dfrac{AB+ED}{2}\times H \quad| \quad H=\dfrac{DB*AD}{AB}=\dfrac{6*30}{AB} \\\\ \text{area of } ABDE &=& \left(\dfrac{AB+ED}{2}\right)\times \dfrac{6*30}{AB} \\\\ \text{area of } ABDE &=& \left(\dfrac{AB+\dfrac{AB}{2}}{2}\right)\times \dfrac{6*30}{AB} \\\\ \text{area of } ABDE &=& \dfrac{3}{4}AB\times \dfrac{6*30}{AB} \\\\ \text{area of } ABDE &=& 3* 3*15 \\\\ \mathbf{\text{area of } ABDE} &=& \mathbf{135} \\ \hline \end{array}$$

Aug 5, 2019
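As a quick cross-check of the result: since $DE \parallel AB$ and $D$ is the midpoint of $BC$, we have $\triangle CDE \sim \triangle CBA$ with ratio $CD/CB = 1/2$, so $[CDE] = \left(\tfrac{1}{2}\right)^2 \cdot 180 = 45$ and

$$[ABDE] = [ABC] - [CDE] = 180 - 45 = 135.$$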
### Session D33: Focus Session: Complex Oxide Thin Films -- Magnetic Oxides

2:30 PM–5:18 PM, Monday, March 15, 2010
Room: E143
Chair: Mark Rzchowski, University of Wisconsin-Madison
Abstract ID: BAPS.2010.MAR.D33.7

### Abstract: D33.00007 : Oxygen Doping Study of Cuprate/Manganite Thin-Film Heterostructures

3:42 PM–3:54 PM

#### Authors:

Hao Zhang (Department of Physics, University of Toronto and Canadian Institute for Advanced Research)
Yi-Tang Yen (Department of Physics, University of Toronto and Canadian Institute for Advanced Research)
John Y.T. Wei (Department of Physics, University of Toronto and Canadian Institute for Advanced Research)

Recent studies of thin-film heterostructures comprising superconducting cuprates and ferromagnetic manganites have revealed a range of novel physical phenomena [1]. These phenomena are believed to involve complex interfacial interactions between competing order parameters [2], and appear to be highly sensitive to the doping of carriers [3]. To further examine these phenomena, we carry out a systematic oxygen-doping study of YBa$_{2}$Cu$_{3}$O$_{6+x}$/La$_{0.67}$Ca$_{0.33}$MnO$_{3}$ multilayers, grown epitaxially by pulsed laser-ablated deposition. Our samples are characterized by electrical transport and magnetization measurements, as well as x-ray diffraction and various scanning microscopy probes. To assess the role of interfacial magnetism on the cuprate layer, YBa$_{2}$Cu$_{3}$O$_{6+x}$/LaNiO$_{3}$ samples are also made and measured as a comparison. We also examine the effects of cation substitution in the YBa$_{2}$Cu$_{3}$O$_{6+x}$ layer, in order to determine the extent of carrier doping across the interface.

[1] for example, see Z. Sefrioui et al., Phys. Rev. B 67, 214511
[2] J. Hoppler et al., Nature Materials 8, 315
[3] V. Peña et al., Phys. Rev. Lett. 97, 177005

To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2010.MAR.D33.7
## Algebra 1: Common Core (15th Edition)

Published by Prentice Hall

# Chapter 6 - Systems of Equations and Inequalities - 6-5 Linear Inequalities - Got It? - Page 396: 3

#### Answer

See the graph (for question a).

#### Work Step by Step

a) Graph $x \lt -5$ using a dashed line. Use $(0, 0)$ as a test point: substituting gives $0 \lt -5$, which is false, so shade on the side of the line that does not contain $(0, 0)$.

b) Graph $x \leq -2$ using a solid line. Use $(0, 0)$ as a test point: substituting gives $0 \leq -2$, which is false, so shade on the side of the line that does not contain $(0, 0)$.
# Idea

Acrylamid is not a dumb blog compiler. Over one thousand lines of code have been written to detect changes in your blog and reflect these changes with minimum effort in the output, while on the other side being highly modular. How is this done?

## Fundamental Concepts

1. Caching. Acrylamid caches everything. So if you have several pages on your blog index and you add a new entry, it pulls all existing entries from its cache without any recompilation.
2. Explicit modification recognition. Only two things can change: content or layout. If the content changes, only this specific item must be re-compiled. If you modify your layout, all content is retrieved from cache.
3. Lazy evaluation. Acrylamid is that lazy; it even delays the import of libraries. If you write your current article in Markdown, why should it initialize docutils, which takes nearly half a second to import?

That is the idea behind it. Now for how it is actually implemented. Instead of rushing from beginning to end, we do it in reverse.

## Implementation

View: An Atom feed, an index page with a summarized listing of recent posts, a full version of that entry, an articles overview. A view generates that and also checks whether a page must be re-compiled or can be skipped. Now the best part: you can add as many views as you like. You can route them to your index page or render a full-text feed beside a summarized feed. Acrylamid will find out the most efficient way to compile your content to HTML.

Filters: A filter is like a UNIX pipe. It gets text, processes it and returns text (preferably HTML). You can apply as many filters as you like to all content, per view or per entry. That's also the hierarchy. A filter specified in an entry will always override per-view or global filters (but only if the filter is in conflict with other filters, such as Markdown vs. reStructuredText).

You may ask why you need multiple filters per entry and/or per view. Well, let me explain: you write some text, and you are too lazy to sum up the content yourself. A filter can do that for you. You prefer hyphenation in your browser, but don't want it in your feed; per-view filters make exactly that possible. Here is an example configuration:

FILTERS = ["Markdown", "hyphenation"]
VIEWS = {
    # h1 means headings decreased by 1, h2 by 2 ...
    '/:year/:slug/': {'view': 'entry', 'filters': ['h1', ]},
    '/': {'view': 'index', 'filters': ['h1', 'summarize']},
    '/atom/': {'view': 'atom', 'filters': ['h3', 'nohyphenate']}
}

So, how does Acrylamid figure out that the markup can be used for multiple routes? It basically builds a tree with all used filter paths and looks at whether a path can be used more than once. You can clearly see that Markdown conversion is only computed once and is re-used. That makes it possible to re-adjust the maximum number of words you want to summarize without any Markdown conversion; it simply uses the cached version. Using a disk cache also makes it possible to track changes. Each time you modify the filter chain (evaluated per entry), it figures out which intermediates are not used anymore and which ones are not affected.

## Trivia

Q: What is the overhead of saving each intermediate to disk?
I can only give my personal blog as a comparison: 165 articles use 1.45 MB of cache. All intermediates are compressed with zlib.

Q: How fast is Acrylamid in comparison to Pelican, Octopress, Nikola?
For what it's worth: Pelican and Acrylamid are almost equally fast in their default configuration, although Acrylamid has hyphenation active and renders many more pages, such as tag and index pages.
When it comes to incremental updates, Acrylamid is much faster than Pelican (something like “less than a second versus several seconds”).
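Returning to the filter tree described under Implementation, a small illustrative sketch (my own, not Acrylamid's actual code) of how shared filter-chain prefixes let one cached intermediate serve several views:

def shared_prefix(chain_a, chain_b):
    """Return the longest common leading run of two filter chains."""
    prefix = []
    for a, b in zip(chain_a, chain_b):
        if a != b:
            break
        prefix.append(a)
    return prefix

# The Markdown and h1 steps are shared between the entry and index views,
# so their intermediate output only has to be computed (and cached) once:
print(shared_prefix(["Markdown", "h1"], ["Markdown", "h1", "summarize"]))
# -> ['Markdown', 'h1']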
# Why doesn't this have the limit $2/e$? [closed] Why does the limit below equal $2$ and not $\frac{2}{e}$? $$\lim_{n \to \infty} \frac{2}{1+\frac{1}{n}}$$ - ## closed as off-topic by Jonas Meyer, Claude Leibovici, අරුණ, John, Erick WongMar 26 at 7:55 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Jonas Meyer, Claude Leibovici, අරුණ, John, Erick Wong If this question can be reworded to fit the rules in the help center, please edit the question. Because it's not $\frac{2}{(1 + \frac{1}{n})^n}$? –  Qiaochu Yuan Nov 6 '11 at 20:23 When you write \lim in TeX the backslash not only (1) prevents italicization but also (2) causes $n\to\infty$ to appear directly below "lim" when it's in "display" mode (as opposed to "inline") and (3) in some cases results in proper spacing between "lim" and what follows it. (I fixed it.) –  Michael Hardy Nov 6 '11 at 21:02 $\lim_{n\rightarrow\infty}(1+1/n)^n=e$, but $\lim_{n\rightarrow\infty}(1+1/n) =1$. - Because you are taking the limit of $\displaystyle \frac{2}{1+\frac{1}{n}}$ and not $\displaystyle \frac{2}{\left(1+\frac{1}{n}\right)^n}$. Also, because $\lim\limits_{n\to\infty}\frac{1}{n}=0$. - Because $$\lim_{n\to \infty}\frac{2}{(1+\frac{1}{n})}=\frac {\lim_{n\to \infty}2}{\lim_{n\to \infty}(1+\frac{1}{n})}= \frac{2}{\lim_{n\to \infty}(1+\frac{1}{n})}=\frac{2}{1}=2$$ Note that $\lim_{n\to \infty}(1+\frac{1}{n})=1$ not $e$. -
Question

# Name the reagent used to convert benzyl alcohol to benzoic acid.

Hint: We will need to convert the $-CH_2OH$ group of benzyl alcohol into a $-COOH$ group to complete this reaction. This is a simple oxidation reaction. An inorganic compound which contains a transition metal in its structure can be used in this conversion.

- We can use an acidic, neutral or alkaline solution of $KMnO_4$ to oxidise any primary alcohol to a carboxylic acid. So, we can use alkaline potassium permanganate solution to convert benzyl alcohol to benzoic acid.

- Of these three media, acidic potassium permanganate is the strongest oxidising agent; $KMnO_4$ in neutral medium is weaker than in acidic medium, and $KMnO_4$ in alkaline medium is the weakest of the three.

- Remember that Jones reagent ($CrO_3 + H_2SO_4$) can also convert most primary alcohols to carboxylic acids, but in the case of benzyl alcohol, Jones reagent will first convert it into benzaldehyde, and further oxidation by this route involves formation of the hydrate of the aldehyde, while benzaldehyde is not able to form stable hydrates. This is the reason that we will not be able to use Jones reagent for this conversion.
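Schematically, the overall transformation (the alkaline permanganate step gives the benzoate salt, which is acidified during workup):

$$C_6H_5CH_2OH \xrightarrow{KMnO_4,\ OH^-,\ \Delta} C_6H_5COO^- \xrightarrow{H_3O^+} C_6H_5COOH$$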
Let $a < b$ be real numbers, and let $g : [a, b] \to \mathbb{R}$ be a function that is continuous on $[a, b]$ and differentiable on $(a, b)$. Then, there exists an $x \in (a, b)$ such that $g'(x) = \dfrac{g(b) - g(a)}{b - a}$ (the Mean Value Theorem).
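As a quick instance (my own illustrative example, not part of the original snippet): for $g(x) = x^2$ on $[0, 1]$,

```latex
g'(x) = \frac{g(b)-g(a)}{b-a}
\quad\Longrightarrow\quad
2x = \frac{1^2 - 0^2}{1 - 0} = 1
\quad\Longrightarrow\quad
x = \tfrac{1}{2} \in (0,1).
```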
Sub-Forums: Mechanical Engineering

### Aerospace Engineering
Development of aircraft and space vehicles. Propulsion, Aircraft Structures, Crafts, Atmosphere. (Last post: etudiant)

### Automotive Engineering
Gear heads unite to discuss Vehicle Dynamics, Safety Engineering, Performance, NVH, Ergonomics, Durability... (Last post: Dec 17, 2013, geoff033)

# Mechanical Engineering - Development of machines. Mechatronics, Robotics, Engines, Drafting

Announcement: End of year contest, $75 + $50 prize! (Dec 18, 2013)

Pinned threads:

* Experimental Fluid Mechanics Videos Series: "Hi everyone, maybe this resource is new for someone. It's a collection of videos made in the 60's by the very best..." (Mar 6, 2013, FeynmanIsCool; 61 replies, 76,139 views)
* Should I learn Pro/Engineer or SolidWorks?: "Dear all, I checked job numbers on indeed Canada, and when using Pro/ENGINEER and solidworks as search words, I got..." (Dec 17, 2013, shanren; 21 replies, 18,023 views)

Threads (snippet; last post, replies, views):

* "hey every1... i'm really really stuck on this problem and unless i get it and a few others right im gonna fail my..." (Oct 19, 2006, radou; 4 replies, 1,525 views)
* "Hey everyone. First time poster here. I was helping a friend do some Statics homework (I completed the course last..." (Feb 22, 2012, FeX32; 8 replies, 1,043 views)
* "Hi all. I was perusing the grad course options for mechanical engineering at a local school when I noticed a course..." (Sep 14, 2010, Nspyred; 2 replies, 805 views)
* "Hello everyone: I am trying to find a program or formula that will help me figure out how many windings, turns of..." (Jun 27, 2010, tvavanasd; 3 replies, 1,982 views)
* "Ladies and gentlemen. I have been trying for literally days on this problem and its completley done me in. The..." (Aug 15, 2010, djw42; 2 replies, 3,225 views)
* "Nozzle: Why for subsonic flow does the cross-sectional area of the nozzle decrease?" (Mar 23, 2013, S_Happens; 1 reply, 454 views)
* "My partner and I are working on a quadcopter, but we can't seem to find the correct equation we need for the..." (Nov 17, 2010, sid_raptor; 1 reply, 1,622 views)
* "What characteristics must an airplane have in order not to be visible on an enemy's radar screen?" (May 31, 2005, abercrombiems02; 2 replies, 1,636 views)
* "Hello all, Has anyone bought the 41st Edition of 'Steam: Its Generation and Use'?" (Oct 9, 2009, CFDFEAGURU; 0 replies, 835 views)
* "I've been reading about this topic in a book recently and it states that when steam (water vapor) condenses on a cold..." (Sep 28, 2013, sanka; 2 replies, 353 views)
* "I am a 2nd year mech engg student. I have a basic doubt in condensers. The steam from turbine exhaust enters the..." (Oct 17, 2010, ashutoshd; 2 replies, 1,444 views)
* "Hi, I was just wondering if anyone had any advice on sources of steam ejector design principles. I've seen one..." (Jun 14, 2010, august8; 6 replies, 30,436 views)
* "Would it be possible to build a steam engine using parts made primarily of carbon fiber? Any engine?" (Sep 2, 2007, Paulanddiw; 12 replies, 3,465 views)
* "Hi, my first post. So brief presentation, I'm an undergraduate mechanical engineer student. I hope this doesn't..." (Jan 30, 2012, 256bits; 2 replies, 1,745 views)
* "Hi, here's a problem I've recently come across, and it's bugging me that I can't wrap my mind around it. I need to..." (Jun 21, 2012, Q_Goest; 7 replies, 1,220 views)
* "I have made a solar collector out of a 10' diameter satellite dish and lexan mirrors. I have a collector that..." (Aug 20, 2008, chayced; 10 replies, 5,865 views)
* "im from India, in my third year of a B.E. Mech degree course... and my group is planning to do steam jet..." (Jan 29, 2010, CFDFEAGURU; 4 replies, 6,089 views)
* "Question: http://i1179.photobucket.com/albums/x393/mariaschiffer/img017.jpg Can anyone plz check whether this..." (May 24, 2011, oliverjames; 0 replies, 1,812 views)
* "Hi all, I'm looking for the design of steam platens for rubber curing presses. The steam platens heat the green rubber..." (Jul 26, 2013, Bandit127; 5 replies, 551 views)
* "Carnot Cycle is the ideal heat engine cycle with maximum efficiency. We use the Rankine Cycle as the working cycle for..." (Oct 7, 2004, Clausius2; 7 replies, 17,615 views)
* "Hello All, I am currently trying to optimize a steam power plant cycle and I am stuck... What I am unable..." (Dec 21, 2008, Blaine; 0 replies, 4,761 views)
* "Hi professionals, im a student from philippines and im about to design a steam power plant with 74000 KW load. Our..." (Dec 1, 2009, Redbelly98; 1 reply, 2,771 views)
* "I will start my junior year as a ME major in the fall, I currently am working for a power company on a co-op. I work..." (Jul 3, 2006, Astronuc; 9 replies, 4,038 views)
* "Hi All, For a small steam turbine to generate power at a rate of 50 kW/h. How many BTU's of heat required per..." (Dec 6, 2008, RonL; 12 replies, 3,233 views)
* "Has anyone experience with the SST-9000 Siemens steam turbine here? I am much astonished to see the steam parameters and..." (Jan 31, 2013, jim hardy; 9 replies, 1,025 views)
* "Hello, i would like to do structural analysis for low pressure spindle blades in Ansys apdl.. but i couldnt do it..." (Jul 25, 2011, Damian123; 3 replies, 1,770 views)
* "Can somebody aid my understanding: I am re-studying thermodynamics at the moment 6 years on from when I last studied..." (Oct 21, 2009, Topher925; 2 replies, 6,020 views)
* "hii frends, im a doing a project on the efficiency improvement of a velocity compounded impulse steam turbine. we..." (Jan 11, 2010, paul89; 4 replies, 6,514 views)
* "Can someone explain something to me: I believe when throttling h1=h2. When calculating enthalpy drop across a..." (Nov 6, 2009, Q_Goest; 2 replies, 3,017 views)
* "(English is not my native language so I apologize for the rest of the post) Hi everyone! I'm an engineering..." (Aug 22, 2013, marciokoko; 17 replies, 1,773 views)
* "Hello, I am trying to design main inlet steam nozzles and stator nozzles for an impulse type steam turbine. (Rateau..." (May 3, 2013, nautilus1; 0 replies, 542 views)
* "Hello, would someone know a real-life example value of the steam velocity at the turbine outlet? Also, an example..." (Mar 20, 2013, Sunfire; 6 replies, 810 views)
* "Hello All, I'm working on a project and one of the main parts is this pressure vessel that is sealed air tight that..." (Jan 6, 2012, berko1; 9 replies, 1,591 views)
* "This topic is about streamlines, pathlines and streaklines. Please could you see if you could help me out, only if you..." (Jan 19, 2007, FredGarvin; 1 reply, 6,596 views)
* "Hi every one. As the Steel Frame Manual mostly describes the force analysis in vertical loading, I wonder how can i..." (Aug 5, 2009, Su Solberg; 2 replies, 2,723 views)
* "I need a hand choosing the right size, thickness and shape of steel for a bracket to mount an old style 5 gallon Jerry..." (Aug 10, 2010, Ranger Mike; 11 replies, 2,224 views)
* "I'm using the basic deflection equation to determine how thick a plate of steel would have to be to support a load of..." (Aug 25, 2012, lvwarren; 3 replies, 995 views)
* "hello friends, i am currently working on a project in which a steel rope is used to carry an object in a loop just..." (May 9, 2012, Pkruse; 12 replies, 1,492 views)
* "I am building a new deck - from wood tables I know that my joists need to be 2 x 8 to span the 10 feet of my pond..." (Jan 4, 2010, nvn; 9 replies, 6,145 views)
* "Im making an A frame to tow a quad bike behind a car, i need to know something because it affects the design..." (May 8, 2011, oldfartuk; 14 replies, 3,102 views)
# Are the words "easy," "basic," "clearly," "obviously," etc., ever helpful?

> This is a very basic fact from...
> It then clearly follows that...
> Obviously, we have...
> The proof is trivial...

I could add plenty of other phrases to this list that mathematicians are prone to use when trying to communicate that a particular concept is so simple that they refuse to discuss the details. I can't think of a time when words like these helped me. Usually they just make me realize how lost I am, and how much more the professor/author knows than I do. Even if I agree that the fact is basic, or the proof is trivial, it didn't help me to hear that it was easy or trivial, because I already thought that.

To get to my question, consider the following amendments:

> This is a fact from...
> It then follows that...
> We have...
> The proof is left as an exercise.

Is there ever a context in which it is more helpful for students to hear the first set of sentences as compared to this second set? Are words like "easy" or "trivially" ever constructive? In what setting? As math educators, should we work to weed them out of our written/spoken vocabulary?

• Something related was discussed on MathOverflow mathoverflow.net/questions/16193/… – quid Apr 22 '14 at 18:00
• Yes, they are helpful. For example, when there are multiple inequalities that could apply, I normally opt to use sharper bounds; however, it might be enough to apply something as rough as the union bound. It's also a short way of letting the audience know that nothing unexpected happens. In other words, such a statement "prunes the search tree" and might sometimes lead to great speed-ups. Nevertheless, one should not use such phrases if they are not warranted, e.g. if it only seems obvious, but was not properly checked. – dtldarek Apr 22 '14 at 19:32
• Also related: Mathematics StackExchange and Academia StackExchange – Joel Reyes Noche Apr 23 '14 at 2:10
• This page is about the computer engineering use of "trivial", which is similar to the mathematician's use; you might find it amusing. fishbowl.pastiche.org/2007/07/17/… For me personally, "clearly" usually indicates where I'm about to make an error. :-) – Eric Lippert Apr 23 '14 at 13:01
• I'd add that I agree with your statement that the characterization of a theorem as trivial is either insulting to the reader if the theorem is not understood, or redundant if it is. I edit programming books as a hobby and often discourage writers from saying things like "It is easy to do X in C#"; the reader who already finds it easy likely does not have to read the book! – Eric Lippert Apr 24 '14 at 6:33

The main point here is that these words/expressions should not be used as a substitute for an argument. They obviously have some negative effects:

• You evaluate your students by them, and not in the positive sense. If you say "obviously", then it sends the message that "it had better be obvious". And if it is not clear to someone, then it is an evaluation that this person is not up to the course. This can quickly turn into humiliation of your audience (or reader, if it is in notes).
• If you turn out to be wrong, then you lose face quickly. This is a consequence of my previous point: you humiliated your audience, and it turned out that you did it wrongly. You are not a person to be taken seriously.

You should, however, use these words in context, as part of an explanation. For example, to stress that though a calculation looks complicated, it is actually a simple idea that lies behind it.

• Yes!
One of my old math teachers always banned us from saying "Basically..." when we were explaining our solutions up at the board. It was deemed "a put-down word", because it implied that if you didn't understand, you were stupid. – SimonT Apr 23 '14 at 0:11
• @SimonT well I wouldn't classify "basically" as one of those words. That can be used when you've just explained a complicated series of steps that can be summarized more concisely. – David Z Apr 23 '14 at 2:05
• +1 for "should not be used as a substitute for an argument". – Nico Burns Apr 30 '14 at 22:48

---

I can think of a few instances where it might be useful:

1. To situate the current piece of the concept among others coming up
2. To call back to something earlier in the course that really should be easy to them at this point
3. To intentionally make fun of something you know they thought was stupid (e.g. high school? if you're teaching a proofs course)

And, examples of each of these:

1. To start off this proof, we prove this easy (or easier) theorem... (again, the goal is to tell them that something harder is coming soon)
2. Obviously, we have 2n+1 is odd, by the definition of odd. (again, later in the course--not if you've just introduced quantifiers)
3. And now, to finish off this proof, we do a trivial computation of this obnoxious integral. You all learned this in high school, right? (here, the goal would be for people to roll their eyes)

I do agree with you on the general idea though. We use these terms far too often when we're explaining things, and they are almost always not a good idea.

• On point (2), it can sometimes be outright confusing if an author doesn't acknowledge that an obvious step is obvious. I'm left worrying that I don't understand something, and it's more complicated than I think. – Jack M Apr 22 '14 at 18:03
• @JackM Almost always being, "at all but finitely many times" or "at all but countably many times"? :P – M. Vinay Oct 31 '14 at 16:46

---

In addition to what @andras-batkai said, seeing words like 'obviously', 'clearly', and so on, in assignments or texts raises red flags and scepticism with me. (Recall that, for thousands of years, it was obvious that the earth was flat. Also, obviously a set contains all of its elements.)

As both a former student of formal logic and an occasional tutor, this is a pet peeve of mine. I feel these words should not be in educational or instructional texts (manuals, course notes, troubleshooting):

• They don't improve comprehension. If something is obvious, by its nature, it doesn't need to be pointed out.
• They put the burden of mathematical rigour on the reader, rather than the author. This makes it more likely for hidden assumptions to leak into proofs.
• They sweep complexity under the rug. If it really is obvious, wouldn't the explanation fit into a footnote?
• They discourage discussion. If the assumption does turn out to be wrong, students will be more likely to assume they misunderstand than they are to ask about it.

I did notice that this practice seems to be going out of style. At least in institutions I've worked with, lecturers are advised to take teaching classes, and it looks like this topic is given some attention.

P.S. The author experimented with the word 'obviously' in providing IT support to lecturers. The experiment was not well received.

There aren't any four-letter words in the set, so they are allowed in polite company. But use them sparingly, and when you really mean it.
When you say something is "simple," you should make sure it really is simple for most of your audience. Be careful: what is trivial to you (presumably the world expert on the topic you are writing about) can very well be a profound mistery to the average reader. If in doubt, err on the side of explaining (a bit) too much.

• Have you deliberately combined "mystery to the average reader" with "misery to the average reader"? – user173 Apr 22 '14 at 19:54
• @MattF., no. Chalk it up to "fortunate typo" ;-) – vonbrand Apr 22 '14 at 20:04

---

Yes, they are useful, but they can be over-used, or used when not true.

If you state "it is a basic fact from...", and the reader does not see why it follows, then they know that they're not following your argument as you intended it to be followed. If you merely state "it is a fact from...", and the reader does not see why it follows, then they might think that it's a deep result and that you intend either that they should accept it on trust, or that you're presenting a lemma to be proved later.

Compare:

"It is a very basic fact from distributivity/associativity/commutativity that $(1-x)(1+x) = 1 - x^2$" -- the reader should be able to immediately visualize the proof.

"It is a very basic fact from arithmetic that $(1-x)\sum\limits_{i=0}^{n-1}x^i = 1 - x^n$" -- the reader can immediately see how a proof for a fixed $n$ might be carried out, and you're telling them that the calculations do indeed work out, to save them the bother of writing it. The detail around the implied "for all $n$" is being ignored, which is a little shady. Really you want a proof that retains the summations and therefore involves distributivity over the summation symbol rather than just distributivity over binary addition, and that's still within the reach of the reader.

"It is a fact from the Axiom of Choice that every set has a well-order" -- the reader might well know or be able to construct a proof, but they're not expected to instantly produce it. It is not a "very basic" consequence except perhaps to an audience of skilled set theoreticians who are indeed all expected to trot out that proof.

"It is a fact from the Axiom of Choice that a sphere can be decomposed into finitely many pieces, which themselves can be rotated and translated to produce two spheres of equal size to the first. This is the so-called 'Banach-Tarski paradox'" -- information presented for interest and not proved. The student probably thinks "okaay, I can't even imagine how to prove that", but the proof might come in a sufficiently specialized undergraduate course.

There is a similar difference between "the proof is trivial" and "the proof is left as an exercise". If someone attempts the exercise and finds themself part way into a proof that isn't trivial, then if you've said it's trivial they know there's a better proof they've missed. If you haven't, they don't.

"Clearly follows" and "obviously" are similar, although I think they're frequently mis-used for things that objectively are neither clear nor obvious to part of the audience. It's just that the speaker thinks they ought to be clear or obvious and therefore doesn't care to spend time and screen real-estate on them. There's another such phrase, "it follows immediately", which asserts that no new gizmos need to be introduced to the proof. To stretch the use of the term, it may turn out that "immediate" actually means a few lines of multiplying out brackets, but that's OK.

---

There are two separate issues here.
The first is whether there's important work which the word clear/trivial/obvious is doing when used in mathematical explanations. As explained by Jack M. and Steve Jessop and several of the MO posts, the answer to that question is yes: it indicates to the reader/listener that there's a very short argument that's being skipped over, rather than something deep.

The second issue is whether "clear/trivial/obvious" are good choices of words to use for the concept. I think in the context of teaching the answer is no. (In a research context they're fine, because all readers will know that they're technical terms of art that don't actually mean what clear/trivial/obvious mean in everyday speech.) However, I don't know what would be a better choice of word which would be less likely to be misunderstood. Has anyone found a word choice that works better with students?

• Interesting point, although everyone I know was very familiar with the term of art well before "research context". I suppose that once you've introduced your students to proofs, and started presenting proofs long enough that there are details worth eliding that the reader/listener is competent to fill in, you have a choice. You can use a terminology that works for them, or you can teach them what "clear/trivial/obvious" means in this context and then use the term of art. Or some combination of the two, of course. – Steve Jessop Jan 3 '15 at 3:37

---

I would argue that they are. When one reads a word such as "clearly", it is a sign that an argument has been omitted, and that said argument should be relatively easy to find. I think it engages the reader and makes them "participate" with the material more. I wouldn't worry about making people feel bad when they don't get something - everyone eventually has to learn to get over that feeling (and we all experience it, time and time again). Students who are assertive will ask you if they didn't understand something you said or wrote.

• There is a difference here. When stating that something "clearly" holds, it is also stated that it is expected of you to get it, and if you don't get it, you are not up to the course. This would not be a big problem if it were objective. But it is highly subjective what is "obvious" and what is not, independently of mathematical ability. – András Bátkai Apr 23 '14 at 11:35
• Feeling bad for not getting something shouldn't be accompanied by having to put up with someone telling you (written or otherwise) that it is obvious/clear/easy to see. Moreover, like András said, "relatively easy to find" is also subjective. – Mark Fantini Apr 23 '14 at 21:06
• I think that worrying about how people feel when they hear words like "easy" is an issue that really shouldn't be ignored. It is obvious to any instructor who cares about understanding that a student who feels like everyone else finds something easy (even if they don't) will be unable to fully engage with the material. I take your point about it signalling a missing argument, but I think it is important to contrast the difficulty of constructing an argument with the ease of presenting it. What is clear or obvious in presenting an argument is often difficult to come up with in the first place. – jbaldus Apr 25 '14 at 2:16

---

I would say that "trivial" is okay; the others are probably less so. While "trivial" has connotations of "exceptionally easy", in math speak we often use it to roughly mean "an exceptionally simple statement, requiring little complexity to prove or define".
I'd probably shy away from "the proof is trivial" a little more, but even so, if it's a tiny 3-line lemma, even if it's not easy and involves a huge leap of logic, it's still "trivial" by my metric. I view "trivial" like "simple", and "simple doesn't mean easy" is practically a catch phrase of mine at this point. Things can be simple in that they only rely on two or three axiomatic facts, but actually thinking of which two or three axioms to use isn't always easy. It may pay off to remind students of this fact. "It's not exceptionally complex, but can be hard to understand" goes a long way.

Basic is the next I'd be okay with, but only in limited cases. I'd pretty much solely use "basic" to refer to things like so-called "basic facts" (2+2=4; you can't divide by zero, etc), or more generally literal axioms of the system you're working in.

"Obviously" and "clearly" I'd avoid at all costs, unless maybe you're saying "2+2=4". Though I've been known to use "obviously" in humor, especially with contradictions ("And, well, obviously 0=4 so..."). It just alienates people who didn't get the intuition. Unfortunately, these two words are almost the proof-writer's version of "um...". People don't notice they're saying them; they're filler words. I always try to proofread them out of anything I write; it actually tends to make things more readable, in addition to not alienating the students who don't understand a concept.

Perhaps one tiny caveat is that some things look more complex than they are, and it can be useful to reassure students that this gnarly thing isn't really that scary. Newton-Raphson comes to mind as something that made my brain overflow when I first saw the formal definition of it in a textbook we were using. Being assured that, yes, it really is just this simple iterative process was helpful in making it less scary. See also: functions with 25 Greek letters where 24 of them are stupid constants.

• Functions with 25 Greek letters are highly suspicious. – user11235 Apr 30 '14 at 19:50
• @user11235 Deliberate absurdity, I know the Greek alphabet only has 24 letters :p – LinearZoetrope Apr 30 '14 at 19:53
• That's why Greek has two different lowercase sigmas. One of which is easily confused with zeta given the typical shabbiness of mathematicians' handwriting. – Steve Jessop Jan 3 '15 at 3:47

---

These words can be quite helpful, because it is important to know how difficult a skipped proof is. The problem is that these words have contradictory meanings, they depend on context, and they are often used out of laziness. Since it can be quite frustrating to get stuck on an abused "trivial", the trust of students and readers in this kind of word can be low, so it is almost always better to be more precise, either by giving more details on the difficulty:

"The fully-written proof of this lemma would take half a page of intermediate difficulty among the proofs written in this book."

"The reader can verify the base case of the induction (same difficulty as the one-star exercises in this book)."

or by being more honest about the sense of trivial:

"The proof of this result is elementary, uninspiring and long, so we skip it."

"The proof of this result is so well-known that I expect my readers to know it."

"I am so scared about boring anyone that I would rather risk losing half the audience."

---

"Obviously" and its ilk do not contain much information. They can be replaced by referring to the tool that is used to prove or notice the obvious fact.
For example: by triangle inequality, by definition (of something specific where necessary), by calculation, by integrating by parts. Also: by lengthy calculation, by very clever choice of test function, and so on.

Here the statement communicates something about the difficulty and the tools used, thus making it easier to immediately verify something, or alternatively telling why one should not be discouraged at not immediately seeing it.

---

I think the most important thing to keep in mind is your audience. Obviously, if you were teaching a calculus class you wouldn't have to remind them how to add fractions. Well, in this case the correct word might be "hopefully". Hopefully you don't have to remind them, but I've had calculus students that couldn't add two fractions to save their lives. So one should really think about one's audience first.

However, the use of such phrases builds confidence if used appropriately. I remember when I was a student and my professors would use "trivial" or "obvious"... sometimes I would agree and other times I would disagree. When I agreed, I felt that all my studying paid off. The problem is when the student disagrees. In that case, it's up to both the student and the professor to bridge the gap. This is why it's important for both students and teachers to ask questions. This is just one facet of the question that I feel wasn't mentioned.

• I don't have the rep to comment, but @Carlos - if you use "hopefully" there's no clarity of meaning between a) I hope I don't have to remind you of the standard method used to add fractions and b) I hope the standard method of adding fractions, which you're familiar with, is true in most instances. – portll Apr 23 '14 at 21:38
# Scripted letters in papers

1. Jun 2, 2017 - Afonso Campos

I was wondering if someone could help me figure out what a scripted letter in a paper I'm reading is called. The scripted letter is in equation (3). It looks like some kind of a scripted T.

2. Jun 2, 2017

3. Jun 2, 2017 - Staff: Mentor

It is actually an I, representing the imaginary part, obtained using \Im: $$\Im \omega_0 \leq \pi T_\mathrm{BH}\, .$$

4. Jun 2, 2017 - Staff: Mentor

Arrrgg! MathJax doesn't render that with the right font. Here is how it will look in $\LaTeX$: (rendered image missing from the extraction)

5. Jun 2, 2017 - Staff: Mentor

You can test it in the editor section: type \mathfrak{something} between dollar signs and see what it looks like - $\mathfrak{something}$ - with the editor's "preview" function. Frequently used fonts are:

\mathbb{} for double lines, as in $\mathbb{R}$
\mathcal{} for a look similar to handwriting, $\mathcal{G}$
\mathfrak{} Fraktur for "old" letters, as in $\mathfrak{su}(2)$

Here's an overview of LaTeX symbols: http://detexify.kirelabs.org/symbols.html

6. Jun 3, 2017 - Afonso Campos

Makes sense! Thanks!
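For readers who want to try these locally, a minimal compilable sketch (my own example; the \Im line mirrors the equation quoted above, and amssymb supplies \mathbb and \mathfrak):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
$\Im\,\omega_0 \leq \pi T_\mathrm{BH}$ \quad
$\mathbb{R}$ \quad $\mathcal{G}$ \quad $\mathfrak{su}(2)$
\end{document}
```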
# Find the supplement of the angle $\large\frac{4}{5}$ of $90^{\circ}.$ $( A ) 108^{\circ} \\ ( B ) 75^{\circ} \\ ( C ) 72^{\circ} \\ ( D ) 110^{\circ}$
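A one-line worked solution (standard arithmetic; it matches option (A)):

```latex
\frac{4}{5}\times 90^{\circ} = 72^{\circ},
\qquad
180^{\circ} - 72^{\circ} = 108^{\circ}.
```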
# [SlimDX, DX11] Read From Texture Into Array

## Recommended Posts

I'm sure I'm just being stupid, but I really can't figure out how to read the contents of a texture into an array using SlimDX and DirectX 11. In DirectX 9 it was easy: simply call LockRectangle and read from the resulting DataRectangle. I understand that in DirectX 11 I need to call MapSubresource to get a DataBox and then use that. However, I can't figure out what I should be passing to the parameters of MapSubresource, and once I've done that I don't know what to do with the resulting DataBox.

I'm currently doing the following to load an R32F DDS texture, and the values in my array seem completely wrong. It's as if it's only reading from every 4th row of my texture. Note: renderSystem.MapSubresource passes the parameters directly to Device.MapSubresource().

```
ImageLoadInformation info = new ImageLoadInformation()
{
    BindFlags = BindFlags.None,
    FilterFlags = FilterFlags.None,
    Format = SlimDX.DXGI.Format.R32_Float,
    MipFilterFlags = FilterFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    Usage = ResourceUsage.Staging,
};

float[] data = new float[2048 * 2048];
DataBox box = renderSystem.MapSubresource(tex, 0, tex.Description.Width * tex.Description.Height * sizeof(float), MapMode.Read, MapFlags.None);
```

Am I doing something obviously stupid? (Probably)

---

Hmm, your parameters seem to be ok. Maybe the "float" is problematic, try "int". Or the texture is somehow compressed (4x4 blocks) and you got the raw data?

---

The ReadRange method is a templated method that is supposed to take a datatype as the template parameter. Without looking at the source, I would guess that the default type is Byte. Try:

```
ReadRange<float>(data, 0, 2048 * 2048);
```

---

Also, try to fill all the fields of the image info structure (as in mip levels, width and height).

---

Another thing: check if the pitch is really what you think it is (the drivers sometimes can use some extra data at the end of each scanline).

---

> Pyrogame: Hmm, your parameters seem to be ok. Maybe the "float" is problematic, try "int". Or the texture is somehow compressed (4x4 blocks) and you got the raw data?

I've not had a chance to try this, but it's a floating point texture, so reading in using sizeof(int) seems completely wrong.

> Nik02: The ReadRange method is a templated method that is supposed to take a datatype as the template parameter. Without looking at the source, I would guess that the default type is Byte. Try: ReadRange<float>(data, 0, 2048 * 2048);

It's a generic method (I know, same difference, but there is a distinction) and picks up that it should be outputting floats from the data parameter quite happily.

> Nik02: Also, try to fill all the fields of the image info structure (as in mip levels, width and height).

I'll try this later on. Although they seem to be being picked up fine by the Texture2D.FromFile method, the Description structure for the texture is as I would expect after loading.

> feal87: Another thing: check if the pitch is really what you think it is (the drivers sometimes can use some extra data at the end of each scanline).

No matter what I do the pitch comes out as 8192. Which, since it's a 2048x2048 texture and sizeof(float) is 4...
seems right to me.

---

I'm surprised there is not a sample out there for this somewhere, or no one has done it already.
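On the pitch point raised in the thread: here 8192 is exactly 2048 × 4, so there happens to be no padding, but robust readback code should still index each row by the reported pitch rather than assuming tight packing. A language-agnostic sketch of that pattern (in Python, with hypothetical names; `mapped` stands for the raw mapped bytes and `row_pitch` for DataBox.RowPitch):

```python
import struct

def read_r32f(mapped: bytes, width: int, height: int, row_pitch: int) -> list:
    """Copy a pitched R32_FLOAT readback into a tight list of floats."""
    values = []
    for y in range(height):
        start = y * row_pitch                  # row origin includes any pitch padding
        row = mapped[start:start + width * 4]  # only width*4 bytes per row are real data
        values.extend(struct.unpack(f"<{width}f", row))
    return values
```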
# Proof that $\{a^ib^jc^k\mid i,j,k\in\mathbb{N}, i<k<j\}$ is not context-free using the Pumping Lemma

$$L=\{a^ib^jc^k \mid i, j, k \in \mathbb{N} \text{ and } i < k < j\}$$

I need to show that this language is not context-free with the help of the Pumping Lemma. My first intuition is that there exist 5 different cases, i.e. the middle part, let's call it $vwx$, consists of

1. only $a$'s
2. only $b$'s
3. only $c$'s
4. $a$'s and $b$'s
5. $b$'s and $c$'s

and I need to find a pumping constant which excludes the new word from the above defined language. However, I am having a hard time showing that formally and precisely. Any hints are highly appreciated!

---

No, you don't have to find a pumping constant. To the contrary, you have to show no such constant can exist. So, the general argument is usually like "if I assume $N$ is the pumping constant, I can use this word $x\in L$, longer than $N$, and whatever I try, we cannot pump it and stay in $L$." Usually one chooses a string that is "just" inside the language, in this case $a^Nb^{N+2}c^{N+1}$. Now check your cases. What if we pump only $a$'s, or only $b$'s, but no $c$'s, etcetera.
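To sketch how a few of these cases play out with that word (my own elaboration, using the decomposition naming from the question, $s = uvwxy$ with $|vwx| \le N$ and $|vx| \ge 1$; the mixed cases follow by the same counting):

```latex
s = a^{N} b^{N+2} c^{N+1} \in L, \qquad s = uvwxy,\ |vwx| \le N,\ |vx| \ge 1 .
% If v and x together contain only a's, then uv^2wx^2y has
%   i' \ge N+1 = k, contradicting i < k.
% Only b's: pumping down gives j' \le N+1 = k, contradicting k < j.
% Only c's: pumping up gives k' \ge N+2 = j, contradicting k < j.
```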
Question #f68d0

Mar 23, 2017

$\sec x - \sin x \tan x$

Explanation:

Since $\csc x = \frac{1}{\sin x}$ and $\cot x = \frac{\cos x}{\sin x}$, substitute these in to get

$$\frac{\frac{1}{\sin x} - \sin x}{\frac{\cos x}{\sin x}}$$

Multiply by the reciprocal of the denominator:

$$\left(\frac{1}{\sin x} - \sin x\right) \cdot \frac{\sin x}{\cos x}$$

We end up with this after using the distributive property:

$$\frac{\sin x}{\sin x \cos x} - \frac{\sin^2 x}{\cos x}$$

Cancel out the two $\sin x$s on the left:

$$\frac{1}{\cos x} - \frac{\sin^2 x}{\cos x}$$

Since $\frac{1}{\cos x} = \sec x$ and $\frac{\sin x}{\cos x} = \tan x$, the simplified answer is

$$\sec x - \sin x \tan x$$

EDIT: Ignore this, not fully simplified. See Scott's answer.

Mar 23, 2017

$\cos x$

Explanation:

First put everything in terms of sine and cosine. $\csc x = \frac{1}{\sin x}$ and $\cot x = \frac{\cos x}{\sin x}$, so

$$\frac{\csc x - \sin x}{\cot x} = \frac{\frac{1}{\sin x} - \sin x}{\frac{\cos x}{\sin x}} = \left(\frac{1}{\sin x} - \sin x\right)\left(\frac{\sin x}{\cos x}\right)$$

Then distribute multiplication over subtraction:

$$\left(\frac{1}{\sin x} - \sin x\right)\left(\frac{\sin x}{\cos x}\right) = \left(\frac{1}{\sin x} \cdot \frac{\sin x}{\cos x}\right) - \left(\sin x \cdot \frac{\sin x}{\cos x}\right)$$

Multiply inside the parentheses:

$$\left(\frac{1}{\sin x} \cdot \frac{\sin x}{\cos x}\right) - \left(\sin x \cdot \frac{\sin x}{\cos x}\right) = \frac{\sin x}{\sin x \cos x} - \frac{\sin x \sin x}{\cos x}$$

And simplify:

$$\frac{\sin x}{\sin x \cos x} - \frac{\sin x \sin x}{\cos x} = \frac{1}{\cos x} - \frac{\sin^2 x}{\cos x} = \frac{1 - \sin^2 x}{\cos x}$$

Using the Pythagorean identity $\sin^2 x + \cos^2 x = 1$, we know that $1 - \sin^2 x = \cos^2 x$, so substitute that in:

$$\frac{1 - \sin^2 x}{\cos x} = \frac{\cos^2 x}{\cos x}$$

And finally, simplify:

$$\frac{\cos^2 x}{\cos x} = \cos x$$
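A quick numeric spot-check of the final identity (my own check; any $x$ that avoids division by zero works):

```python
import math

x = 0.7  # arbitrary test angle, not a multiple of pi/2
# (csc x - sin x) / cot x should equal cos x
lhs = (1 / math.sin(x) - math.sin(x)) / (math.cos(x) / math.sin(x))
print(lhs, math.cos(x))  # both print ~0.76484
```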
# Success means leaving their birthplace, which chapter and page? It's taken for granted that success for Paul, as for William before him, will mean leaving his birthplace.
# Multicolumn environment in custom environment

I want to use the multicol package in my own environment by using a counter, such that it looks like

```
\newenvironment{Test}{\ifnum\theCnt=1%
\begin{multicols}{1}
\else
\begin{multicols}{2}
\fi}{\end{multicols}}
```

In theory that is definitely possible, but I want to extend that now, due to having further environments in my Test environment. It will look like

```
\begin{Test}
\begin{A} \end{A}
\begin{B} \end{B}
\begin{A} \end{A}
\begin{B} \end{B}
\end{Test}
```

Now the interesting part is that environments A and B modify the counter, i.e. the first occurrence of environments A and B should be set as one-column environments, but every further environment should be set as a two-column environment, thus making it impossible to simply modify the environment definition.

A further problem: I want each of the second two units to occupy a single column, i.e. for an input of

```
\begin{Test}
\begin{A} \end{A}
\begin{B} \end{B}
\begin{A} \end{A}
\begin{B} \end{B}
\begin{A} \end{A}
\begin{B} \end{B}
\end{Test}
```

the result should look like

```
A
B
A  A
B  B
```

The same effect would be doable if I write

```
\begin{Test1}
\begin{A} \end{A}
\begin{B} \end{B}
\end{Test1}
\begin{Test2}
\begin{A} \end{A}
\begin{B} \end{B}
\begin{A} \end{A}
\begin{B} \end{B}
\end{Test2}
```

with Test1 defining a single-column environment, and Test2 a multi-column environment. But I want to have everything in one environment called Test. Is that possible, and if yes, how?

• If I understand, you want the contents of the first A and B in one column, then the contents of A and B ... in two columns? You declare 2 and 3 columns. How do A and B change the counter? Do you use A, B, A, B ... always in this order? – touhami Mar 20 '16 at 7:45
• @touhami: Yes, A and B are one unit, consisting of two different environments. The first occurrence of that unit should be in one column, every further one in two columns. Fixed that typo in my question above. – arc_lupus Mar 20 '16 at 8:32

Here is a solution, using environment hooks from etoolbox:

```
\documentclass{article}
\usepackage{multicol}
\newif\iffirst
\newenvironment{Test}{\section{AB}}{[stuff]}
\newenvironment{A}{\section{A}}{\hrulefill}
\newenvironment{B}{\section{B}}{\dotfill}
\usepackage{etoolbox}
\AtBeginEnvironment{Test}{\firsttrue}
\AfterEndEnvironment{B}{\iffirst\firstfalse\begin{multicols}{2}\fi}
\AtEndEnvironment{Test}{\iffirst\else\end{multicols}\fi}
\begin{document}
bla bla
\begin{Test}
\begin{A} AAAA \end{A}
\begin{B} BBBBB \end{B}
\begin{A} AAAA \end{A}
\begin{B} BBBBB \end{B}
\begin{A} AAAA \end{A}
\begin{B} BBBBB \end{B}
\end{Test}
\begin{Test}
\begin{A} AAAA \end{A}
\begin{B} BBBBB \end{B}
\end{Test}
\end{document}
```
# Topic review

martin

## Re: Connect without browse directory

Please attach a full session log file showing the problem (using the latest version of WinSCP). To generate the session log file, use the /log=C:\path\to\winscp.log command-line argument. Submit the log with your post as an attachment. Note that passwords and passphrases are not stored in the log. You may want to remove other data you consider sensitive, though, such as host names, IP addresses, account names or file names (unless they are relevant to the problem). If you do not want to post the log publicly, you can mark the attachment as private.

jlindgren

## Connect without browse directory

I have an FTP site I'm needing to send files to that will not allow the FTP client to browse the directory upon connection. WinSCP seems to inherently want to browse the directory even if using command lines. I need to connect with batch and txt files, open the connection with user and password and then do a simple PUT command. Is there a way to shut off the directory browsing so I can accomplish this? Thanks.
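For context, a scripted upload along the lines the poster describes usually looks like the sketch below (host, credentials and paths are placeholders of mine; the put command with an explicit remote path avoids any interactive browsing, though whether the server's initial listing can be suppressed depends on the server itself):

```
winscp.com /log=C:\path\to\winscp.log /command ^
  "open ftp://user:password@example.com/" ^
  "put C:\upload\report.txt /incoming/" ^
  "exit"
```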
# Forecast Error: 4-Month Moving Average vs. 4-Month Weighted Moving Average

###### Question:

Forecast error, 4-month moving average; forecast error, 4-month weighted moving average; differences in error. (The time period/value table is garbled in the extraction.) In each time period, the four-month moving average produces ______ forecast errors compared with the four-month weighted moving average.

#### Similar Solved Questions

##### Come-Clean Corporation produces a variety of cleaning compounds and solutions for both industrial and household use...

Come-Clean Corporation produces a variety of cleaning compounds and solutions for both industrial and household use. While most of its products are processed independently, a few are related, such as the company's Grit 337 and its Sparkle silver polish. Grit 337 is a coarse cleaning powder wit...

##### A model train, with a mass of 9 kg, is moving on a circular track with a radius of 15 m...

A model train, with a mass of 9 kg, is moving on a circular track with a radius of 15 m. If the train's kinetic energy changes from 72 J to 36 J, by how much will the centripetal force applied by the tracks change?

##### The following information is provided...

The following information is provided: $n = 49$, $\bar{X} = 54.8$, $\sigma = 28$, $H_0: \mu = 50$, $H_a: \mu \neq 50$. If the test is done at a level of significance of 5%, the null hypothesis should

Select one: A. None of the other answers is correct; B. be rejected; C. Not enough information is given to answer this question; D. not be rejected

##### Fey Fashions expects the following dividend pattern over the next seven years...

Fey Fashions expects the following dividend pattern over the next seven years. The company will then have a constant dividend of $2.60 forever. What is the stock's price today if an investor wants to earn a. 14%? b. 23%?

##### A gambler claims that the dice he uses are fair...

A gambler claims that the dice he uses are fair, meaning the hypothesized proportion for each of the six possible outcomes is 1/6. However, an opposing player is suspicious of this claim. To test it, he secretly rolled the dice 100 times and recorded the number of 1s, 2s, 3s, 4s, 5s and 6s that resulted, as shown in the grid below. Use Excel to conduct a Chi-Square Goodness of Fit Test of this claim at the 1% significance level. (The observed/expected frequency table is garbled in the extraction.)

##### (8 points) Compute the following antiderivatives...

(8 points) Compute the following antiderivatives. The integrands may require some manipulation or identities before you recognize them as a derivative.

(a) (2 points) Compute $\int (1+\tan^2 x)\,dx$.

(b) (2 points) Compute $\int (x-9)(x+9)\,dx$.

(c) (4 points) In a few complete sentences, briefly explain the manipulations you did to make parts (a) and (b) work. How is this different from the way you would approach computing derivatives?
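For the antiderivative question just above, the two integrals work out in one line each (my own worked check, using $1+\tan^2 x = \sec^2 x$):

```latex
\int (1+\tan^2 x)\,dx = \int \sec^2 x\,dx = \tan x + C,
\qquad
\int (x-9)(x+9)\,dx = \int (x^2 - 81)\,dx = \frac{x^3}{3} - 81x + C.
```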
##### Define two vector functions...

Define two vector functions $F(t) = 3\sin(t)\,\mathbf{i} + 8\cos(t)\,\mathbf{j} + 8t^2\,\mathbf{k}$ and $r(t) = 8\sin(t)\,\mathbf{i} + 3\cos(t)\,\mathbf{j} + (t+8)\,\mathbf{k}$ (some signs are garbled in the extraction). Compute $F(t) \cdot r(t)$.

##### Slope of the tangent to a polar curve...

(Unrecoverable extraction garble: a question about the slope of the tangent line to a polar curve at a given point, with the answer to be typed as an exact value.)

##### Write a paragraph...

Write a paragraph on how ABB Automation is consistent.

##### 1. (HPR)...

1. (HPR) The price of ACB, Inc. stock is $34. In six months, its price is $39 per share. What is its holding period return? Please show all work.

##### An object revolving in a horizontal circle...

(Unrecoverable extraction garble: a physics question about an object revolving in a horizontal circle of a given radius; find the magnitude of the required force.)

##### Let p represent the statement...

Let p represent the statement "Jim plays football", and let q represent "Michael plays basketball". Convert the compound statement into symbols: Jim does not play football or Michael plays basketball. 1. $p \wedge q$ 2. $\sim p \vee q$ 3. $p \vee q$ 4. $\sim(p \vee q)$ (the connectives in the options are garbled in the extraction).

##### Which ocean...

Which ocean can be found on the southern part of the USA? (TEAS 4)

##### Find the z-component...

Find the z-component. Express your answer using significant figures.

##### A pizza shop claims...

A pizza shop claims its average home delivery time is 28 minutes. A sample of 35 deliveries had a sample average of 30.8 minutes. Assume the population standard deviation for the shop's deliveries is 6 minutes. Complete parts a and b below. a. Is there support for the shop's claim using the criteria that the sample average of 30.8 minutes falls within the symmetrical interval that includes 95% of the sample means if the true population mean is 28 minutes? Select the correct choice below and fill in the...

##### Solve the differential equation...

$y^{(5)} + 9y^{(4)} + 18y''' + 162y'' + 81y' + 729y = 3\sin x$ (the derivative orders are partly garbled in the extraction).
##### Kneuton Coursework 7.1.1 Write Parametric Equations

Question: An object travels at a steady rate along a straight path in the $xy$-coordinate plane from the point $(9, 5)$ to the point $(-8, 3)$. If the coordinates are in feet and it takes 10 hours for the object to travel from one point to the other, which of the following gives parametric equations with respect to time for the position of the object? Select the correct answer below. (The answer options are garbled in the extraction.)

##### The labor efficiency variance is the responsibility of:

The labor efficiency variance is the responsibility of: ○ Purchasing manager ○ Production manager ○ Payroll manager ○ The accountant

##### Sketch an MO diagram for LiH...

Sketch an MO diagram for LiH. On the basis of your diagram, would you expect this molecule to be stable with respect to dissociation into atoms? Use your MO diagram to predict any other properties you can.

##### At a certain temperature, 0.880 mol SO3...

At a certain temperature, 0.880 mol $SO_3$ is placed in a 2.50 L container. $2SO_3(g) \rightleftharpoons 2SO_2(g) + O_2(g)$. At equilibrium, 0.110 mol $O_2$ is present. Calculate $K_c$.

##### Discuss the issues/concerns in assessing DSM-5 personality and personality disorders...

Discuss the issues/concerns in assessing DSM-5 personality and personality disorders. Follow the steps: Claim, one to two sentences responding to the prompt. Support, from an external source (not your textbook), defending your thesis. At least three discrete ideas or examples supporting your claim.

##### Chart Title

(Figure residue: a histogram with a plot area, counts from 0 to 180 on the vertical axis, and bins $[0, 0.26]$, $(0.26, 0.52]$, $(0.52, 0.78]$, $(0.78, 1.04]$.)

##### If a satellite circulates around the earth...

If a satellite circulates around the earth at a height of 9,163.73 km above the earth's surface, given that the earth's radius is 3958.8 miles and its mass is $5.98 \times 10^{24}$ kg, use $G = 6.67 \times 10^{-11}\ \mathrm{N\,m^2/kg^2}$ to find the period of this satellite in hours.

##### A six-month-old puppy weighs X = 7.0 kg...

A six-month-old puppy weighs $X = 7.0$ kg. From this age, the puppy will grow such that its weight, $X$, is described by the following equation: $\frac{dX}{dt} = 10 - X$, where $t$ is time measured in months. Use the Euler method of integration with a time step of $\Delta t = 0.1$ months to find the puppy's weight at 6.5 months. Enter your answer with at least 4 significant figures. (On the actual exam, you will be asked to show all work and intermediate steps for full credit.) Answer:
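A minimal sketch of that Euler iteration (my own illustration; the values come from the problem statement just above):

```python
# Euler steps for dX/dt = 10 - X, starting at X(6.0 months) = 7.0 kg, dt = 0.1 months.
x, t, dt = 7.0, 6.0, 0.1
while t < 6.5 - 1e-9:          # five steps: 6.0 -> 6.5
    x += (10.0 - x) * dt       # Euler update: x_{n+1} = x_n + f(x_n) * dt
    t += dt
print(round(x, 4))             # ~8.2285 kg
```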
##### The curve above is the graph of a sinusoidal function...

(Figure residue: the axis tick labels are garbled.) The curve above is the graph of a sinusoidal function. It goes through the points $(\ldots, 2)$ and $(5, \ldots)$ (coordinates garbled in the extraction). Find a sinusoidal function that matches the given graph. If needed, you can enter $\pi \approx 3.1416$ as 'pi' in your answer; otherwise use at least 3 decimal digits. $f(x) =$

##### Factor each trinomial

Factor each trinomial: $7x^2 + x - 8$; $4m^2 - 17m + 18$; $14m^2 + 17m - 22$.

##### Defective ink cartridges...

(Heavily garbled extraction: a department store shelf holds color-ink cartridges, some of which are defective. Cartridges are selected at random from the shelf; find the probability that both are defective, and a related probability, rounding each answer to four decimal places.)

##### A 3.00 µF capacitor is first charged...

A 3.00 µF capacitor is first charged by being connected across a battery. It is then disconnected from the battery and connected across an uncharged 8.00 µF capacitor.

1) Calculate the final charge on the 3.00 µF capacitor. (Express your answer to three significant figures.)

2) Calculate the final charge on the 8.00 µF capacitor. (Express your answer to three significant figures.)

##### Taylor polynomial of degree 4...

(Heavily garbled extraction: find the Taylor polynomial of degree 4 about $c$ for $f(x) = \sin x + x$, and determine an error bound for the polynomial on a given interval.)
Number of rotations

Write a function or a program to find the number of rotations required by a wheel to travel a given distance, given its radius.

Rules

* Input can be 2 positive rational numbers and can be taken in any convenient format. Both inputs are in the same unit.
* There must not be any digits 0-9 in your code.
* The output will be an integer (in case of a float, round up toward infinity).
* This is code-golf, so the shortest code wins.

Examples (distance, radius, rotations)

```
10       1       2
50       2       4
52.22    4       3
3.4      0.08    7
12.5663  0.9999  3
```

• You probably should add that digits are also forbidden in compiler options (or anywhere else): if you limit this constraint to code only, with gcc we can do something like -DP=3.14 in compiler flags, that would define P as an approximation of pi, which is probably not what you intended – Annyo Nov 21 '18 at 16:50

MathGolf, 5 4 bytes

```
τ/╠ü
```

Try it online!

Explanation

```
τ    Push tau (2*pi)
/    Divide the first argument (total distance) by tau
ü    Ceiling
```

APL+WIN, 9 bytes

Prompts for radius followed by distance:

```
⌈⎕÷○r+r←⎕
```

Try it online! Courtesy of Dyalog Classic

Explanation:

```
○r+r←⎕   prompt for radius, double it and multiply by pi
⌈⎕÷      prompt for distance, divide by the result above and take the ceiling
```

• ⌈⎕÷○+⍨⎕ works for 7 bytes. – J. Sallé Nov 22 '18 at 13:04
• @J.Sallé Thanks, but unfortunately my ancient APL+WIN interpreter does not have the ⍨ operator – Graham Nov 22 '18 at 15:51

Java 8, 32 30 bytes

```
a->b->-~(int)(a/b/Math.PI/'')
```

Contains the unprintable \u0002 between the single quotes. Port of @JoKing's Perl 6 answer. Try it online.

• Is that the digit '1' in your code? I think that might not be allowed. – ouflak Nov 21 '18 at 14:18
• @ouflak Looks like it can be fixed like this. – Erik the Outgolfer Nov 21 '18 at 14:22
• @ouflak Woops, that was a pretty stupid mistake.. Using the unprintable so I don't use the digit 2, and then just use digit 1... Luckily Erik is indeed right that a simple negative unary has the same effect as +1 (often used to get rid of parentheses, since the negative and unary have higher precedence than most other operators). – Kevin Cruijssen Nov 21 '18 at 18:21

Perl 6, 15 12 bytes

-3 bytes thanks to nwellnhof reminding me about tau

```
*/*/τ+|$+!$
```

Try it online!

Anonymous Whatever lambda that uses the formula (a/b/tau).floor+1. Tau is two times pi. The two anonymous variables $ are coerced to the number 0, which is used to floor the number +|0 (bitwise or 0) and add one +!$ (plus not zero).

• There must not be any digits 0-9 in your code. – Titus Nov 21 '18 at 13:35
• @Titus I can't believe I forgot that. Thanks, fixed! – Jo King Nov 21 '18 at 13:37
• Are digits in exponents also allowed? – ouflak Nov 21 '18 at 14:21

Python 2, 47 45 44 43 bytes

```
lambda l,r:l/(r+r)//math.pi+l/l
import math
```

Try it online!

-2 bytes, thanks to flawr; -1 byte, thanks to Jonathan Allan

• Since inputs have been guaranteed to be both (strictly) positive and rational, we never hit the edge-case of requiring an exact number of rotations, so I think we can do l/(r+r)//pi+l/l and save a byte. – Jonathan Allan Nov 21 '18 at 13:50
• @JonathanAllan Thanks :) – TFeld Nov 21 '18 at 14:16

05AB1E, 6 bytes

```
·/žq/î
```

Port of @flawr's Python 2 comment. Takes the input in the order radius, distance.

Explanation:

```
·      # Double the first (implicit) input
/      # Divide the second (implicit) input by it
žq/    # Divide it by PI
î      # Ceil it (and output implicitly)
```

Lua, 61 58 57 49 bytes

```
function(s,r)return math.ceil(s/(r+r)/math.pi)end
```

Try it online!

Thanks to KirillL. -8 bytes.
• I don't know much Lua (so maybe it's still too long), but it appears to be shorter as a function: 49 bytes – Kirill L. Nov 22 '18 at 11:18
• @KirillL., I'm still learning the rules here. The OP's challenge is pretty open on the input. So my question is, would we have to count your program call() against the byte count? If not, yours definitely shaves off a nice chunk. – ouflak Nov 22 '18 at 11:43
• A quite common style of submission here is an anonymous function (so that we don't have to count the name, unless it is recursive), which outputs by its return value. The footer section with function calls and actual printing to console is then basically used for visualizing the results and doesn't count towards your score. BTW, you may add more of the OP's test examples to the footer, so that they can be conveniently viewed all at once. Note that in some cases a full program may actually turn out to be golfier! – Kirill L. Nov 22 '18 at 11:58

Common Lisp, 36 bytes

```
(lambda(a b)(ceiling(/ a(+ b b)pi)))
```

Try it online!

Tcl

```
proc N d\ r {expr ceil($d/(($r+$r)*acos(-$r/$r)))}
```

Try it online!

Earlier version, 53 bytes:

```
proc N d\ r {expr ceil($d/(($r+$r)*acos(-[incr i])))}
```

Try it online!

Lack of a pi constant or function makes me lose the golf competition!

• Do I need to remove the .0 at the end of each output? It would make me consume more bytes! – sergiol Nov 22 '18 at 18:55
• [incr i] is quite clever, but I think you can use $d/$d or $r/$r instead. – david Nov 22 '18 at 20:17
• Saved some bytes thanks to @david's idea! – sergiol Nov 22 '18 at 22:56

PowerShell, 53 52 51 bytes

-1 byte thanks to @mazzy; -1 byte after I realized I don't need a semicolon after the param() block

```
param($d,$r)($a=[math])::ceiling($d/($r+$r)/$a::pi)
```

Try it online!

Takes input from two command-line parameters, distance -d and radius -r.

• ? param($d,$r);($a=[math])::ceiling($d/($r+$r)/$a::pi) – mazzy Nov 24 '18 at 5:54

JavaScript (Babel Node), 23 bytes

```
s=>r=>-~(s/2/r/Math.PI)
```

Try it online!

• There must not be any digits 0-9 in your code. – Dennis Nov 22 '18 at 12:55

Clojure, 50 bytes

```
(fn[a b](int(Math/ceil(/ a Math/PI(count" ")b))))
```

An anonymous function that accepts two integers a and b as arguments: the distance and the wheel's radius, respectively.

Try it online!

(count " ") evaluates to 2, so this function implements $\lceil \frac{a}{2\pi b} \rceil$.

TI-Basic (83 series), 12 bytes

```
-int(-Tmax⁻¹min(e^(ΔList(ln(Ans
```

Takes input as a list of radius and distance in Ans: for example, {0.9999:12.5663:prgmX. e^(ΔList(ln(Ans will take the ratio of those distances, and min( turns this into a number. Then we divide by Tmax, which is a graphing parameter that's equal to 2π by default. Finally, -int(- takes the ceiling.

Pari/GP, 23 bytes

```
(d,r)->ceil(d/(r+r)/Pi)
```

Try it online!
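For readers who just want the formula rather than the golf, a straightforward ungolfed reference in Python (function name mine):

```python
import math

def rotations(distance: float, radius: float) -> int:
    """Number of full wheel rotations needed to cover `distance`.

    One rotation covers the circumference 2*pi*radius; any remainder
    still requires a final partial rotation, hence the ceiling.
    """
    return math.ceil(distance / (2 * math.pi * radius))

# The challenge's test cases:
print(rotations(10, 1))            # 2
print(rotations(50, 2))            # 4
print(rotations(52.22, 4))         # 3
print(rotations(3.4, 0.08))        # 7
print(rotations(12.5663, 0.9999))  # 3
```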
# Question 355

This page is part of Q, the IT exam trainer. See https://www.getshifting.com/q for more info.

**Question:** What are two limitations of Link Aggregation Control Protocol (LACP) on a vSphere Distributed Switch?

**Description:** See https://pubs.vmware.com/vsphere-60/topic/com.vmware.vsphere.networking.doc/GUID-3FDE1E96-9217-4FE6-8B76-6E3A64766828.html for more information.

**Correct Answer:**
- Software iSCSI multipathing is not compatible
- It does not support configuration through Host Profiles
B Rajeev: Articles written in Proceedings – Mathematical Sciences

• Probabilistic representations of solutions to the heat equation

In this paper we provide a new (probabilistic) proof of a classical result in partial differential equations, viz. if ϕ is a tempered distribution, then the solution of the heat equation for the Laplacian, with initial condition ϕ, is given by the convolution of ϕ with the heat kernel (Gaussian density). Our results also extend the probabilistic representation of solutions of the heat equation to initial conditions that are arbitrary tempered distributions.

• Measure Free Martingales and Martingale Measures

Let $T\subset\mathbb{R}$ be a countable set, not necessarily discrete. Let $f_t, t\in T$, be a family of real-valued functions defined on a set 𝛺. We discuss conditions which imply that there is a probability measure on 𝛺 under which the family $f_t, t\in T$, is a martingale.

• Differential operators on Hermite Sobolev spaces

In this paper, we compute the Hilbert space adjoint $\partial^{*}$ of the derivative operator $\partial$ on the Hermite Sobolev spaces $\mathcal{S}_{q}$. We use this calculation to give a different proof of the 'monotonicity inequality' for a class of differential operators $(L, A)$ for which the inequality was proved in Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2(4) (2009) 515–591. We also prove the monotonicity inequality for $(L, A)$ when these correspond to the Ornstein–Uhlenbeck diffusion.
# Short exact sequence of exact chain complexes

If $0 \rightarrow A_{\bullet} \rightarrow B_{\bullet} \rightarrow C_{\bullet} \rightarrow 0$ is a short exact sequence of chain complexes (of R-modules), then, whenever two of the three complexes $A_{\bullet}$, $B_{\bullet}$, $C_{\bullet}$ are exact, so is the third.

This is exercise 1.3.1 in Weibel's Introduction to Homological Algebra. I was trying to tackle the exercise via a diagram chase in the corresponding diagram. However, in all three situations I eventually get stuck. I am starting to think that a diagram chase might not be the right approach here? Thank you.

- As user8268 points out, the long exact sequence immediately implies the answer, but a diagram chase will also work. – Grumpy Parsnip Apr 13 '11 at 20:12
- My advice: try harder -- this is supposed to work with a diagram chase. ;) – Rasmus Apr 13 '11 at 20:34
- The snake lemma is helpful here (which I believe, if I remember correctly, is cited before this exercise). – Fredrik Meyer Apr 13 '11 at 22:12
- No, this result comes before the Snake Lemma in Weibel's book. – Pete L. Clark Feb 21 '13 at 1:50

Here is an example of the diagram chase when we know $B_\bullet$ and $C_\bullet$ are exact. Let's call the maps $\alpha_n:\ A_n\rightarrow B_n$ and $\beta_n:\ B_n\rightarrow C_n$.

Suppose $a\in A_n$ maps to $0$ in $A_{n-1}$. Then $b=\alpha_n(a)$ maps to $0$ in $B_{n-1}$, so there exists $\hat{b}\in B_{n+1}$ with $\hat{b}$ mapping to $b$. Since $b$ is the image of $a$, $\beta_n(b)=0$, and so $\hat{c}=\beta_{n+1}(\hat{b})$ maps to $0$ in $C_n$; thus there exists $c^*\in C_{n+2}$ with $c^*$ mapping to $\hat{c}$. Since the $\beta$ maps are surjective, there is a $b^*\in B_{n+2}$ mapping to $c^*$. Let's write $\bar{b}$ for the image of $b^*$ in $B_{n+1}$. Then $\beta_{n+1}(\hat{b})=\beta_{n+1}(\bar{b}) = \hat{c}$, and so $\beta_{n+1}(\hat{b}-\bar{b})=0$. Thus there is an $\hat{a}\in A_{n+1}$ with $\alpha_{n+1}(\hat{a})=\hat{b}-\bar{b}$. But then $a_0$, the image of $\hat{a}$ in $A_n$, maps to $b$; since the $\alpha$ maps are injective, this implies $a_0=a$. Phew!

unless you're not supposed to use the corresponding long exact sequence: well, then use it :) It will tell you that if the cohomology of two of $A$, $B$, $C$ vanishes, then it vanishes also for the third complex.

- yes of course, you are not supposed to. This exercise leads up to the theorem. I will add this in the description. (Edit:) Actually, this might be what is meant by the exercise, since I don't think the existence of the long exact sequence relies on the nine lemma? (the exercise is used in the next one to prove the nine lemma). – Felix Hoffmann Apr 13 '11 at 20:12
- @Felix: I did feel I was cheating with such a trivial answer. But I guessed that proving what you want is not really easier than proving the existence of the long exact sequence. – user8268 Apr 13 '11 at 20:23

For completeness, I think I can cover the case where $A_{\bullet}$ and $B_{\bullet}$ are exact myself now. I will try to stick to the notation used in the other posts.

If $c \in C_n$ with $d(c)=0$, there is, by the surjectivity of $\beta_n$, a $b$ in $B_n$ such that $$\beta_n(b)=c.$$ Then $$\beta_{n-1}(d (b)) = d(\beta_n (b)) = d(c) = 0$$ and thus, by exactness of the (n-1)-th row, there is an $a$ in $A_{n-1}$ with $\alpha_{n-1}(a) = d(b)$.
For $a$ we have $$\alpha_{n-2}(d(a)) = d(\alpha_{n-1}(a)) = d(d(b)) = 0$$ and, since $\alpha_{n-2}$ is injective, $d(a)=0$ follows. Then, by exactness of $A_{\bullet}$, we have an $a'$ in $A_n$ such that $d(a')=a$. Consider $b-\alpha_n(a')$ in $B_n$. We have $$d(b - \alpha_n(a'))=d(b) - d(\alpha_n(a')) = d(b) - \alpha_{n-1}(d(a')) = d(b) - d(b) = 0,$$ thus by exactness of $B_{\bullet}$, there is a $b'$ in $B_{n+1}$ such that $d(b')=b-\alpha_n(a')$. Finally, $\beta_{n+1}(b')$ is the desired pre-image of $c$ in $C_{n+1}$, since $$d(\beta_{n+1}(b'))=\beta_n(d(b'))=\beta_n(b) - \beta_n(\alpha_n(a')) = c.$$ Thanks, everyone!

- This looks good, thanks for taking the time to complete the set of answers. – t.b. Apr 14 '11 at 19:58

With diagram chasing. Assume, for instance, that $A$ and $C$ are exact and let's show that so is $B$. Call the morphisms between the complexes $\alpha : A \longrightarrow B$ and $\beta : B \longrightarrow C$.

Let $b \in B_n$ be a cycle; that is, $db = 0$. Consider $\beta_n b \in C_n$. Because $\beta$ is a morphism of complexes, $$d\beta_n b = \beta_{n-1} db = \beta_{n-1} 0 = 0.$$ So $\beta_n b$ is a cycle, but $C$ is exact, hence there exists $c \in C_{n+1}$ such that $dc = \beta_n b$. Since $\beta_{n+1} : B_{n+1} \longrightarrow C_{n+1}$ is an epimorphism, there exists $b' \in B_{n+1}$ such that $\beta_{n+1}b' = c$. Now, consider the element $b-db' \in B_n$. Since $$\beta_n (b-db') = \beta_n b - \beta_ndb' = \beta_n b - d\beta_{n+1}b' = \beta_n b -dc = 0 \ ,$$ we have $b-db' \in \ker \beta_n = \mathrm{im}\ \alpha_n$. So, there is an element $a \in A_n$ such that $\alpha_n a = b - db'$. This $a$ is a cycle: $$\alpha_{n-1}da = d\alpha_n a = db - d^2b' = 0 \ ,$$ and, since $\alpha_{n-1}$ is a monomorphism, this implies $da = 0$. But $A$ is exact. Hence there is some $a' \in A_{n+1}$ such that $da' = a$. Consider the element $b'+\alpha_{n+1}a' \in B_{n+1}$: $$d(b'+\alpha_{n+1}a') = db' + d\alpha_{n+1}a' = db' + \alpha_nda' = db' +\alpha_n a = db'+(b - db') = b \ .$$ So $b$ is also a boundary and we are done.
How do I write code to compute the Kostant partition function on the root lattice of the rank-1 affine Lie algebra $A_{1}^{(1)}$? Here $K(\beta)$ is defined as the coefficient of $e^{\beta}$ in $\prod_{\alpha \in \Delta_{+}}(1-e^{\alpha})^{-1}$, where $\Delta_{+}$ is the set of positive roots of $A_{1}^{(1)}$ and $\beta$ is an element of the root lattice.
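No answers are recorded here, but as a starting point, the following is a minimal Python sketch of one way to compute $K(\beta)$ for $A_1^{(1)}$ by truncating the product over positive roots. It writes $\beta = a\alpha_0 + b\alpha_1$ in the simple-root basis (so $\delta = \alpha_0 + \alpha_1$), relies on the standard list of positive roots of $A_1^{(1)}$ with all multiplicities equal to 1, and the function name kostant_table is mine:

def kostant_table(A, B):
    """K(a*alpha0 + b*alpha1) for all 0 <= a <= A, 0 <= b <= B.

    Positive roots of A_1^(1) in the (alpha0, alpha1) basis:
      k*delta + alpha1 = (k,   k+1), k >= 0
      k*delta + alpha0 = (k+1, k  ), k >= 0
      k*delta          = (k,   k  ), k >= 1  (imaginary, multiplicity 1)
    """
    roots = []
    for k in range(max(A, B) + 1):
        roots.append((k, k + 1))
        roots.append((k + 1, k))
        if k >= 1:
            roots.append((k, k))
    # Unbounded-knapsack DP: multiply in 1/(1 - e^root) one root at a time.
    K = [[0] * (B + 1) for _ in range(A + 1)]
    K[0][0] = 1
    for (x, y) in roots:
        if x > A or y > B:
            continue
        for a in range(x, A + 1):
            for b in range(y, B + 1):
                K[a][b] += K[a - x][b - y]
    return K

K = kostant_table(4, 4)
print(K[1][1])  # K(delta) = 2

For example, $K(\delta) = 2$ because $\delta = \alpha_0 + \alpha_1$ can be written either as the sum of the two real roots $\alpha_0$ and $\alpha_1$, or as the imaginary root $\delta$ itself.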
# $\oplus P$-completeness of $\oplus 2SAT$

Is $\oplus 2SAT$ (the parity of the number of solutions of $2$-CNF formulae) $\oplus P$-complete? This is listed as an open problem in Valiant's 2005 paper https://link.springer.com/content/pdf/10.1007%2F11533719.pdf. Has this been resolved? Is there any consequence if $\oplus 2SAT \in P$?

It is shown to be $\oplus P$-complete by Faben (https://arxiv.org/abs/0809.1836); see Theorem 3.5. Note that counting independent sets is the same as counting solutions to monotone 2CNF.

• @Turbo: the solutions of a monotone 2SAT instance of the form $F = \bigwedge_{(i,j) \in E} \neg x_i \vee \neg x_j$ are exactly the independent sets of the graph $(V,E)$ where $V = \{x_1, \ldots, x_n\}$. – holf Nov 22 '17 at 20:07
• Oh OK, so not maximum independent sets; what we have is just independent sets. – Mr. Nov 22 '17 at 20:51

The $\oplus P$-completeness of $\oplus$2SAT was resolved much earlier than Faben's preprint in 2008: it was resolved by Valiant himself in 2006. See Leslie G. Valiant: Accidental Algorithms. FOCS 2006: 509-517, https://ieeexplore.ieee.org/document/4031386. A link with no paywall: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.94.3342

Showing therefore that $\oplus$2SAT $\in P$ would imply that $P = \oplus P$, which further implies (by the usual proof of Toda's theorem) that the entire polynomial hierarchy is in $BPP$. This seems extremely unlikely!

• Faben's paper is from 2008, not 2018. Jan 28 at 7:50
• Thanks, maybe I need new glasses :) Jan 28 at 15:40
• I think both your glasses are working fine: in arxiv.org/abs/0809.1836, the date at center top is 2018, but the arxiv timestamp on the vertical left is 2008. Jan 28 at 16:37
• Yeah, the date in the pdf file shows 2018. Arxiv just uses regular LaTeX, so unless the author overrides \date in the file, it will show the date it was compiled. This is often the date of submission, when the file is first processed, and the resulting pdf is cached. But an older paper may drop out of the cache, in which case it is compiled anew the next time someone requests the pdf, resulting in a completely different date. All in all, this is just another of the many reasons why it is a stupid idea to directly link to a pdf of a paper instead of to an informative meta page. Jan 28 at 16:56
• Yes, exactly. It happened to me once, and submitting a new version of a paper only to correct the date is not a good idea either, as the center-top date would correctly go back in time, but the timestamp on the vertical left would switch from the past to the present... Jan 28 at 19:59

There are $2$ possible further reductions, in addition to Faben's, that show $\oplus P$-completeness of $\oplus 2$-SAT.

The first reduction is from $\oplus$CNF-SAT to (not necessarily monotone) $\oplus 2$-SAT, as follows: replace each clause $c = \{\ell_1, \cdots, \ell_k\}$ in the original instance with a set of $k$ clauses $c_1 = \{\lnot u, \lnot\ell_1\}, \cdots, c_k = \{\lnot u, \lnot\ell_k\}$, where $u$ is a fresh new variable. I like to call this operation clause rotation. Each time you rotate a clause, the parity of the number of satisfying assignments stays unaltered. (To see why: if the original clause is satisfied, $u$ is forced to false, contributing exactly one extension; if it is violated, $u$ is free, contributing two extensions, which is even and thus invisible mod 2.) Actually, more is true: the difference between the number of satisfying assignments having an odd number of variables set to true and the number having an even number of variables set to true stays unaltered.
After having rotated every original clause (each time using a different fresh new variable, of course), you end up with a $\oplus 2$-SAT instance which is not monotone (unless the original instance was), and which has only $n + m$ variables (while in Faben's reduction the resulting monotone $\oplus 2$-SAT instance had $3n + m$ variables).

The second reduction is from $\oplus$CNF-SAT to monotone $\oplus 2$-SAT, like Faben's but again with fewer variables. You create a graph with $n + m$ nodes: $n$ nodes for the variables and $m$ nodes for the clauses. There is an edge between a variable node and a clause node if and only if the variable is mentioned in the clause. There is an edge between $2$ clause nodes if and only if there is a variable which is mentioned positively in one and negatively in the other. The parity of the number of independent sets of this graph is the same as the parity of the number of satisfying assignments of the original formula. But here the savings in variables had a price: the aforementioned odd-even difference is not preserved by this reduction (whereas Faben's reduction preserves it).
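A quick way to convince yourself of the parity-preservation claim behind clause rotation is brute force. Here is a minimal Python sketch (the function names count_sat and rotate_clause are mine, not from the cited papers):

from itertools import product

def count_sat(n_vars, clauses):
    """Count satisfying assignments. A clause is a list of literals:
    +i means x_i and -i means NOT x_i (variables are 1-indexed)."""
    total = 0
    for bits in product((False, True), repeat=n_vars):
        def lit(l):
            v = bits[abs(l) - 1]
            return v if l > 0 else not v
        if all(any(lit(l) for l in c) for c in clauses):
            total += 1
    return total

def rotate_clause(n_vars, clauses, i):
    """Replace clause i = {l1,...,lk} by {-u,-l1},...,{-u,-lk}, u fresh."""
    u = n_vars + 1
    c = clauses[i]
    rest = clauses[:i] + clauses[i + 1:]
    return n_vars + 1, rest + [[-u, -l] for l in c]

# Rotate the 3-literal clause of a small CNF and compare parities.
f = [[1, 2, 3], [-1, 2], [-2, -3]]
n2, g = rotate_clause(3, f, 0)
print(count_sat(3, f) % 2, count_sat(n2, g) % 2)  # both parities agree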
# Interior points and the rational numbers

An irrational number is a real number that cannot be written as a simple fraction; a rational number is one which can be represented in the form $p/q$ where $p$ and $q$ are integers and $q \neq 0$. The set of irrational numbers is usually written $\mathbb{R} \setminus \mathbb{Q}$, where the backslash denotes "set minus". For example, $2/3$ is rational, and so is any terminating decimal: $1.5 = 3/2$, $0.325 = 325/1000$, $5.0 = 5/1$. We can change any integer to a decimal by adding a decimal point and a zero ($-2, -1, 0, 1, 2, 3$ become $-2.0, -1.0, 0.0, 1.0, 2.0, 3.0$), and rational numbers can be negative; be careful when placing negative numbers on a number line. A repeating decimal such as $1.2222\ldots$ is rational as well. By contrast, the decimal expansion of an irrational number is neither terminating nor repeating: $\sqrt{2}$ is irrational (while square roots of perfect squares like $9$, $16$ and $25$ are rational), and $\pi$ is probably the most famous irrational number of all. Note that the sum of two irrational numbers is not always irrational. Historically, the Pythagoreans wanted numbers to be something you could count on, and for all things to be counted as rational numbers; an irrational number was a sign of meaninglessness in what had seemed like an orderly world. Rigorous constructions of the irrationals came much later: Méray took in 1869 the same point of departure as Heine, but the theory is generally referred to the year 1872; Weierstrass's method was set forth by Salvatore Pincherle in 1880, and Dedekind's received additional prominence through the author's later work (1888) and the endorsement of Paul Tannery (1894).

Now the topological question: what is the interior of the set $\mathbb{Q}$ of rational numbers? Between any two real numbers there exists a rational number, and any interval also contains irrational numbers, so both sets are dense in $\mathbb{R}$. Consequently every neighbourhood of a rational number contains irrational numbers (i.e. numbers not in the set), so no rational number is an interior point: the interior of $\mathbb{Q}$ is empty, and $\mathbb{Q}$ is not an open set. By the same argument the set of irrational numbers $\mathbb{R} \setminus \mathbb{Q}$ is not open either, since every neighbourhood of an irrational point contains rational points. Since any point of a set is either an interior point or a boundary point, every rational number is a boundary point of the set of rational numbers. The Cantor set behaves quite differently: it is closed with empty interior, every point of it is an accumulation point (it is a perfect set), it is nowhere dense in the interval, and it is a Baire space.

Some standard definitions and facts used above. Let $A$ be a set of real numbers.

- The derived set $d(A)$ is the set of all accumulation points of $A$; this set is sometimes denoted by $A'$.
- The closure of $A$ is the set $c(A) := A \cup d(A)$, sometimes denoted by $\bar{A}$.
- The complement of $A$ is the set $C(A) := \mathbb{R} \setminus A$.
- If $S \subset T$, then $\bar{S} \subset \bar{T}$.
- Proposition. A set $F \subset \mathbb{R}$ is closed if and only if the limit of every convergent sequence in $F$ belongs to $F$.

Worked examples:

- $\mathbb{N}$ is closed but not open: at each $n \in \mathbb{N}$, every neighbourhood $N(n; \varepsilon)$ intersects both $\mathbb{N}$ and $\mathbb{N}^C$, so $\mathbb{N} \subseteq \operatorname{bd} \mathbb{N}$; there are no other boundary points, so in fact $\mathbb{N} = \operatorname{bd} \mathbb{N}$.
- The set $E = \{r \in \mathbb{Q} \mid 0 \le r \le 1\}$ of rational numbers between $0$ and $1$ is dense in the interval $[0,1]$ and has empty interior.
- Let $E = (0,1) \cup (1,2) \subset \mathbb{R}$. Since $E$ is open, the interior of $E$ is just $E$. However, the point $1$ clearly belongs to the closure of $E$ (why?), and so $\bar{E} = [0,2]$, which is strictly larger than $E$.
- In $X = \mathbb{Q}$, the point $2$ is not an interior point of $B = \{x \in \mathbb{Q} \mid 2 \le x \le 3\}$, and $3$ is not an interior point of $C = \{x \in \mathbb{Q} \mid 2 < x \le 3\}$: you can't make an open ball around $2$ (or $3$) that is contained in the set.

The thread also asked for the interior, boundary, closure and accumulation points of the following sets:

a. $\{1/n + 1/m : m, n \in \mathbb{N}\}$
b. $\{x \text{ irrational} : x \le \sqrt{2}\} \cup \mathbb{N}$
c. the straight line $L$ through two points $a$ and $b$ in $\mathbb{R}^n$

For part c the asker got $\operatorname{int} A = \emptyset$ and $\operatorname{bd} A = \operatorname{cl} A = \operatorname{acc} A = L$, and asked: is this correct?
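To pin down the central claim, here is a short self-contained LaTeX write-up of the standard density argument (my own wording; it is not part of the original page):

\documentclass{article}
\usepackage{amsmath, amssymb, amsthm}
\begin{document}
\begin{proof}[Claim: $\operatorname{int}\mathbb{Q} = \emptyset$]
Let $q \in \mathbb{Q}$ and $\varepsilon > 0$. Choose $n \in \mathbb{N}$ with
$\sqrt{2}/n < \varepsilon$ and set $x = q + \sqrt{2}/n$. Then
$x \in (q - \varepsilon,\, q + \varepsilon)$, and $x$ is irrational: if $x$
were rational, then $\sqrt{2} = n(x - q)$ would be rational, a contradiction.
Hence no interval around $q$ lies inside $\mathbb{Q}$, so $q$ is not an
interior point, and $\operatorname{int}\mathbb{Q} = \emptyset$.
\end{proof}
\end{document}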
Place your bets on the value

11-07-2018, 11:46 AM Post: #1 hp41cx Member Posts: 298 Joined: Dec 2013

Place your bets on the value

Systems Analyst 48G+/58C/85B/PC1500A TH-78A/Samsung A51 Focal & All Basic's

11-07-2018, 02:55 PM Post: #2 burkhard Senior Member Posts: 369 Joined: Nov 2017

RE: Place your bets on the value

It's amusing that the seller is apparently ignorant as to the rarity of that particular variant to hardcore collectors. If he added "Red Dot" into the title, he might wake up some more bidders not fully paying attention. I'll stand back and watch the fun.

11-07-2018, 03:13 PM Post: #3 grsbanks Senior Member Posts: 1,219 Joined: Jan 2017

RE: Place your bets on the value

I've just bought a 41CV plus a shedload of accessories (card reader, cards, a couple of modules, books, battery pack, adapter etc.) from a guy in Sweden, so I'll have to sit this one out. I'd have been interested otherwise!

11-07-2018, 06:52 PM Post: #4 hewlpac Junior Member Posts: 36 Joined: Jan 2014

RE: Place your bets on the value

When did they add the 2-line display?

11-07-2018, 08:50 PM Post: #5 burkhard Senior Member Posts: 369 Joined: Nov 2017

RE: Place your bets on the value

(11-07-2018 06:52 PM)hewlpac Wrote: When did they add the 2-line display?

Two-line display? The LED calculators never had a two-line display. I think the LCD calculators first had a multiline display with the clamshell models: the business 18C in 1986 and the scientific 28C a year later. By "multiline" I mean they had 3 or 4 lines (depending on whether soft keys were showing) of small, very low contrast digits that had a fussy viewing angle.

11-07-2018, 08:55 PM Post: #6 Massimo Gnerucci Senior Member Posts: 2,460 Joined: Dec 2013

RE: Place your bets on the value

(11-07-2018 02:55 PM)burkhard Wrote: It's amusing that the seller is apparently ignorant as to the rarity of that particular variant to hardcore collectors. If he added "Red Dot" into the title, he might wake up some more bidders not fully paying attention. I'll stand back and watch the fun.

Often wasting time scanning seemingly unrelated items pays: I found my red dot advertised as a TI-35, and the pictures were not well focused. This allowed me to have it (almost) for peanuts.

Greetings, Massimo

-+×÷ ↔ left is right and right is wrong

11-07-2018, 08:57 PM Post: #7 Raymond Del Tondo Member Posts: 288 Joined: Dec 2013

RE: Place your bets on the value

(11-07-2018 11:46 AM)hp41cx Wrote: HP-35 Red dot

Awful auction pics. Much too dark and no frontal view. The red dot hole is hardly visible. On the 19C the seller seems to have used a photo box or similar, so one can at least see what's on sale.

-- Ray

11-07-2018, 09:50 PM Post: #8 DA74254 Member Posts: 164 Joined: Sep 2017

RE: Place your bets on the value

Please excuse my ignorance, but what's a "red dot" and why are they more sought after than a "non-red-dot" 35? (I'm not a serious collector, though I would like to own a 35, but for me those are priced way over my allowed spending for "old useless stuff" according to my wife)

Esben 28s, 35s, 49G+, 50G, Prime G1 HW A, Prime G2 HW D, SwissMicros DM42 Elektronika MK-52 & MK-61

11-07-2018, 10:00 PM Post: #9 aurelio Senior Member Posts: 601 Joined: Dec 2013

RE: Place your bets on the value

(11-07-2018 08:55 PM)Massimo Gnerucci Wrote: I found my red dot advertised as a TI-35, and the pictures were not well focused. This allowed me to have it (almost) for peanuts.

lucky man

11-07-2018, 11:00 PM (This post was last modified: 11-07-2018 11:08 PM by edryer.)
Post: #10 edryer Member Posts: 132 Joined: Dec 2013

RE: Place your bets on the value

I was going to guess $300... but it seems that was in the way, way, way too low ballpark. Seems these things are only going to increase in value... they must surely be considered investment material.

HP-28S (1988 US model), DM41X (2020)

11-07-2018, 11:23 PM (This post was last modified: 11-07-2018 11:27 PM by Zaphod.) Post: #11 Zaphod Member Posts: 270 Joined: Apr 2018

RE: Place your bets on the value

(11-07-2018 09:50 PM)DA74254 Wrote: Please excuse my ignorance, but what's a "red dot" and why are they more sought after than a "non-red-dot" 35?

Ahhh, is it a power-on indicator hole to the right of the power switch? I didn't know there were two versions.

11-08-2018, 01:55 AM (This post was last modified: 11-08-2018 02:00 AM by edryer.) Post: #12 edryer Member Posts: 132 Joined: Dec 2013

RE: Place your bets on the value

"a fast, extremely accurate electronic slide rule"

There are many stories (I've read quite a few, most recently in an excellent book on calculator algorithms, "Inside Your Calculator") where an HP representative (who used to attend college maths meetings at the time, around late 1972) would whip the 35 out of his pocket, do a simple calculation, something like y^x, and leave the assembled members in total shock... at a time when slide rules could at best go to three significant figures (the good ones, that is) and would require multiple steps taking a few minutes.

Truly a piece of history.

HP-28S (1988 US model), DM41X (2020)

11-08-2018, 02:18 AM Post: #13 Paul Berger (Canada) Senior Member Posts: 533 Joined: Dec 2013

RE: Place your bets on the value

(11-07-2018 11:23 PM)Zaphod Wrote: (11-07-2018 09:50 PM)DA74254 Wrote: Please excuse my ignorance, but what's a "red dot" and why are they more sought after than a "non-red-dot" 35?

Ahhh, is it a power-on indicator hole to the right of the power switch? I didn't know there were two versions.

There are actually 4 production versions of the 35:
1. Red dot, no model number on front, bump on the 5 key.
2. No red dot, still no model number on front; early ones of this version still have the bump on the 5 key.
3. Now says HEWLETT PACKARD 35 on the front.
4. Legends on the top 4 rows moulded into the keys rather than printed above them.

11-08-2018, 06:03 AM Post: #14 grsbanks Senior Member Posts: 1,219 Joined: Jan 2017

RE: Place your bets on the value

(11-08-2018 02:18 AM)Paul Berger (Canada) Wrote: There are actually 4 production versions of the 35:
1. Red dot, no model number on front, bump on the 5 key.
2. No red dot, still no model number on front; early ones of this version still have the bump on the 5 key.
3. Now says HEWLETT PACKARD 35 on the front.
4. Legends on the top 4 rows moulded into the keys rather than printed above them.

Thanks for the definitive list -- I did wonder what the variants were. Mine is the 2nd variant with the bump on the 5 key.

11-08-2018, 07:18 AM Post: #15 Massimo Gnerucci Senior Member Posts: 2,460 Joined: Dec 2013

RE: Place your bets on the value

(11-08-2018 02:18 AM)Paul Berger (Canada) Wrote: (11-07-2018 11:23 PM)Zaphod Wrote: Ahhh, is it a power-on indicator hole to the right of the power switch? I didn't know there were two versions.

There are actually 4 production versions of the 35:
1. Red dot, no model number on front, bump on the 5 key.
2. No red dot, still no model number on front; early ones of this version still have the bump on the 5 key.
3. Now says HEWLETT PACKARD 35 on the front.
4. Legends on the top 4 rows moulded into the keys rather than printed above them.
The four and two prototypes.

Greetings, Massimo

-+×÷ ↔ left is right and right is wrong

11-08-2018, 08:34 AM Post: #16 BartDB Member Posts: 162 Joined: Feb 2015

RE: Place your bets on the value

I have a version 3. I'm not really bothered about getting a red dot. I'm just happy to have one model of what's considered the first pocket electronic slide rule. With patience and time one can get a good deal. I could never afford a 71B at the normal price, but now have 2 that I got at much less than the normal price. Similarly an HP41C with printer in very good condition. Patience and time, and hope and pray someone doesn't post it on a forum somewhere...

11-08-2018, 10:14 AM (This post was last modified: 11-08-2018 10:15 AM by Maximilian Hohmann.) Post: #17 Maximilian Hohmann Senior Member Posts: 986 Joined: Dec 2013

RE: Place your bets on the value

Hello!

(11-08-2018 08:34 AM)BartDB Wrote: I have a version 3. I'm not really bothered about getting a red dot.

The same with me. Once I have an otherwise complete collection (which for me means at least one specimen of each model number) I will start thinking about getting variants of each model... It would be nice to accidentally find an HP-35 red dot for small money, but I will certainly never pay those 885$ for one (+ international shipping + import duties, which would have resulted in well over 1000 Euros for me). I only paid that much money for a calculator once, but that was for an HP-01 in near mint condition.

Regards Max

NB: And still I want to congratulate the fortunate high bidder for this auction :-)

11-08-2018, 02:43 PM (This post was last modified: 11-08-2018 02:44 PM by burkhard.) Post: #18 burkhard Senior Member Posts: 369 Joined: Nov 2017

RE: Place your bets on the value

(11-08-2018 10:14 AM)Maximilian Hohmann Wrote: NB: And still I want to congratulate the fortunate high bidder for this auction :-)

Wow, that was quite a good estate sale find for the seller! Lucky day for him! Congratulations all around. People do tend to complain about that auction site, but establishing a market, a way to link sellers and eager buyers, is what prevents items like this from winding up in the dumpster. Without such an efficient market, collectors would tend to occasionally find super-cheap deals at thrift shops and flea markets, but far more neat old stuff would just simply get trashed.

11-08-2018, 05:32 PM Post: #19 HP-Collection Senior Member Posts: 424 Joined: Dec 2013

RE: Place your bets on the value

Congratulations to the winner (I didn't even bid on the item). The calculator seems to have been modified, as the gold plate, which is normally in the socket for the power cord, is now in the battery socket. The calc must have been opened for this. Also the corners of the label don't seem to be intact anymore.

11-08-2018, 05:40 PM Post: #20 Maximilian Hohmann Senior Member Posts: 986 Joined: Dec 2013

RE: Place your bets on the value

Hello!

(11-08-2018 05:32 PM)HP-Collection Wrote: The calculator seems to have been modified, as the gold plate, which is normally in the socket for the power cord, is now in the battery socket. The calc must have been opened for this.

I noticed that too and wondered why anybody would do that. Maybe the spring fell out and the seller (or a previous owner) didn't know where it belongs and fixed it at that place? Maybe someone from here bought it and can enlighten us once he receives the calculator.

Regards Max
# Human power

See also the Human power category for subtopics, how-tos, project pages, designs, organization pages and more.

## Need

The conversation surrounding energy consumption in the United States often homes in on dependency on foreign oil, the extent to which fossil fuel consumption contributes to global climate change, etc. Certainly, these are very pressing problems that will need to be addressed soon, but there is a second energy crisis that is often overlooked. While developed nations consume fossil fuels at an alarming rate, nearly 2 billion of the world's population is without electricity and is still heavily reliant on traditional fuels such as dung, wood, and other forms of biomass. [1] Thus far, efforts to reverse this trend have focused on a two-pronged approach: one, increase access to "modern" forms of energy in an inexpensive, responsible way; and two, make the current use of biofuels safer and more sustainable. While this is a worthwhile approach, it is not comprehensive. For centuries, human and animal power have been utilized, and their importance in the portfolio of a comprehensive strategy for addressing the energy crisis in developing countries should not be ignored. An estimated 1200 petajoules were produced by humans without electricity for the purpose of work in the year 2008. This is over 1.5 times the wind energy produced in the same year![2] Much of this effort was expended in menial, repetitive tasks that can be made more efficient with human-powered machines.

## History

The use of tools as an extension of human power is much older than recorded history. The earliest known encyclopedia of games in Europe, Libro de Juegos (Book of Games), commissioned by King Alfonso X of Castile in 1283, depicts a bow lathe being used to turn backgammon pieces. This type of device is still in use today by artisans in Morocco.

Fig. 1 Early depiction of a bow lathe[3]

From the Middle Ages onward, labor-saving devices were improved upon for use in the home and on an industrial scale. Perhaps the most important of such machines was the cotton gin. Eli Whitney's simple mechanical solution to the painstaking chore of processing cotton increased productivity by fifty times.[4] Rather than decrease the need for slave labor in the American South, the cotton gin reaffirmed the demand for slaves. Thus, the human-powered cotton gin inadvertently changed the course of history.

The late 19th century saw the development of the bicycle. Over the following decades, it would evolve from little more than a plank with two wheels to the familiar diamond frame we see today. Countless improvements, including multiple speeds, have made the bicycle the most efficient means of human-powered transportation. In the developing world in particular, the cost of fossil fuels has created the need for unique bicycle derivatives that match efficiency with workload.[5] Because of its prominence in the history of human power, bicycle technology has been the launching point for many endeavors to capitalize on excess human energy. In the developed world, gyms are beginning to harness the power expended on stationary bikes for their electrical needs.[6] In the last twenty years, attention has been increasingly paid to harvesting human energy in more unconventional ways.
For instance, piezoelectric crystals, which produce electricity under tension or compression, are becoming cheaper and less fragile. Although their electrical production is very small, it is hoped that an array of such devices could be integrated into clothing to harvest the vibrational energy of a person, for the purpose of powering mobile electronics. [7][8] Another application of such technologies would be the integration of miniature generators located at the joints of the knees, for harvesting excess energy. [9] While such innovations are certainly promising, they are prohibitively expensive for even the developed world at this point.

## Design Considerations

### Culture

Fig 2. A cartoon from Punch magazine, 1895, which demonstrates changing cultural norms regarding women and bicycles[10]

The cartoon to the right appeared in an 1895 issue of the periodical Punch. At this time in European and U.S. history, the emerging culture of cycling brought with it cultural changes, such as the gradual acceptance of women wearing pants. In the cartoon, parallels are drawn between the domestic use of treadle sewing machines and bicycles, to illustrate the sense of empowerment that cycling culture gave women in turn-of-the-century Great Britain.

In the modern era, there may be similar cultural obstacles to the social acceptability of a pedal-powered machine. For instance, women may be discouraged from straddling the seat of a bicycle. In such cases, it would be necessary to redesign the machine for a recumbent or reclined orientation. If the labor requirement of the job is not too great, a hand crank may be substituted for pedal power, like that of a pull-start motor. [11] As a case of unanticipated cultural issues with a technology, the Universal Nut Sheller found great success in Malian communities for peanut shelling. This success was not duplicated in Ghanaian communities, however, who shell shea nuts as a community activity.[12]

### Logistics

Another consideration before the implementation of a human-powered machine is the ability of the community to secure the parts and technical expertise to both construct the machine and keep it in operation. One general guideline for the potential of human-powered machinery is the use of bicycles. If these are in wide use, if parts are reasonably easy to obtain, and if repair of bicycles occurs locally, then other applications of pedal power may be feasible. In addition to local support of bicycle infrastructure, it would be beneficial to have local access to either carpentry or metalworking skills, for the construction of the frame of the machine.

### Application

The figure below shows the relationship between the Human Development Index and Base Power Consumption. The solid black line designates those countries which will benefit from electrical power generation using a human-powered device. The countries between the dotted and solid lines have an unreliable electrical grid or limited electrical access, but may also receive a benefit from human-powered electricity. This principle may be broadened to include other modes of utilizing human power.[13]

Fig 3. Power requirements for common electrical applications [13]

Candidates for human power are jobs that have an element of oscillatory or repetitive motion.
As such, many of the principles that inform the construction of human-powered devices have been applied to many areas of need, including a washing machine[14], a veritable human-powered factory for brick production,[15] and adaptations to existing infrastructure, such as in Figure 4. While electrical power generation has been advocated by many in the developed world[16], the grassroots development of such technologies has been limited, for many of the reasons listed in the "Logistics" section. For instance, the rare earth magnets and LEDs that are often bundled with electrical power generators in developing countries are not locally available, are expensive, and have a low chance of repair. [17]

Fig 4. Proposed coupling of a hand pump for pedal power [18]

## Theory

The natural action of pedaling a bicycle, essentially transforming linear motion into circular motion, creates an oscillatory power function, as seen in Figure 5.

Fig 5. Relative power outputs throughout the crank cycle [19]

In the case of a mobile bicycle, this effect is masked by three factors: inertia provided by the weight of the rider; frictional losses from equipment; and drag forces from the air. For stationary riders (people operating human-powered machines), however, this phenomenon becomes an important consideration in power delivery. For instance, for the milling of grains, it would be favorable to provide more or less constant power to the mill, to facilitate a consistent feed rate through the machine. For this reason, it is often prudent to introduce a means of "smoothing" the power output from the driven gear. The most common way of doing this is with a flywheel.

A flywheel is a rotating mass used in mechanical systems for energy storage. It substitutes for the inertial mass that the rider's weight provides in the mobile example. For a circular hoop (imagine a bicycle wheel) of radius 'r' and mass 'm', the moment of inertia I is defined by

$I_z = m r^2\!$

For a solid disk or cylinder of radius 'r' and mass 'm', the moment of inertia I is given by

$I_z = \frac{m r^2}{2}\,\!$

Additionally, the kinetic energy of a flywheel is given by

$E = \frac{I \omega^2}{2}\,\!$

where $\omega$ is the angular velocity of the flywheel. From initial inspection, we can see that for the same mass, the hoop has a higher moment of inertia, by a factor of 2. From an energy perspective, compared to a solid cylinder, the hoop takes twice as long to reach a steady speed, but also takes twice as long to slow down, all things held equal. The primary advantage of a solid-disk flywheel, however, is ease of manufacture.

There have been several proposed designs for a method of power smoothing that does not require a spinning mass. In particular, a reciprocating spring system and an electrical circuit employing a large capacitor have been suggested. [20] Such systems seek to improve the portability of human-powered machines. While the massless mechanical system has not seen wide distribution, the latter has found use in electrical power generation, in applications that require steady voltage input.[21] [22] To reduce the fatigue of the rider, the flywheel should store about 150 kg m^2 s^-2 (that is, roughly 150 J) of energy, as determined empirically by Wilson and Bloop.[23][24]
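To make these formulas concrete, here is a small Python sketch (my own illustration, with assumed mass, radius, and cadence values rather than figures from the article) comparing the two flywheel geometries:

import math

# Moment of inertia: hoop I = m*r^2, solid disk I = m*r^2 / 2.
def inertia_hoop(m, r):
    return m * r**2

def inertia_disk(m, r):
    return m * r**2 / 2

# Kinetic energy stored at angular velocity w (rad/s): E = I*w^2 / 2.
def flywheel_energy(I, w):
    return I * w**2 / 2

m, r = 10.0, 0.3            # assumed 10 kg flywheel of 0.3 m radius
rpm = 90                    # an assumed pedaling cadence at the crank
w = rpm * 2 * math.pi / 60  # convert rev/min to rad/s

for name, I in [("hoop", inertia_hoop(m, r)), ("disk", inertia_disk(m, r))]:
    print(f"{name}: I = {I:.3f} kg m^2, E = {flywheel_energy(I, w):.1f} J")
# For equal mass and radius, the hoop stores twice the energy of the disk.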
## Construction

Fig 6. Example of a two-man dynapod [25]

Fig 7. Realization of a one-man dynapod for threshing grain in Uganda [26]

Construction of a human-powered machine can be

## Evaluation

In the realm of electrical power generation, a notable participant in harnessing human energy is NURU Energy.[27]

## Dissemination

While the overall climate for human-powered machinery seems to revolve around bringing power to the developing world, there is also domestic interest in resurrecting some of these "antiquated" technologies. For example, Fender Blender offers a pedal-powered base, inspired by cruiser bicycle aesthetics, for use with a standard blender. Additionally, all of the plastic parts of the machine are made from recycled plastics. [28]

One success story of the implementation of human power outside of the realm of pedal power is the treadle pump. [29]

Education for Sustain [30]

1. Barnes, D.F. and W.M. Floor, Rural Energy in Developing Countries: A Challenge for Economic Development. Annual Review of Energy and the Environment, 1996. 21(1): p. 497-530.
2. Fuller, R.J. and L. Aye, Human and animal power – The forgotten renewables. Renewable Energy, 2012. 48(0): p. 326-332.
3. Woods, Robert. "A Turn of the Crank Started the Civil War." Mechanical Engineering.
4. Cyders, T.J., Design of a Human-Powered Utility Vehicle for Developing Communities, Department of Mechanical Engineering, 2008, Ohio University: Athens, OH.
5. Benkatraman, V. An electric workout through pedal power. The Christian Science Monitor, 2008.
6. Starner, T. and J.A. Paradiso, Human Generated Power for Mobile Electronics. Low Power Electronics Design, 2004.
7. Gonzalez, J.L., A. Rubio, and F. Moll, Human Powered Piezoelectric Batteries to Supply Power to Wearable Electronic Devices. International Journal of the Society of Materials Engineering for Resources, 2002. 10(1).
8. Donelan, J.M., et al., Biomechanical Energy Harvesting: Generating Electricity During Walking with Minimal User Effort. Science, 2008. 319(5864): p. 807-810.
9. Punch, 1895: London, United Kingdom.
10. Chandler, L., Redesign of a Human Powered Battery Charger for Use in Mali, Department of Mechanical Engineering, 2005, Massachusetts Institute of Technology: Cambridge, MA. p. 29.
11. Mechtenberg, A.R., et al., Human power (HP) as a viable electricity portfolio option below 20 W/Capita. Energy for Sustainable Development, 2012. 16(2): p. 125-145.
12. Raduta, R. and J. Vechakul, Bicilavadora, 2005, Massachusetts Institute of Technology: Cambridge, MA.
13. Modak, J.P., Human-Powered Flywheel Motor: Concept, Design, Dynamics and Applications, 2007.
14. Bhusal, P., A. Zahnd, and M. Eloholma, Replacing Fuel Based Lighting with Light Emitting Diodes in Developing Countries: Energy and Lighting in Rural Nepali Homes. Leukos, 2007. 3(4): p. 277-291.
15. Decker, K.D. Bike powered generators are not sustainable. Low-tech Magazine, 2011.
16. Pedal Power, in Supplement to Energy for Rural Development, 1981, National Academy Press: Washington, D.C.
17. Dean, T., The Human-Powered Home, 2008, Gabriola Island, BC, Canada: New Society Publishers.
18. Allen, J.S., In search of the massless flywheel. Human Power, 1991. 9(3).
19. Butcher, D. Pedal Power Generator - Electricity from Exercise. 2012 [cited 12/17/2012]; Available from: http://www.los-gatos.ca.us/davidbu/pedgen.html.
20. Czap, N., Stationary bike designed to create electricity, in San Francisco Gate, 2008: San Francisco, CA.
21. Wilson, D.G., Understanding Pedal Power, 1986, Volunteers in Technical Assistance: Arlington, Virginia.
22. Tiwari, P.S., et al., Pedal power for occupational activities: Effect of power output and pedalling rate on physiological responses. International Journal of Industrial Ergonomics, 2011. 41(3): p. 261-267.
23. Weir, A., The Dynapod: A Pedal Power Unit, 1980, Volunteers in Technical Assistance: Mt. Rainier.
24. One-man dynapod, Uganda, 1972, Alex Weir.
25. NURU: Energy to Empower. POWERCycle, 2012 [cited 11/26/2012]; Available from: http://nuruenergy.com/nuru-africa/the-solution/powercycle/.
26. Fender Blender. [cited 12/12/2012]; Available from: http://www.rockthebike.com/fender-blender-pro/.
27. The Treadle Pump, 1991, Development Technology Unit, University of Warwick, Department of Engineering: Coventry, UK.
28. Clarke, P., Education for Sustainability: Becoming Naturally Smart, 2012, New York, NY: Routledge. p. 140.
OpenSCAD is an incredibly powerful tool for generating functional 3D models by using code instead of a visual interface. The parts you create can be exported to STL files and are immediately ready for 3D printing. (You can download OpenSCAD here!)

As part of an experimental feature in OpenSCAD (available in the snapshot builds since October 2019), it is possible to pass higher-order functions as parameters. In this post, I'll explain how I used higher-order functions to create parametric polygons and curvature! My interest in this topic was specifically to model the involute angles in a gear. But you can use this technique for anything!

## What is a parametric curve?

Imagine you have a mathematical function which describes some kind of curve. For example:

$$-2x^2 + 2$$

You can plot this expression in a graphing calculator and perhaps see the enviable arc you're after, but how can you get that shape added to some OpenSCAD part? How about doing it in a way where you can adjust the curvature simply by altering the formula? The ability to change the shape easily by adjusting a mathematical formula and seeing the resulting output in OpenSCAD is what I mean by a parametric curve. The first order of business to discuss is, er, higher order functions.

## Higher order functions

Simply put, a higher order function is just a bit of code which takes another function as a parameter. Here's a very basic example. Let's say we want to write a test which outputs the value of any mathematical function provided to it, evaluated with the input of 5. We can express the test harness with this line of code:

function evaluate_method(fn) = fn(5);

evaluate_method will take a function as a parameter, and return the result of that function when it is passed the number 5 as an argument. To see this concept in action, we can simply pass in different mathematical expressions to it and echo the output. For example, to evaluate an expression like this:

$$f(x) = x^2$$

We can run the following code:

func1 = function(x) x * x;
echo(evaluate_method(func1)); // output is 25

Similarly, if we want to evaluate this different expression:

$$f(x) = x^3$$

The only thing that needs to change is the argument being passed into evaluate_method():

func2 = function(x) x * x * x;
echo(evaluate_method(func2)); // output is 125

In those examples, we're passing in an actual function as the argument instead of a number. The evaluate_method is able to run any function we give it, provided the expected parameters are what we think they are. This is really cool and a very powerful mechanism! With this language feature, we'll be able to accept curve functions as parameters. Among other things, using higher order functions in this way can help:

• Make our code reusable
• Allow us to swap out the definition of our geometry at a later date
• Make dynamic adjustments without changing the primary bits of code

## Interpolation

If you've ever used Excel, you may have run into the concept of interpolation. Filling out a few cells of data and then letting the application "populate the rest" is interpolation. We will be using a similar concept to map over the parametric functions and evaluate them at various points in time. That rather confusing statement can be distilled into this block of code:

function parametric_interpolation(fn, t0, t1, delta) = [for(i = [t0:delta:t1]) fn(i)];

Let's unpack this...
We're defining a new function called parametric_interpolation which accepts the following arguments:

• a higher-order function representing a mathematical expression
• a starting value
• an ending value
• a delta

Given those inputs, the function will begin passing values to the supplied function, starting at t0 and incrementing by delta each time until it reaches t1. It will collect all the values and return an array containing each output. Here's a table to further describe what is happening:

| f(x) | t | delta | output |
|------|---|-------|--------|
| 2x   | 0 | 1     | 0      |
| 2x   | 1 | 1     | 2      |
| 2x   | 2 | 1     | 4      |

In the table above, t is increasing by delta=1 each iteration; the function f(x) is then called and its output recorded. In OpenSCAD, this would result in an array of values. This is the secret sauce which makes higher-order functions super useful. But it's important to note that the code above is still incomplete. It's just evaluating a single function. To achieve total control over our parametric curve, we'll need to specify a mathematical expression for both f(x) and f(y).

## Putting it all together

In order to leverage higher-order functions to specify separate expressions for both x and y, we'll need to write a method which takes an f(x) function, an f(y) function, t0, t1, and delta, and then uses that information to generate a list of point tuples. Here's an example:

function parametric_points(fx, fy, t0=0, t1=10, delta=0.01) = [for(i = [t0:delta:t1]) [fx(i), fy(i)]];

This function looks almost identical to the previous one, with the added feature of returning two values instead of one: the x position and the y position. Now we have something to work with. Going back to our earlier example,

$$-2x^2 + 2$$

is particularly easy to incorporate by defining these math functions:

// this represents time (t). We don't need anything fancy, so we can keep this an identity fn
x = function(t) t;

// this is simply the mathematical expression in code form
y = function(t) -2 * pow(t, 2) + 2;

/* And lastly, to give our shape a bit of width when it is extruded, I've duplicated the f(y) method but added a small offset, which is used in conjunction with the polygon module to create a thickness of 0.25mm */
y2 = function(t) y(t) + .25;

Next we just need to pass those functions into our points plotter, and then use the resulting dataset with the OpenSCAD polygon module:

points_1 = parametric_points(fx=x, fy=y, t0=-1, t1=1);
points_2 = parametric_points(fx=x, fy=y2, t0=-1, t1=1);

color("lime")
linear_extrude(2)
union() {
    polygon( concat( points_1, points_2 ) );
    // Add another shape to the screen for fun
    translate([0, 0.1, 0]) square(center=true, [2,1]);
}

The full source code can be viewed on [GitHub](https://gist.github.com/SharpCoder/3c63781813ece0c1f139a414d33b1865).

## Conclusion

With this code, we've defined a generic and reusable mechanism for specifying curvature with mathematical expressions and brought the resulting shapes to life in OpenSCAD. That's all I've got today. Feel free to reach out to me on Twitter @inventor_josh if you have any questions or feedback.
Asymptotics of Solutions of the Sturm–Liouville Equation with Respect to a Parameter

Abstract

On a finite segment [0, l], we consider the differential equation

$$(a(x)y'(x))' + [\mu\rho_1(x) + \rho_2(x)]y(x) = 0$$

with a parameter $\mu \in \mathbb{C}$. In the case where $a(x), \rho(x) \in L_\infty[0, l]$, $\rho_j(x) \in L_1[0, l]$, $j = 1, 2$, $a(x) \ge m_0 > 0$ and $\rho(x) \ge m_1 > 0$ almost everywhere, and $a(x)\rho(x)$ is a function absolutely continuous on the segment $[0, l]$, we obtain exponential-type asymptotic formulas as $|\mu| \to \infty$ for a fundamental system of solutions of this equation.

English version (Springer): Ukrainian Mathematical Journal 53 (2001), no. 6, pp. 866-885.

Citation Example: Gomilko A. M., Pivovarchik V. N. Asymptotics of Solutions of the Sturm–Liouville Equation with Respect to a Parameter // Ukr. Mat. Zh. - 2001. - 53, № 6. - pp. 742-757.
### কক্ষপথের বক্ষে (In the Bosom of the Orbit)

##### Score: 1 Point

An object is orbiting Earth at a height of $100$ $km$ from the surface. What is its period in seconds?

Gravitational constant, $G=6.673\times10^{-11}$ $\mathrm{N\,m^2\,kg^{-2}}$

Mass of the Earth, $M_e = 6\times 10^{24}$ $kg$

Radius of the Earth, $R=6400$ $km$

Hint: Do you know Kepler's third law?

Astrophysics Basic Dynamics Gravity

#### Statistics

Tried 44 Solved 32 First Solve @Sabbir612
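A quick way to check an answer (my own sketch, not part of the original problem) is to apply the circular-orbit form of Kepler's third law, $T = 2\pi\sqrt{r^3/(GM_e)}$, with $r = R + h$ measured from the Earth's center:

```python
# Orbital period of a circular orbit at 100 km altitude.
import math

G = 6.673e-11        # gravitational constant, N m^2 kg^-2
M_e = 6e24           # mass of the Earth, kg
R = 6400e3           # radius of the Earth, m
h = 100e3            # orbital altitude, m

r = R + h                                       # distance from Earth's center
T = 2 * math.pi * math.sqrt(r**3 / (G * M_e))   # Kepler's third law
print(f"T = {T:.0f} s")                         # roughly 5.2e3 seconds
```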
# Wireless Powered Cooperative Jamming for Secure OFDM System

Guangchi Zhang, Jie Xu, Qingqing Wu, Miao Cui, Xueyi Li, and Fan Lin

G. Zhang, J. Xu, M. Cui, and X. Li are with the School of Information Engineering, Guangdong University of Technology, Guangzhou, China (e-mail: [email protected], [email protected], [email protected], [email protected]). J. Xu is the corresponding author. Q. Wu is with the Department of Electrical and Computer Engineering, National University of Singapore (e-mail: [email protected]). F. Lin is with Guangzhou GCI Science & Technology Co., Ltd., Guangzhou, China (e-mail: [email protected]).

###### Abstract

This paper studies the secrecy communication in an orthogonal frequency division multiplexing (OFDM) system, where a source sends confidential information to a destination in the presence of a potential eavesdropper. We employ wireless powered cooperative jamming to improve the secrecy rate of this system with the assistance of a cooperative jammer, which works in the harvest-then-jam protocol over two time-slots. In the first slot, the source sends dedicated energy signals to power the jammer; in the second slot, the jammer uses the harvested energy to jam the eavesdropper, in order to protect the simultaneous secrecy communication from the source to the destination. In particular, we consider two types of receivers at the destination, namely Type-I and Type-II receivers, which do not have and have the capability of canceling the (a-priori known) jamming signals, respectively. For both types of receivers, we maximize the secrecy rate at the destination by jointly optimizing the transmit power allocation at the source and the jammer over sub-carriers, as well as the time allocation between the two time-slots. First, we present the globally optimal solution to this problem via the Lagrange dual method, which, however, is of high implementation complexity. Next, to balance the tradeoff between algorithm complexity and performance, we propose alternative low-complexity solutions based on minorization maximization and heuristic successive optimization, respectively. Simulation results show that the proposed approaches significantly improve the secrecy rate, as compared to benchmark schemes without joint power and time allocation.

Physical layer security, wireless powered cooperative jamming, OFDM system, joint power and time allocation.

## I Introduction

With recent technical advancements in the Internet of Things (IoT), future wireless networks are envisioned to incorporate billions of low-power wireless devices to enable various industrial and commercial applications [1]. How to ensure the confidentiality of these devices' wireless communication against illegitimate eavesdropping attacks is becoming an increasingly important task for cyber-physical security. However, this task is particularly challenging, as conventional key-based cryptographic techniques are difficult to implement due to the broadcast nature of wireless communications. To overcome this issue, physical layer security has emerged as a viable anti-eavesdropping solution at the physical layer [3, 2, 4]. The key design objective in physical-layer security is to maximize the so-called secrecy rate, which is defined as the communication rate of a wireless channel, provided that eavesdroppers cannot overhear any information from this channel. In the literature, there have been various approaches proposed to improve the secrecy rate.
For example, one widely adopted approach is based on the idea of artificial noise (AN) (see, e.g., [5, 6]). In this approach, wireless transmitters send a combined version of both confidential information signals and AN, where the AN acts as jamming signals to interfere with eavesdroppers, thus avoiding the information leakage. Another celebrated approach is called cooperative jamming (see, e.g., [7, 8, 9]), where external network nodes cooperatively send jamming signals to disrupt the eavesdropping, thus helping protect the confidential information communication. As compared to the AN-based approach, cooperative jamming is able to further improve the secrecy rate by exploiting the cooperation diversity among different nodes. Cooperative jamming is also expected to have more abundant applications in the IoT era, where massive low-power wireless devices can cooperate in jamming to improve the network security. For instance, some idle devices in wireless networks can act as cooperative jammers to help ensure the secrecy communication of other actively communicating devices. Nevertheless, the practical implementation of cooperative jamming in IoT networks is hindered by the low-power nature of wireless devices, since cooperative jamming will consume energy on these devices and thus they may prefer to stay idle to save energy instead of participating in the cooperation. To overcome this issue, a new efficient method, namely wireless powered cooperative jamming, has been proposed in [10, 11, 12, 13], motivated by the recent success of wireless information and power transfer via radio frequency (RF) signals [14, 15, 16, 17, 18, 19, 20, 21, 22, 25, 26, 23, 24].¹ (¹It is worth noting that in addition to the far-field RF-based wireless power transfer, magnetic induction is a widely used near-field wireless power transfer technique for charging electronic devices [22, 26]. However, magnetic induction has a limited operating range of less than one meter in general, which is much shorter than that of the RF-based wireless power transfer, on the order of several meters. Therefore, RF-based wireless power transfer is expected to have more abundant applications to charge low-power IoT devices over a wide range, and thus is considered here in the wireless powered cooperative jamming systems.) In this method, the cooperative jamming is powered by the wireless energy transferred from external wireless transmitters, and does not require cooperative jammers to consume their own energy. Therefore, wireless powered cooperative jamming is a promising solution to inspire low-power IoT devices to cooperate in the jamming. In [10, 11], wireless powered cooperative jamming was employed to secure a point-to-point communication system in the presence of an eavesdropper, where a cooperative jammer operates in an accumulate-and-jam protocol by first harvesting the wireless energy and storing it in the battery over multiple blocks, and then using the accumulated energy for cooperative jamming. The long-term secrecy performance is optimized by adjusting jamming parameters while taking into account the channel and battery dynamics over time.
In [12, 13], wireless powered cooperative jamming was used in a secrecy two-way relaying communication system, where an eavesdropper aims to intercept the communicated information at the second hop, and multiple cooperative jammers operate in a harvest-then-jam protocol for cooperative jamming: in the first slot, the jammers harvest the wireless energy from the source, while in the second slot, they use the harvested energy to cooperatively jam the eavesdroppers. As the harvested energy is immediately used in the following slot, the harvest-then-jam protocol requires neither large-capacity energy storage nor sophisticated energy management at the cooperative jammers. For this reason, it is generally much easier to implement in practice than the accumulate-and-jam protocol.

In this paper, we consider wireless powered cooperative jamming to secure a point-to-point communication system from a source to a destination in the presence of a potential eavesdropper. Different from prior works considering single-carrier systems, we focus on the multi-carrier orthogonal frequency division multiplexing (OFDM) system, which offers the following advantages. First, note that the wireless transmission must meet the transmit power spectrum density constraints imposed by regulatory authorities. In this case, the transferred power over a narrow-band system is often limited. By contrast, using OFDM over a wideband wireless power transfer system and exploiting the channel diversity over frequency can help deliver more power to intended receivers. On the other hand, as OFDM has been widely adopted in major existing and future wireless communication networks, using it here can also help better integrate wireless power transfer and wireless communication for future wireless networks (see, e.g., [26, 27, 28, 29, 30] and references therein). The cooperative jammer works in a harvest-then-jam protocol to help the secrecy communication by dividing each transmission block into two time-slots: in the first slot, the source sends dedicated energy signals to power the jammer; while in the second slot, the jammer uses the harvested energy to interfere with the eavesdropper to protect the confidential information transmission. In general, there exists a tradeoff in the time allocation between the two slots to optimize the performance of secrecy communication, i.e., while a longer WPT time in the first slot can transfer more energy to increase the jamming power for better confusing the eavesdropper, it also reduces the effective wireless information transmission (WIT) time in the second slot for delivering confidential data. Therefore, in order to improve the secrecy rate at the destination by fully exploiting the benefit of wireless powered cooperative jamming, it is important to jointly design the time allocation, together with the transmit power allocation at the source and the jammer over sub-carriers, while taking into account the energy harvesting constraint at the jammer. We maximize the secrecy rate via joint time and power allocation by particularly considering two types of receivers at the destination, namely Type-I and Type-II receivers, which do not have and have the capability of canceling the (a-priori known) jamming signals, respectively (see Section II for the details). Under both receiver types, however, the two joint time and power allocation problems are non-convex and usually difficult to solve.
To tackle such challenges, we propose to recast each problem into a two-layer form, in which the outer layer corresponds to a single-variable time allocation problem and the inner layer is a sub-carrier transmit power allocation problem under given time allocation. The outer-layer time allocation problem is solved via a one-dimensional search. As for the inner-layer power allocation problem, we first present the globally optimal solution via the Lagrange dual method, which, however, is of high implementation complexity. Next, to balance the tradeoff between implementation complexity and performance, we further develop two suboptimal solutions based on minorization maximization and heuristic successive optimization, respectively. Simulation results show that the proposed approaches achieve a significantly higher secrecy rate than benchmark schemes without joint time and power allocation, and that the minorization maximization based suboptimal solution achieves near-optimal performance as compared to the optimal solution.

It is worth noting that in the literature, there have been several existing works [28, 29, 30] investigating physical layer security over OFDM systems. For example, the secrecy rate of OFDM systems was investigated in [28] under a Rayleigh fading channel setup without using AN or cooperative jamming. In [29] and [30], the AN-based approach and cooperative jamming were considered to improve the secrecy rate of OFDM systems, respectively. Different from these prior studies, in this paper the cooperative jamming is powered by WPT, and thus requires a more sophisticated design with joint time and power allocation for both WPT and jamming. This is new and has not been addressed.

The remainder of the paper is organized as follows. Section II presents the system model and problem formulation. Sections III and IV propose three efficient approaches to obtain solutions to the two joint time and power allocation problems with Type-I and Type-II destination receivers, respectively. Section V presents simulation results to validate the performance of our proposed joint design as compared to other benchmark schemes. Finally, Section VI concludes this paper.

## II. System Model and Problem Formulation

### II-A. System Model

As shown in Fig. 1, we consider secrecy communication in an OFDM system with a source communicating with a destination in the presence of a potential eavesdropper. We employ wireless powered cooperative jamming to secure this system, where a cooperative jammer uses the transferred energy from the source to help jam the eavesdropper against its eavesdropping. Suppose that the OFDM system consists of a total of $N$ orthogonal sub-carriers, and denote the set of sub-carriers as $\mathcal{N} \triangleq \{1, \ldots, N\}$. We consider a block-based quasi-static channel model by assuming that the wireless channels remain constant over each transmission block and may change from one block to another. We focus on one particular block with a length of $T$, and denote $\mathbf{h}_J = [h_{J,1}, \ldots, h_{J,N}]^T$, $\mathbf{h}_D$, $\mathbf{h}_E$, $\mathbf{g}_D$, and $\mathbf{g}_E$ (defined similarly) as the vectors collecting the channel coefficients of all the sub-carriers from the source to the jammer, from the source to the destination, from the source to the eavesdropper, from the jammer to the destination, and from the jammer to the eavesdropper, respectively. Here, the superscript $T$ denotes the transpose operation.
It is assumed that the source, destination, and the cooperative jammer perfectly know the global channel state information (CSI) $\mathbf{h}_J$, $\mathbf{h}_D$, $\mathbf{h}_E$, $\mathbf{g}_D$, and $\mathbf{g}_E$, in order to obtain the performance upper bound of the wireless powered cooperative jamming system. Specifically, the CSI $\mathbf{h}_J$, $\mathbf{h}_D$, and $\mathbf{g}_D$ associated with these users can be obtained via efficient channel estimation and feedback among them, while $\mathbf{h}_E$ and $\mathbf{g}_E$ can be obtained by monitoring the possible transmission activities of the eavesdropper, as commonly assumed in the physical-layer security literature [3, 4, 5, 6, 7]. Note that in practice the CSI acquisition may consume additional energy at the cooperative jammer, and the obtained CSI may not be perfect due to channel estimation and feedback errors. However, how to address these issues in practice is left for future work.

We consider a harvest-then-jam protocol for the cooperative jammer by dividing each transmission block into two time-slots with lengths $\alpha_1 T$ and $\alpha_2 T$, respectively, where $\alpha_1$ and $\alpha_2$ denote the portions of the two time-slots with

$$\alpha_1 + \alpha_2 = 1, \quad 0 \le \alpha_1 \le 1, \quad 0 \le \alpha_2 \le 1. \quad (1)$$

In the first time-slot, the source sends wireless energy to power the cooperative jammer; in the second time-slot, the source transmits confidential information to the destination while the jammer simultaneously uses the energy harvested in the first time-slot to cooperate in jamming the eavesdropper. The detailed operation in the two slots is presented in the following.

First, consider the WPT from the source to the jammer in the first time-slot. Over each sub-carrier $n \in \mathcal{N}$, let $s_{PT,n}$ denote the energy signal transmitted by the source, which is assumed to be a random variable with variance $\mathbb{E}[|s_{PT,n}|^2] = p_{PT,n}$. Here, $p_{PT,n}$ denotes the transmit power for WPT at the source over the sub-carrier $n$, and $\mathbb{E}[\cdot]$ denotes the statistical expectation. The energy harvested by the jammer is

$$E_{EH} = \alpha_1 T \eta \sum_{n=1}^{N} p_{PT,n} |h_{J,n}|^2, \quad (2)$$

where $\eta$ denotes the energy harvesting efficiency at the jammer. Note that, similarly as in [14, 15, 16, 17, 18, 19, 20, 21], we adopt a linear energy harvesting model in (2) by considering that the harvested power at the jammer lies in the linear regime of the energy harvester. In the literature, there have been various works [33, 34, 35, 36] investigating wireless power transfer under the non-linearity of the energy harvester, while how to extend wireless powered cooperative jamming to such a scenario is left for future work.

Next, consider the cooperative jamming in the second time-slot. Over the sub-carrier $n$, let $s_{IT,n}$ and $s_{J,n}$ denote the confidential information signal transmitted by the source and the jamming signal transmitted by the jammer, respectively. The received signals at the destination and the eavesdropper over the sub-carrier $n$ are respectively

$$y_{D,n} = h_{D,n} s_{IT,n} + g_{D,n} s_{J,n} + n_{D,n}, \quad (3)$$
$$y_{E,n} = h_{E,n} s_{IT,n} + g_{E,n} s_{J,n} + n_{E,n}, \quad (4)$$

where $n_{D,n}$ and $n_{E,n}$ denote the Gaussian noise at the receivers of the destination and the eavesdropper, with mean zero and variances $\sigma_D^2$ and $\sigma_E^2$, respectively. Assume that Gaussian signaling is employed for both $s_{IT,n}$ and $s_{J,n}$, which are thus cyclic symmetric complex Gaussian (CSCG) random variables with mean zero and variances $p_{IT,n}$ and $p_{J,n}$, with $p_{IT,n}$ and $p_{J,n}$ denoting the transmit power of the source and the jamming power of the jammer over the sub-carrier $n$, respectively. Let $P_S$ denote the maximum transmit sum power of the source over all sub-carriers, and $P_{S,peak}$ denote the peak transmit power of the source over each sub-carrier. Then we have

$$\alpha_1 \sum_{n=1}^{N} p_{PT,n} + \alpha_2 \sum_{n=1}^{N} p_{IT,n} \le P_S, \quad (5a)$$
$$0 \le p_{PT,n}, \; p_{IT,n} \le P_{S,peak}, \quad n \in \mathcal{N}. \quad (5b)$$
As for the jammer, as it uses the wireless energy harvested in (2) in the first time-slot to supply the cooperative jamming in the second time-slot, it is subject to the energy harvesting constraint that the total energy used for jamming in the second time-slot cannot exceed $E_{EH}$, i.e.,

$$\alpha_2 T \sum_{n=1}^{N} p_{J,n} \le E_{EH} = \alpha_1 T \eta \sum_{n=1}^{N} p_{PT,n} |h_{J,n}|^2, \quad (6a)$$
$$0 \le p_{J,n} \le P_{J,peak}, \quad n \in \mathcal{N}, \quad (6b)$$

where $P_{J,peak}$ denotes the peak transmit power of the jammer over each sub-carrier.

In particular, we consider two types of receivers at the destination, namely Type-I and Type-II receivers [16], which do not have and have the capability of canceling the jamming signals $s_{J,n}$'s from the jammer, respectively. In order for a Type-II receiver to successfully cancel the jamming signals, such signals should be securely shared between the jammer and the destination before the cooperative jamming [29, 16, 31, 32]. This can be practically implemented as follows [32]. First, the same jamming signal generators and seed tables are pre-stored at both the jammer and destination (but not available at the eavesdropper). Next, before each transmission phase, one seed is randomly chosen from the seed table and the index of this seed is shared between the jammer and destination. In particular, the two-step phase-shift modulation-based method in [32] can be applied for the seed index sharing as follows. In the first step, the destination sends a pilot signal for the jammer to estimate the channel phase between the destination and jammer. In the second step, the jammer randomly chooses a seed index, and modulates it over the phase of the transmitted signal after pre-compensating the channel phase that it estimated in the previous step. The destination is able to decode the seed index sent by the jammer from the phases of the received signal. Since the length of this seed index sharing procedure is very short and the channel phase between the destination and jammer is different from that between the destination/jammer and the eavesdropper, the eavesdropper does not know the channel phase between the destination and jammer, and thus is not able to decode the signal containing the seed index in such a short time period.

For Type-I and Type-II receivers, the secrecy rates of the secure OFDM system over the $N$ sub-carriers are respectively given by

$$R^{(I)}_{sec} = \sum_{n=1}^{N} \left[ R^{(I)}_{SD,n} - R_{SE,n} \right]^+, \quad (7)$$
$$R^{(II)}_{sec} = \sum_{n=1}^{N} \left[ R^{(II)}_{SD,n} - R_{SE,n} \right]^+, \quad (8)$$

where $[x]^+ \triangleq \max(x, 0)$. Here, $R^{(I)}_{SD,n}$ and $R^{(II)}_{SD,n}$ are the achievable rates over the sub-carrier $n$ from the source to the destination for Type-I and Type-II receivers, respectively, and $R_{SE,n}$ denotes the achievable rate from the source to the eavesdropper over the sub-carrier $n$, given by

$$R^{(I)}_{SD,n} = \alpha_2 \log_2\left(1 + \frac{p_{IT,n} |h_{D,n}|^2}{p_{J,n} |g_{D,n}|^2 + \sigma_D^2}\right), \quad (9)$$
$$R^{(II)}_{SD,n} = \alpha_2 \log_2\left(1 + \frac{p_{IT,n} |h_{D,n}|^2}{\sigma_D^2}\right), \quad (10)$$
$$R_{SE,n} = \alpha_2 \log_2\left(1 + \frac{p_{IT,n} |h_{E,n}|^2}{p_{J,n} |g_{E,n}|^2 + \sigma_E^2}\right). \quad (11)$$

### II-B. Problem Formulation

Our objective is to maximize the secrecy rates $R^{(I)}_{sec}$ in (7) and $R^{(II)}_{sec}$ in (8) for both types of destination receivers, subject to the transmit power constraint in (5) at the source, the energy harvesting constraint in (6) at the jammer, and the time constraint in (1). The decision variables include the transmit power allocation $p_{PT,n}$'s (for WPT) and $p_{IT,n}$'s (for WIT) at the source, the jamming power allocation $p_{J,n}$'s at the jammer, and the time allocation $\alpha_1$ and $\alpha_2$. For the Type-I receiver, we mathematically formulate the secrecy rate maximization problem as

$$\text{(P1)}: \max_{\alpha_1, \alpha_2, \mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J} \; \alpha_2 \sum_{n=1}^{N} \left[ \log_2\left(1 + \frac{p_{IT,n} |h_{D,n}|^2}{p_{J,n} |g_{D,n}|^2 + \sigma_D^2}\right) - \log_2\left(1 + \frac{p_{IT,n} |h_{E,n}|^2}{p_{J,n} |g_{E,n}|^2 + \sigma_E^2}\right) \right] \quad (12)$$
$$\text{s.t. } (1), (5a), (5b), (6a), (6b),$$

where $\mathbf{p}_{PT} \triangleq [p_{PT,1}, \ldots, p_{PT,N}]^T$, $\mathbf{p}_{IT} \triangleq [p_{IT,1}, \ldots, p_{IT,N}]^T$, and $\mathbf{p}_J \triangleq [p_{J,1}, \ldots, p_{J,N}]^T$.
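To make the model concrete, here is a minimal numerical sketch (my own illustration, not from the paper) that evaluates the harvested energy in (2) and the Type-I secrecy rate in (7) for fixed, unoptimized allocations; all channel values and parameters are arbitrary illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                        # number of sub-carriers (assumed)
eta, T = 0.5, 1.0            # harvesting efficiency and block length (assumed)
alpha1, alpha2 = 0.3, 0.7    # time allocation, alpha1 + alpha2 = 1
sigma_D2 = sigma_E2 = 1e-3   # noise variances (assumed)

# Rayleigh-fading channel power gains |h|^2, |g|^2 (illustrative)
hJ2, hD2, hE2 = (rng.exponential(1.0, N) for _ in range(3))
gD2, gE2 = (rng.exponential(1.0, N) for _ in range(2))

# Uniform power allocations; a real scheme would optimize these per (P1)
p_PT = np.full(N, 1.0)
p_IT = np.full(N, 1.0)

E_EH = alpha1 * T * eta * np.sum(p_PT * hJ2)   # harvested energy, eq. (2)
p_J = np.full(N, E_EH / (alpha2 * T * N))      # meets eq. (6a) with equality

R_SD = alpha2 * np.log2(1 + p_IT * hD2 / (p_J * gD2 + sigma_D2))   # eq. (9)
R_SE = alpha2 * np.log2(1 + p_IT * hE2 / (p_J * gE2 + sigma_E2))   # eq. (11)
R_sec = np.sum(np.maximum(R_SD - R_SE, 0.0))                       # eq. (7)
print(f"Type-I secrecy rate: {R_sec:.3f} bits/s/Hz")
```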
Note that in the objective function of problem (P1) we have omitted the positive operation $[\cdot]^+$, which is due to the fact that the optimal value of each summation term of the objective of problem (P1) must be non-negative, and thus the problems with and without the positive operation have the same optimal value and the same optimal solution.² (²This fact can be proved by contradiction: if a summation term were negative, we could increase its value to zero by setting $p_{IT,n} = 0$ without violating the constraints.) Similarly, for the Type-II receiver, the secrecy rate maximization problem is formulated as

$$\text{(P2)}: \max_{\alpha_1, \alpha_2, \mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J} \; \alpha_2 \sum_{n=1}^{N} \left[ \log_2\left(1 + \frac{p_{IT,n} |h_{D,n}|^2}{\sigma_D^2}\right) - \log_2\left(1 + \frac{p_{IT,n} |h_{E,n}|^2}{p_{J,n} |g_{E,n}|^2 + \sigma_E^2}\right) \right] \quad (13)$$
$$\text{s.t. } (1), (5a), (5b), (6a), (6b).$$

Note that problems (P1) and (P2) are non-convex, as their objective functions are non-concave. As a result, they are difficult to solve in general. In the following two sections, we tackle such difficulties for (P1) and (P2), respectively.

## III. Solution to Problem (P1) with Type-I Destination Receiver

First, consider problem (P1) with the Type-I destination receiver. We solve this problem by formulating it in a nested form:

$$\max_{\alpha_2} \; \alpha_2 R^{(I)}(\alpha_2), \quad \text{s.t. } 0 \le \alpha_2 \le 1, \quad (14)$$

where

$$R^{(I)}(\alpha_2) = \max_{\mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J} \; \sum_{n=1}^{N} \left[ \log_2\left(1 + \frac{p_{IT,n} |h_{D,n}|^2}{p_{J,n} |g_{D,n}|^2 + \sigma_D^2}\right) - \log_2\left(1 + \frac{p_{IT,n} |h_{E,n}|^2}{p_{J,n} |g_{E,n}|^2 + \sigma_E^2}\right) \right] \quad (15a)$$
$$\text{s.t. } (1 - \alpha_2) \sum_{n=1}^{N} p_{PT,n} + \alpha_2 \sum_{n=1}^{N} p_{IT,n} \le P_S, \quad (15b)$$
$$0 \le p_{PT,n}, \; p_{IT,n} \le P_{S,peak}, \quad n \in \mathcal{N}, \quad (15c)$$
$$\alpha_2 \sum_{n=1}^{N} p_{J,n} \le (1 - \alpha_2) \eta \sum_{n=1}^{N} p_{PT,n} |h_{J,n}|^2, \quad (15d)$$
$$0 \le p_{J,n} \le P_{J,peak}, \quad n \in \mathcal{N}. \quad (15e)$$

Here, the outer-layer problem (14) corresponds to the time allocation via optimizing $\alpha_2$, while the inner-layer problem (15) corresponds to the joint power allocation optimization under given time allocation. We solve problem (P1) by first solving (15) under any given $\alpha_2$, and then adopting a one-dimensional search over the interval $[0, 1]$ to find the optimal $\alpha_2$ for (14). In the following, we focus on solving the non-convex inner-layer problem (15) under given $\alpha_2$.

### III-A. Optimal Solution to Problem (15) via the Lagrange Dual Method

First, we present the optimal solution to problem (15). Despite the non-convexity, problem (15) can be shown to satisfy the "time-sharing" condition defined in [37] as the number of sub-carriers tends to infinity, and the duality gap is zero in this case.³ (³It is observed in our simulations that, for the values of $N$ considered, the duality gap for problem (15) is negligibly small and thus can be ignored.) Hence, we apply the Lagrange dual method [39] to find its optimal solution. The partial Lagrangian of problem (15) is

$$\mathcal{L}^{(I)}(\mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J, \lambda, \mu) = \sum_{n=1}^{N} \left[ \log_2\left(1 + \frac{p_{IT,n} |h_{D,n}|^2}{p_{J,n} |g_{D,n}|^2 + \sigma_D^2}\right) - \log_2\left(1 + \frac{p_{IT,n} |h_{E,n}|^2}{p_{J,n} |g_{E,n}|^2 + \sigma_E^2}\right) \right] + \lambda \left[ P_S - (1 - \alpha_2) \sum_{n=1}^{N} p_{PT,n} - \alpha_2 \sum_{n=1}^{N} p_{IT,n} \right] + \mu \left[ (1 - \alpha_2) \eta \sum_{n=1}^{N} p_{PT,n} |h_{J,n}|^2 - \alpha_2 \sum_{n=1}^{N} p_{J,n} \right], \quad (16)$$

where $\lambda$ and $\mu$ are the dual variables associated with the constraints (15b) and (15d), respectively. The dual function is defined as

$$g(\lambda, \mu) = \max_{\mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J} \; \mathcal{L}^{(I)}(\mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J, \lambda, \mu) \quad \text{s.t. } 0 \le p_{PT,n} \le P_{S,peak}, \; 0 \le p_{IT,n} \le P_{S,peak}, \; 0 \le p_{J,n} \le P_{J,peak}, \; \forall n. \quad (17)$$

Then, the dual problem of (15) is

$$\min_{\lambda, \mu} \; g(\lambda, \mu) \quad \text{s.t. } \lambda \ge 0, \; \mu \ge 0. \quad (18)$$

Due to the strong duality between problem (15) and the dual problem (18), in the following we solve problem (15) by first obtaining $g(\lambda, \mu)$ under given $\lambda$ and $\mu$ via solving problem (17), and then finding the optimal $\lambda$ and $\mu$ that minimize $g(\lambda, \mu)$ for solving (18).

First, consider problem (17) under any given $\lambda$ and $\mu$. In this case, problem (17) can be decomposed into the following subproblems by removing irrelevant terms, where the subproblems in (19) and (20) are each for one sub-carrier $n$:

$$\max_{p_{PT,n}} \; -\lambda (1 - \alpha_2) p_{PT,n} + \mu (1 - \alpha_2) \eta |h_{J,n}|^2 p_{PT,n} \quad \text{s.t. } 0 \le p_{PT,n} \le P_{S,peak}, \quad (19)$$
$$\max_{p_{IT,n}, p_{J,n}} \; \log_2\left(1 + \frac{p_{IT,n} |h_{D,n}|^2}{p_{J,n} |g_{D,n}|^2 + \sigma_D^2}\right) - \log_2\left(1 + \frac{p_{IT,n} |h_{E,n}|^2}{p_{J,n} |g_{E,n}|^2 + \sigma_E^2}\right) - \lambda \alpha_2 p_{IT,n} - \mu \alpha_2 p_{J,n} \quad \text{s.t. } 0 \le p_{IT,n} \le P_{S,peak}, \; 0 \le p_{J,n} \le P_{J,peak}. \quad (20)$$

As for subproblem (19), since the objective function is linear over $p_{PT,n}$, the optimal solution is evidently

$$p^*_{PT,n} = \begin{cases} P_{S,peak}, & \mu \eta |h_{J,n}|^2 > \lambda, \\ 0, & \mu \eta |h_{J,n}|^2 < \lambda. \end{cases} \quad (21)$$

Note that if $\mu \eta |h_{J,n}|^2 = \lambda$, $p^*_{PT,n}$ is not unique and can take any arbitrary value within $[0, P_{S,peak}]$. In this case, we set $p^*_{PT,n} = 0$ only for solving problem (17), which may not be the optimal solution of $p_{PT,n}$ for problem (15) in general.

As for subproblem (20), the optimization variables $p_{IT,n}$ and $p_{J,n}$ are coupled, making (20) difficult to solve. To handle this issue, we first obtain the optimal $p_{IT,n}$ under any given $p_{J,n}$, and then apply a one-dimensional search to find the optimal $p_{J,n}$ within $[0, P_{J,peak}]$. To find the optimal $p_{IT,n}$ for problem (20) under given $p_{J,n}$, we define

$$a_n \triangleq \frac{|h_{D,n}|^2}{p_{J,n} |g_{D,n}|^2 + \sigma_D^2}, \quad (22)$$
$$b_n \triangleq \frac{|h_{E,n}|^2}{p_{J,n} |g_{E,n}|^2 + \sigma_E^2}. \quad (23)$$

When $a_n \le b_n$, the objective function of (20) is non-increasing with respect to $p_{IT,n}$, and the optimal solution of $p_{IT,n}$ should be zero. When $a_n > b_n$, the objective function of (20) is concave with respect to $p_{IT,n}$, and the optimal solution can be obtained by checking its first-order derivative. Therefore, the optimal $p_{IT,n}$ for problem (20) under given $p_{J,n}$ is

$$p^*_{IT,n}(p_{J,n}) = \begin{cases} 0, & a_n \le b_n, \\ \min\left([p^*_n]^+, P_{S,peak}\right), & a_n > b_n, \end{cases} \quad (24)$$

where

$$p^*_n = \sqrt{\left(\frac{1}{2b_n} - \frac{1}{2a_n}\right)^2 + \frac{1}{\lambda \alpha_2 \ln 2}\left(\frac{1}{b_n} - \frac{1}{a_n}\right)} - \frac{1}{2b_n} - \frac{1}{2a_n}. \quad (25)$$

In addition, let $p^*_{J,n}$ denote the optimal $p_{J,n}$ for problem (20), obtained via the one-dimensional search. Then $p^*_{IT,n}(p^*_{J,n})$ becomes the optimal solution of $p_{IT,n}$ for (20), denoted by $p^*_{IT,n}$. By combining them with $p^*_{PT,n}$ for (19), the optimal solution to (17) under given $(\lambda, \mu)$ is found.

Next, we solve the dual problem (18). As this problem is convex but may not be differentiable in general, we find the optimal $(\lambda, \mu)$ by applying the ellipsoid method [39]. The required subgradients of $g(\lambda, \mu)$ with respect to $\lambda$ and $\mu$ are respectively given by

$$P_S - (1 - \alpha_2) \sum_{n=1}^{N} p^*_{PT,n} - \alpha_2 \sum_{n=1}^{N} p^*_{IT,n}, \quad (26)$$
$$(1 - \alpha_2) \eta \sum_{n=1}^{N} p^*_{PT,n} |h_{J,n}|^2 - \alpha_2 \sum_{n=1}^{N} p^*_{J,n}. \quad (27)$$

Therefore, the optimal solution of (18) can be obtained as $(\lambda^*, \mu^*)$. With the optimal dual variables at hand, the corresponding $p^*_{IT,n}$'s and $p^*_{J,n}$'s, obtained by solving problem (20), become the optimal solution to problem (15).

Now, it remains to obtain the optimal solution of $p_{PT,n}$'s for problem (15). In general, the optimal solution of $p_{PT,n}$'s cannot be obtained from (21), since the solution is not unique if $\mu^* \eta |h_{J,n}|^2 = \lambda^*$. Fortunately, it can be shown that, given $\alpha_2$, $p^*_{IT,n}$'s, and $p^*_{J,n}$'s, any $p_{PT,n}$'s that satisfy the constraints (15b), (15c), and (15d) are optimal for problem (15). Thus we can find the $p_{PT,n}$'s by solving the following feasibility problem:

$$\text{find } \mathbf{p}_{PT} \quad (28a)$$
$$\text{s.t. } (1 - \alpha_2) \sum_{n=1}^{N} p_{PT,n} + \alpha_2 \sum_{n=1}^{N} p^*_{IT,n} \le P_S, \quad (28b)$$
$$0 \le p_{PT,n} \le P_{S,peak}, \quad n \in \mathcal{N}, \quad (28c)$$
$$\alpha_2 \sum_{n=1}^{N} p^*_{J,n} \le (1 - \alpha_2) \eta \sum_{n=1}^{N} p_{PT,n} |h_{J,n}|^2. \quad (28d)$$

A solution to problem (28) can be obtained by solving the following problem:

$$\max_{\mathbf{p}_{PT}} \; \sum_{n=1}^{N} p_{PT,n} |h_{J,n}|^2 \quad \text{s.t. } (28b), (28c). \quad (29)$$

This is because any feasible solution to problem (28) is feasible for problem (29), and the optimal solution to (29) maximizes the right-hand side of (28d); thus, whenever (28) is feasible, the optimal solution to (29) must be a solution to problem (28). Let $\hat{k} = \left\lfloor \frac{P_S - \alpha_2 \sum_{n=1}^{N} p^*_{IT,n}}{(1 - \alpha_2) P_{S,peak}} \right\rfloor$, where $\lfloor x \rfloor$ denotes the largest integer not exceeding $x$, and denote $|\tilde{h}_{J,k}|$ as the $k$-th largest value in $\{|h_{J,n}|\}_{n=1}^{N}$. The optimal solution to problem (29) is

$$p^*_{PT,n} = \begin{cases} P_{S,peak}, & |h_{J,n}| > |\tilde{h}_{J,\hat{k}+1}|, \\ \dfrac{P_S - \alpha_2 \sum_{n=1}^{N} p^*_{IT,n}}{1 - \alpha_2} - \hat{k} P_{S,peak}, & |h_{J,n}| = |\tilde{h}_{J,\hat{k}+1}|, \\ 0, & |h_{J,n}| < |\tilde{h}_{J,\hat{k}+1}|. \end{cases} \quad (30)$$

Using (30), we obtain the closed-form optimal solution of the $p_{PT,n}$'s for problem (15). In summary, the overall algorithm is presented in Algorithm 1. Denote the required accuracy of the one-dimensional search for finding $p^*_{J,n}$ and the convergence accuracy of the ellipsoid method as $\epsilon_1$ and $\epsilon_2$, respectively.
The complexity of Algorithm 1 for finding the optimal solution is governed by the per-sub-carrier one-dimensional searches and by the number of ellipsoid iterations, which depends on $R$ and $L$, the radius and Lipschitz constant of the initial ellipsoid, respectively [40].

### III-B. Minorization Maximization (MM)

Although the Lagrange dual method can find the optimal solution, it needs an exhaustive search over $p_{J,n}$ to find the optimal powers $p^*_{IT,n}$ and $p^*_{J,n}$ for each sub-carrier $n$. As a result, the computational complexity is rather high and even prohibitive for large $N$. Here, we propose a suboptimal approach to solve problem (15) based on the MM approach [41], which avoids the exhaustive search and obtains the power allocation solution iteratively. To facilitate the description, we rewrite (15) as

$$\max_{\mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J} \; \sum_{n=1}^{N} \Big[ \ln\left(p_{IT,n} |h_{D,n}|^2 + p_{J,n} |g_{D,n}|^2 + \sigma_D^2\right) - \ln\left(p_{J,n} |g_{D,n}|^2 + \sigma_D^2\right) - \ln\left(p_{IT,n} |h_{E,n}|^2 + p_{J,n} |g_{E,n}|^2 + \sigma_E^2\right) + \ln\left(p_{J,n} |g_{E,n}|^2 + \sigma_E^2\right) \Big] \quad \text{s.t. } (15b)\text{–}(15e), \quad (31)$$

where the property $\log_2(x) = \ln(x)/\ln 2$ is used (the constant factor $1/\ln 2$ does not affect the maximizer). The MM approach solves this problem iteratively as follows: in each iteration, it first constructs a surrogate function that is a concave lower bound of the objective function of the original problem, and then maximizes the surrogate function within the feasible region of the original problem to obtain a feasible solution. The iteration terminates when the series of obtained feasible solutions converges.

Without loss of generality, we consider the $(k+1)$-th iteration with $k \ge 0$. Suppose that $p^{(k)}_{PT,n}$, $p^{(k)}_{IT,n}$, and $p^{(k)}_{J,n}$ denote the solution obtained in the $k$-th iteration. We show how to find $p^{(k+1)}_{PT,n}$, $p^{(k+1)}_{IT,n}$, and $p^{(k+1)}_{J,n}$ in the $(k+1)$-th iteration. Note that the first-order Taylor expansions of the convex functions $-\ln(p_{J,n} |g_{D,n}|^2 + \sigma_D^2)$ and $-\ln(p_{IT,n} |h_{E,n}|^2 + p_{J,n} |g_{E,n}|^2 + \sigma_E^2)$ around $p^{(k)}_{J,n}$ and $(p^{(k)}_{IT,n}, p^{(k)}_{J,n})$ are their respective global under-estimators [39]. Therefore, we have

$$-\ln\left(p_{J,n} |g_{D,n}|^2 + \sigma_D^2\right) \ge -\frac{|g_{D,n}|^2 (p_{J,n} - p^{(k)}_{J,n})}{p^{(k)}_{J,n} |g_{D,n}|^2 + \sigma_D^2} - \ln\left(p^{(k)}_{J,n} |g_{D,n}|^2 + \sigma_D^2\right), \quad (32)$$

$$-\ln\left(p_{IT,n} |h_{E,n}|^2 + p_{J,n} |g_{E,n}|^2 + \sigma_E^2\right) \ge -\frac{|h_{E,n}|^2 (p_{IT,n} - p^{(k)}_{IT,n}) + |g_{E,n}|^2 (p_{J,n} - p^{(k)}_{J,n})}{p^{(k)}_{IT,n} |h_{E,n}|^2 + p^{(k)}_{J,n} |g_{E,n}|^2 + \sigma_E^2} - \ln\left(p^{(k)}_{IT,n} |h_{E,n}|^2 + p^{(k)}_{J,n} |g_{E,n}|^2 + \sigma_E^2\right). \quad (33)$$

We construct a surrogate function of the objective function in (31) by replacing these two terms with their respective first-order Taylor expansions. The maximization of the surrogate function within the feasible region of (31) is then expressed as

$$\max_{\mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J} \; \sum_{n=1}^{N} \Bigg[ \ln\left(p_{IT,n} |h_{D,n}|^2 + p_{J,n} |g_{D,n}|^2 + \sigma_D^2\right) + \ln\left(p_{J,n} |g_{E,n}|^2 + \sigma_E^2\right) - \frac{|g_{D,n}|^2 p_{J,n}}{p^{(k)}_{J,n} |g_{D,n}|^2 + \sigma_D^2} - \frac{|h_{E,n}|^2 p_{IT,n} + |g_{E,n}|^2 p_{J,n}}{p^{(k)}_{IT,n} |h_{E,n}|^2 + p^{(k)}_{J,n} |g_{E,n}|^2 + \sigma_E^2} \Bigg] \quad \text{s.t. } (15b)\text{–}(15e), \quad (34)$$

where the constant terms in the objective function are removed. Since the first and second summation terms in the objective function of (34) are concave with respect to $p_{IT,n}$ and $p_{J,n}$, and the third and fourth summation terms are linear, the objective function of (34) is concave. Furthermore, the constraint functions in (15b)–(15e) are all convex, so the feasible region of (34) is convex. As a result, problem (34) is convex. We solve it using the Lagrange dual method given in Appendix A, without the one-dimensional exhaustive search applied in the optimal approach, and thus at lower complexity.

In summary, we have the MM approach as in Algorithm 2. Since problem (34) maximizes the surrogate function, which is a lower bound of the objective function of problem (15), and the lower bound coincides with the objective of (15) at the given point $(p^{(k)}_{IT,n}, p^{(k)}_{J,n})$, the objective value of problem (15) attained by the solution of problem (34) is non-decreasing over the iterations. As the optimal value of (15) is bounded from above, the MM approach is guaranteed to converge to at least a local optimum [41]. The complexity of the MM approach is the iteration number $K$ times the per-iteration cost of solving the convex problem (34).
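Regardless of which inner solver is used, the outer-layer time allocation in (14) is a plain one-dimensional search. The sketch below (my illustration, not the paper's code) shows that wrapper, with the inner problem (15) stubbed out as a hypothetical `inner_secrecy_rate` function standing in for any of the three approaches above:

```python
import numpy as np

def inner_secrecy_rate(alpha2):
    """Hypothetical stand-in for solving the inner problem (15):
    returns R(alpha2), the best sum secrecy rate for this time split.
    A dummy concave profile is used here purely for illustration."""
    return 2.0 * np.log1p(5 * (1 - alpha2))  # placeholder, not eq. (15)

def outer_time_allocation(step=1e-2):
    # One-dimensional search over alpha2 in [0, 1], as in problem (14):
    # maximize alpha2 * R(alpha2).
    grid = np.arange(0.0, 1.0 + step, step)
    values = [a2 * inner_secrecy_rate(a2) for a2 in grid]
    best = int(np.argmax(values))
    return grid[best], values[best]

alpha2_star, rate_star = outer_time_allocation()
print(f"alpha2* = {alpha2_star:.2f}, secrecy rate = {rate_star:.3f}")
```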
### III-C. Heuristic Successive Optimization

The previous two approaches are implemented iteratively and thus may have relatively high computational complexity. To overcome this issue, we further propose a low-complexity heuristic successive optimization that finds $\mathbf{p}_{PT}$, $\mathbf{p}_{IT}$, and $\mathbf{p}_J$ successively without any iteration. To this end, we decouple the variables $\mathbf{p}_{PT}$ and $\mathbf{p}_{IT}$ in the constraint (15b), and obtain the following problem:

$$\max_{\mathbf{p}_{PT}, \mathbf{p}_{IT}, \mathbf{p}_J} \; \sum_{n=1}^{N} \left[ \ln\left(1 + \frac{p_{IT,n} |h_{D,n}|^2}{p_{J,n} |g_{D,n}|^2 + \sigma_D^2}\right) - \ln\left(1 + \frac{p_{IT,n} |h_{E,n}|^2}{p_{J,n} |g_{E,n}|^2 + \sigma_E^2}\right) \right] \quad (35a)$$
$$\text{s.t. } \sum_{n=1}^{N} p_{PT,n} \le P_S, \; 0 \le p_{PT,n} \le P_{S,peak}, \; \forall n, \quad (35b)$$
$$\sum_{n=1}^{N} p_{IT,n} \le P_S, \; 0 \le p_{IT,n} \le P_{S,peak}, \; \forall n, \quad (35c)$$
$$\sum_{n=1}^{N} p_{J,n} \le \frac{1 - \alpha_2}{\alpha_2} P_{EH}, \; 0 \le p_{J,n} \le P_{J,peak}, \; \forall n, \quad (35d)$$

where $P_{EH} \triangleq \eta \sum_{n=1}^{N} p_{PT,n} |h_{J,n}|^2$ denotes the harvested power at the jammer. Problem (35) is obtained from (15) by replacing the constraints (15b) and (15c) with (35b) and (35c). Since any variables $\mathbf{p}_{PT}$, $\mathbf{p}_{IT}$, and $\mathbf{p}_J$ satisfying (35b) and (35c) must satisfy (15b) and (15c), the feasible region of problem (35) is a subset of that of (15). Therefore, solving (35) results in a feasible solution to (15) and achieves a lower bound on it. Next, we solve problem (35) by finding $\mathbf{p}_{PT}$, $\mathbf{p}_{IT}$, and $\mathbf{p}_J$ successively as follows.

1) Solution of $\mathbf{p}_{PT}$: Note that the optimal value of (35) can be viewed as a function of $P_{EH}$, denoted by $v(P_{EH})$. It is evident that for any $P'_{EH} \ge P_{EH}$, we have $v(P'_{EH}) \ge v(P_{EH})$. This is due to the fact that a larger $P_{EH}$ admits a larger feasible region for $\mathbf{p}_{PT}$, $\mathbf{p}_{IT}$, and $\mathbf{p}_J$ in problem (35), as compared to that admitted by a smaller $P_{EH}$ (see (35d)). Therefore, $v(P_{EH})$ is a non-decreasing function of $P_{EH}$. As a result, although $\mathbf{p}_{PT}$ is not directly involved in the objective function (35a), increasing $P_{EH}$ in (35d) can increase the objective value in (35a). Hence, we propose to find the desirable $\mathbf{p}_{PT}$ by maximizing $P_{EH}$. This corresponds to allocating power over the sub-carriers with the highest channel gains, as follows. Sort the sequence $\{|h_{J,n}|^2\}$ in descending order and form a new sequence
# The acme-cofunctor package

[Tags: acme, bsd3, library]

A Cofunctor is a structure from category theory dual to Functor. A Functor is defined by the operation fmap:

fmap :: (a -> b) -> (f a -> f b)

This means that its dual must be defined by the following operation:

cofmap :: (b -> a) -> (f b -> f a)

Since beginning his investigations, the author of this package has discovered that this pattern is at least as commonly used as Functor. In fact, many ubiquitous Haskell types (e.g. [], Maybe, ((->) a)) turn out to have a Cofunctor instance.

## Properties

- Versions: 0.1.0.0, 0.1.1.0
- Change log: CHANGELOG.md
- Dependencies: base (==4.*)
- License: BSD3
- Copyright: 2014 Jasper Van der Jeugt
- Author/Maintainer: Jasper Van der Jeugt
- Category: Acme
- Home page: https://github.com/jaspervdj/acme-cofunctor
- Source repository: head: git clone https://github.com/jaspervdj/acme-cofunctor
- Uploaded: Sat May 13 10:15:09 UTC 2017 by JasperVanDerJeugt
- Distributions: NixOS:0.1.1.0
- Downloads: 441 total (18 in the last 30 days)
- Rating: 2.0 (votes: 2) [estimated by rule of succession]

# acme-cofunctor

A Cofunctor is a structure from category theory dual to Functor. We all know that a Functor is defined by the operation 'fmap':

fmap :: (a -> b) -> (f a -> f b)

This means that its dual must be defined by the following operation:

cofmap :: (b -> a) -> (f b -> f a)

Since beginning his investigations, the author of this package has discovered that this pattern is at least as commonly used as Functor. In fact, many ubiquitous Haskell types (e.g. [], Maybe, ((->) a)) turn out to have a Cofunctor instance.
# Bifurcations from an attracting heteroclinic cycle under periodic forcing ### JOURNAL OF DIFFERENTIAL EQUATIONS #### Article There are few examples of non-autonomous vector fields exhibiting complex dynamics that may be proven analytically. We analyse a family of periodic perturbations of a weakly attracting robust heteroclinic network defined on the two-sphere. We derive the first return map near the heteroclinic cycle for small amplitude of the perturbing term, and we reduce the analysis of the non-autonomous system to that of a two-dimensional map on a cylinder. Interesting dynamical features arise from a discrete-time Bogdanov-Takens bifurcation. When the perturbation strength is small the first return map has an attracting invariant closed curve that is not contractible on the cylinder. Near the centre of frequency locking there are parameter values with bistability: the invariant curve coexists with an attracting fixed point. Increasing the perturbation strength there are periodic solutions that bifurcate into a closed contractible invariant curve and into a region where the dynamics is conjugate to a full shift on two symbols. ### Publication Year of publication: 2020 ### Identifiers ISSN: 0022-0396 Other: 2-s2.0-85082721019
# The integral $\int \frac{J_{d/2}^{2}(x)}{x} \ \mathrm{d}x$

I'd like to know what the explicit solution to the following integral is:

$$\int_{t}^{\infty} \frac{J_{d/2}^{2}(x)}{x} \ \mathrm{d}x,$$

where $t > 0$, $d \in \mathbb{N}$, and $J_{\nu}$ denotes the Bessel function of the first kind. Using Mathematica, I've been able to get some results when $d$ is even. For instance, if we take $d = 2$, then Mathematica returns the expression

$$\frac{1}{2}(J_{0}^{2}(t) + J_{1}^{2}(t)),$$

and when $d = 4$, we obtain a similar expression involving polynomials in $t$ multiplied by some factors of $J_{0}$ and $J_{1}$. However, when $d$ is odd, it returns expressions involving Fresnel integrals. This raises the question: is there an explicit solution to the above integral? If $d$ is even, then the integral solutions (according to Mathematica) look like they can be given by some kind of recurrence relation. Does anyone know what it is?

I managed to find the following formula here:

$$\int \frac{J_{d/2}^2(x)}{x} \ \mathrm{d}x = \frac{x^d}{2^d d\ \Gamma^{2}\left(\frac{d+2}{2}\right)} \ {}_{2}F_{3}\left(\frac{d+1}{2}, \frac{d}{2}; \frac{d+2}{2}, \frac{d+2}{2}, d+1; -x^2 \right)$$

However, Mathematica suggests a closed form is available when $d$ is even, but cannot evaluate those sums. I've also had a look in some integral tables, but they don't appear to have this particular integral for general $d$.

---

I had Mathematica evaluate the following integral; it states

$$I_{ad}=\int_t^\infty \frac{J_{a/2}(x)J_{d/2}(x)}{x}\; dx =\frac{J_{\frac{a}{2}}(t) \left(2 (a-d) J_{\frac{d}{2}}(t)+4 t J_{\frac{d+2}{2}}(t)\right)-4 t J_{\frac{a+2}{2}}(t) J_{\frac{d}{2}}(t)}{(a-d) (a+d)}$$

The denominator $(a-d)$ kills it if $a=d$; however, if we expand this out we have

$$I_{ad} = \underbrace{-\frac{2 d J_{\frac{a}{2}}(t) J_{\frac{d}{2}}(t)}{(a-d) (a+d)}+\frac{2 a J_{\frac{a}{2}}(t) J_{\frac{d}{2}}(t)}{(a-d) (a+d)}}_{A}\underbrace{-\frac{4 t J_{\frac{a+2}{2}}(t) J_{\frac{d}{2}}(t)}{(a-d) (a+d)}+\frac{4 t J_{\frac{a}{2}}(t) J_{\frac{d+2}{2}}(t)}{(a-d) (a+d)}}_{B}$$

We can see that the numerators of $B$ will be the same when $a=d$, as will the denominator. If we take the limit as $a\to d$ of this expression we get

$$I_{dd}= \frac{-t J_{\frac{d}{2}}(t) J^{(1,0)}_{\frac{d}{2}+1}\left(t\right)+t J_{\frac{d}{2}+1}(t) J^{(1,0)}_{\frac{d}{2}}\left(t\right)+J_{\frac{d}{2}}(t){}^2}{d}$$

where $J^{(1,0)}_n(t)$ means the derivative of $J_n(t)$ with respect to $n$. This seems to agree numerically with the original integral for many real values of $t$ and $d$. If we let $d=2$ and simplify, we get

$$I_{22}=\frac{1}{2}(1-J^2_0(t)-J^2_1(t)).$$

(Note, though, that this expression vanishes at $t=0$ and tends to $\frac{1}{2}$ as $t\to\infty$, so it equals $\int_0^t J_1^2(x)/x\,\mathrm{d}x$; its complement $\frac{1}{2}(J_0^2(t)+J_1^2(t))$, quoted in the question, is the tail integral $\int_t^\infty$.)

You can get the series representation of $J^{(1,0)}_m(x)$ by differentiating that of $J_m(x)$:

$$J^{(1,0)}_m(x)=\sum _{l=0}^{\infty } \frac{(-1)^l 2^{-2 l-m} x^{2 l+m} \left(-H_{l+m}+\log \left(\frac{x}{2}\right)+\gamma \right)}{\Gamma (l+1) \Gamma (l+m+1)}$$
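As a sanity check (my own addition, assuming SciPy is available), one can compare the tail integral against both closed forms numerically:

```python
# Numerically evaluate F(t) = integral from t to infinity of J_{d/2}(x)^2/x dx
# for d = 2 and compare with the two closed forms discussed above.
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

t = 1.5  # illustrative value

tail, _ = quad(lambda x: jv(1.0, x) ** 2 / x, t, np.inf, limit=500)
tail_form = 0.5 * (jv(0, t) ** 2 + jv(1, t) ** 2)        # tail integral form
head_form = 0.5 * (1 - jv(0, t) ** 2 - jv(1, t) ** 2)    # complement, 1/2 - tail

print(f"quadrature tail  : {tail:.10f}")
print(f"(J0^2+J1^2)/2    : {tail_form:.10f}")   # should match the tail
print(f"(1-J0^2-J1^2)/2  : {head_form:.10f}")   # should equal 1/2 - tail
```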
There are many tasks in natural language processing that are challenging. This blog entry is on text summarization; it briefly summarizes a survey article on this topic (arXiv:1707.02268). The authors of the article defined the task as follows:

Automatic text summarization is the task of producing a concise and fluent summary while preserving key information content and overall meaning.

There are basically two approaches to this task:

• extractive summarization: identifying important sections of the text, and extracting them; and
• abstractive summarization: producing summary text in a new way.

Most algorithmic methods developed are of the extractive type, while most human writers summarize using the abstractive approach. There are many methods in the extractive approach, such as identifying given keywords, identifying sentences similar to the title, or extracting the text at the beginning of the document.

How do we instruct machines to perform extractive summarization? The authors mention two representations: topic and indicator. In topic representations, frequencies, tf-idf, latent semantic indexing (LSI), or topic models (such as latent Dirichlet allocation, LDA) are used. However, simply extracting sentences with these algorithms may not generate a readable summary; employing knowledge bases or considering contexts (from web search, e-mail conversation threads, scientific articles, author styles, etc.) is useful. In indicator representation, the authors mention graph methods, inspired by PageRank (see this): "Sentences form vertices of the graph and edges between the sentences indicate how similar the two sentences are." The key sentences are then identified with ranking algorithms. Of course, machine learning methods can be used too.

Evaluating the performance of text summarization is difficult. Human evaluation is unavoidable, but with manual approaches, some statistics can be calculated, such as ROUGE.
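As an illustration of the simplest topic-representation idea (my own sketch, not from the survey), the following scores sentences by normalized word frequency and extracts the top-scoring ones:

```python
# A minimal frequency-based extractive summarizer: score each sentence by
# the average frequency of its words, then keep the top-k sentences in
# their original order.
import re
from collections import Counter

def summarize(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    top = max(freq.values())
    scores = []
    for i, s in enumerate(sentences):
        tokens = re.findall(r"[a-z']+", s.lower())
        if tokens:
            scores.append((sum(freq[w] / top for w in tokens) / len(tokens), i))
    chosen = sorted(i for _, i in sorted(scores, reverse=True)[:k])
    return " ".join(sentences[i] for i in chosen)

doc = ("Text summarization produces a short summary of a document. "
       "Extractive methods pick existing sentences. "
       "Abstractive methods write new sentences. "
       "Frequency of words is a simple signal for extractive methods.")
print(summarize(doc, k=2))
```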
On November 21, 2016, the Python package shorttext was published. Until today, more than seven versions have been published. There has been a drastic architecture change, but the overall purpose is still the same, as summarized in the first introduction entry:

This package shorttext was designed to tackle all these problems… It contains the following features:

• example data provided (including subject keywords and NIH RePORT);
• text preprocessing;
• pre-trained word-embedding support;
• gensim topic models (LDA, LSI, Random Projections) and autoencoder;
• topic model representation supported for supervised learning using scikit-learn;
• cosine distance classification; and
• neural network classification (including ConvNet, and C-LSTM).

And since the first version, there have been updates, as summarized in the documentation (News):

## Version 0.3.3 (Apr 19, 2017)

• Deleted CNNEmbedVecClassifier.

## Version 0.3.2 (Mar 28, 2017)

• Bug fixed for gensim model I/O;
• Console scripts update;
• Neural networks up to Keras 2 standard (refer to this).

## Version 0.3.1 (Mar 14, 2017)

• Compact model I/O: all models are in single files;
• Implementation of stacked generalization using logistic regression.

## Version 0.2.1 (Feb 23, 2017)

• Removal of attempts to load the GloVe model, as it can be run using a gensim script;
• Confirmed compatibility of the package with tensorflow;
• Use of spacy for tokenization, instead of nltk;
• Use of the stemming package for the Porter stemmer, instead of nltk;
• Removal of nltk dependencies;
• Simplifying the directory and module structures;
• Module packages updated.

Although there are still additions that I would love to make, they would not change the overall architecture. I may add some more supervised learning algorithms, but under the same network. The upcoming big additions will be generative models or seq2seq models, but I do not see them coming in the short term. I will add corpora. I may add tutorials if I have time. I am thankful that there is probably some external collaboration with other Python packages. Some people have already made some useful contributions. It will be updated if more things are confirmed.

Recently I have been drawn to generative models, such as LDA (latent Dirichlet allocation) and other topic models. In deep learning, there are a few examples, such as FVBN (fully visible belief networks), VAE (variational autoencoder), RBM (restricted Boltzmann machine), etc. Recently I have been reading about GAN (generative adversarial networks), first published by Ian Goodfellow and his colleagues and collaborators. Goodfellow recently published his NIPS 2016 talk on arXiv. Yesterday I attended an event at George Mason University organized by the Data Science DC Meetup Group. Jennifer Sleeman talked about GAN. It was a very good talk.

In GAN, there are two important functions, namely, the discriminator (D) and the generator (G). As a generative model, the distribution of training data, all labeled positive, can be thought of as the distribution that the generator is trained to produce. The discriminator discriminates the data with positive labels from those with negative labels. The generator then tries to generate data, typically from noise, which should be labeled negative, to fool the discriminator into seeing them as positive. This process repeats iteratively, and eventually the generator is trained to produce data that are close to the distribution of the training data, while the discriminator is left classifying the generated data as positive with probability $\frac{1}{2}$. The intuition behind this competitive game comes from the minimax game in game theory. The formal algorithm is described in the original paper.
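The following is a minimal sketch of that adversarial training loop (my own illustration in PyTorch, not the paper's pseudocode): a generator learns to mimic a one-dimensional Gaussian while a discriminator tries to tell real samples from generated ones. All architecture and hyperparameter choices here are assumptions for illustration.

```python
# Minimal GAN on 1-D data: G maps noise to samples, D outputs P(real).
# The two networks are trained in alternation, as in the minimax game above.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # target distribution N(2, 0.5^2)
    fake = G(torch.randn(64, 1))                 # generated samples

    # Discriminator step: push real toward label 1, fake toward label 0
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: make D label the fakes as real
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())     # should drift toward 2.0
```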
The original paper argues that the optimum for the model is reached when the distribution of the generated data is identical to that of the training data, using the Jensen-Shannon (JS) divergence. Ferenc Huszár discussed in his blog the relations between maximum likelihood, Kullback-Leibler (KL) divergence, and Jensen-Shannon (JS) divergence.

I asked the speaker a few questions about the concepts of GAN as well. GAN is not yet a very sophisticated framework, but it has already found a few industrial uses. Some of its descendants include LapGAN (Laplacian GAN) and DCGAN (deep convolutional GAN). Applications include voice generation, image super-resolution, pix2pix (image-to-image translation), text-to-image synthesis, iGAN (interactive GAN), etc.

"Adversarial training is the coolest thing since sliced bread." – Yann LeCun

Recently, gensim, a Python package for topic modeling, released a new version that includes an implementation of author-topic models. The most famous topic model is undoubtedly latent Dirichlet allocation (LDA), as proposed by David Blei and his colleagues. Such a topic model is a generative model, described by a directed graphical model. In the graph, $\alpha$ and $\beta$ are hyperparameters, $\theta$ is the topic distribution of a document, $z$ is the topic for each word in each document, $\phi$ is the word distribution for each topic, and $w$ is the generated word for a place in a document.

There are models similar to LDA, such as correlated topic models (CTM), where $\phi$ is generated by not only $\beta$ but also a covariance matrix $\Sigma$.

There also exists an author model, which is a simpler topic model. The difference is that the words in each document are generated from that document's author; in its graphical model, $x$ is the author of a given word in the document. Combining these two gives the author-topic model as a hybrid. The new release of the Python package gensim supports the author-topic model, as demonstrated in this Jupyter Notebook.

P.S.:

• I am also aware that there is another topic model called the structural topic model (STM), developed for the field of social science. However, there is no Python package supporting this; an R package, called stm, is available for it. You can refer to their homepage too.
• I may consider including the author-topic model and STM in the next release of the Python package shorttext.

There have been a lot of methods for natural language processing and text mining. However, in tweets, surveys, Facebook, and much other online data, texts are short, providing too little data to build rich representations. The traditional bag-of-words (BOW) model gives a sparse vector representation. Semantic relations between words are important, because we usually do not have enough data to capture the similarity between words. We do not want "drive" and "drives," or "driver" and "chauffeur," to be completely different. The relations between words, and their order, become important as well. Or we want to capture concepts that may be correlated in our training dataset. We have to represent these texts in a special way and perform supervised learning with traditional machine learning algorithms or deep learning algorithms.

This package shorttext was designed to tackle all these problems. It is not a completely new invention, but it puts everything known together. It contains the following features:

• example data provided (including subject keywords and NIH RePORT);
• text preprocessing;
• pre-trained word-embedding support;
• gensim topic models (LDA, LSI, Random Projections) and autoencoder;
• topic model representation supported for supervised learning using scikit-learn;
• cosine distance classification; and
• neural network classification (including ConvNet, and C-LSTM).

Readers can refer to the documentation.

There are many learning algorithms that perform classification tasks. However, very often the situation is that one classifier is better on certain data points, but another is better on others. It would be nice if there were ways to combine the best of all these available classifiers.

# Voting

The simplest way of combining classifiers to improve the classification is democracy: voting. When there are n classifiers that output the same classes, the result can simply be cast by a democratic vote. This method works quite well in many problems. Sometimes, we may need to give various weights to different classifiers to improve the performance.
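Here is a quick sketch of (weighted) majority voting with scikit-learn (my own example; the dataset, base classifiers, and weights are illustrative only):

```python
# Weighted majority voting over three different classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100)),
                ("nb", GaussianNB())],
    voting="soft",          # average the predicted class probabilities
    weights=[2, 3, 1],      # illustrative per-classifier weights
)
vote.fit(X_tr, y_tr)
print("voting accuracy:", vote.score(X_te, y_te))
```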
# Bagging and Boosting

Sometimes we can generate many classifiers from the limited amount of data available with bagging and boosting. With bagging and boosting, different classifiers are built with the same learning algorithm but with different datasets. "Bagging builds different versions of the training set by sampling with replacement," and "boosting obtains the different training sets by focusing on the instances that are misclassified by the previously trained classifiers." [Sesmero et al. 2015]

# Fusion

The performance of classifiers depends not only on the learning algorithms and the data, but also on the set of features used. While feature generation itself is a bigger and more important problem (not discussed here), we do have various ways to combine different features. Sometimes we feed separate sets of features to different classifiers whose answers are then combined; sometimes we combine all the features into one classifier. The former is called late fusion, the latter early fusion.

# Stacking

We can also treat the prediction results of various classifiers as features of another classifier. This is called stacking. [Wolpert 1992] "Stacking generates the members of the Stacking ensemble using several learning algorithms and subsequently uses another algorithm to learn how to combine their outputs." [Sesmero et al. 2015] Some recent implementations in computational epidemiology employ stacking as well. [Russ et al. 2016]

# Hidden Topics and Embedding

There is also a special type of feature generation for a classifier: using hidden topics or embeddings as the latent vectors. We can generate a set of latent topics from the available data using latent Dirichlet allocation (LDA) or correlated topic models (CTM), and describe each data point using these topics as the input to another classifier. [Phan et al. 2011] Another way is to represent the data using embedding vectors (such as time-series embeddings, Word2Vec, or LDA2Vec etc.) as the input of another classifier. [Czerny 2015]

Word2Vec has been a hit in the NLP world for a while, as it is a nice method for word embeddings or word representations. Its use of the skip-gram model and deep learning made a big impact too. It has been my favorite toy indeed. However, even though words do correlate across a small segment of text, this is still only local coherence. On the other hand, topic models such as latent Dirichlet allocation (LDA) capture the distribution of words within a topic, and that of topics within a document etc., and they provide a representation of a new document in terms of topics.

In my previous blog entry, I introduced Chris Moody's LDA2Vec algorithm (see: his SlideShare). Unfortunately, not many papers or blogs have covered this new algorithm, despite its potential. The API is not completely documented yet, although you can see examples in the source code on its GitHub. In its documentation, it gives an example of deriving topics from an array of random numbers, in its lda2vec/lda2vec.py code:

import numpy as np
from lda2vec import LDA2Vec

n_words = 10
n_docs = 15
n_hidden = 8
n_topics = 2
n_obs = 300
words = np.random.randint(n_words, size=(n_obs))
_, counts = np.unique(words, return_counts=True)
model = LDA2Vec(n_words, n_hidden, counts)
model.finalize()
doc_ids = np.arange(n_obs) % n_docs
loss = model.fit_partial(words, 1.0, categorical_features=doc_ids)

A more comprehensive example is in examples/twenty_newsgroup/lda.py.

Besides LDA2Vec, there is some related research work on topical word embeddings too.
A group of Australian and American scientists studied topic modeling with pre-trained Word2Vec (or GloVe) embeddings before performing LDA. (See: their paper and code) On the other hand, another group of Chinese and Singaporean scientists performed LDA first, and then trained a Word2Vec model. (See: their paper and code) LDA2Vec concatenates the Word2Vec and LDA representations, like an early fusion.

No matter what, representations with LDA models (or related topic models such as correlated topic models (CTM)) can be useful even outside NLP. I have lately found them useful as an intermediate layer of calculation.

Both LDA (latent Dirichlet allocation) and Word2Vec are important algorithms in natural language processing (NLP). LDA is a widely used topic modeling algorithm, which seeks to find the topic distribution in a corpus, and the corresponding word distributions within each topic, with a Dirichlet prior. Word2Vec is a vector-representation model, trained with a shallow neural network, which seeks a continuous representation for words. They are both very useful, but LDA deals with words and documents globally, while Word2Vec operates locally (depending on adjacent words in the training data). An LDA vector is so sparse that users can interpret the topics easily, but it is inflexible. Word2Vec's representation is not human-interpretable, but it is easy to use.

In his slides, Chris Moody recently devised a topic modeling algorithm, called LDA2Vec, which is a hybrid of the two, to get the best out of both algorithms. Honestly, I have never used this algorithm. I rarely talk about something I haven't tried, but I want to raise awareness so that more people know about it by the time I come to use it. To me, it looks like concatenating two vectors with some hyperparameters, but the source code rejects this claim: it is a topic model algorithm. There are not many blogs or papers talking about LDA2Vec yet. I am looking forward to learning more about it as awareness grows.

Continue reading “LDA2Vec: a hybrid of LDA and Word2Vec”

We have always "sensed" the hot issues of the day (and we still do today). The methods of "sensing," or "detecting," are now more sophisticated, however, as computational technologies have advanced. The methods involved can be collected into a field called "computational journalism."

Recently, there was a blog post by Jeiran about understanding the public impression of Iran using computational methods. She divided the question into temporal and topical perspectives. The temporal perspective is about time-varying patterns in the number of related news articles; the topical perspective is about the distribution of various topics, using latent Dirichlet allocation (LDA) and Bayes' Theorem. The blog post is worth reading.

In February last year, there was a video clip online of Daeil Kim, a data scientist at the New York Times, speaking at an NYC Data Science Meetup. Honestly, I still have not watched it (but I think I should have). His work is also about computational journalism, his own algorithm, and LDA.

Of course, computational journalism is the application of natural language processing and machine learning to news articles… However, as a computational physicist has to know physics, a computational journalist has to know journalism. A data scientist has to be someone who knows the technology and the subject matter.
2-104. If you are traveling at a speed of $50$ miles per hour, how many feet per second is this? Show your work using units and Giant Ones. Review the Math Notes box in Lesson 2.3.1.
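One way to set up the conversion with unit fractions ("Giant Ones"); this worked solution is mine and not part of the original problem page:

$\frac{50\ \text{miles}}{1\ \text{hour}} \times \frac{5280\ \text{feet}}{1\ \text{mile}} \times \frac{1\ \text{hour}}{3600\ \text{seconds}} = \frac{50 \times 5280}{3600}\ \frac{\text{feet}}{\text{second}} \approx 73.3\ \text{feet per second}$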
# Why does the FQHE need a lower-energy state?

There are a lot of papers explaining why Laughlin's wavefunction is energetically favorable, but they seldom explain why a lower-energy state accounts for the plateau at $\nu=1/3$. I have come across claims in several places like: a lower-energy state at $\nu=1/3$ will pin the electron density at $\nu=1/3$. But why is that, and what does it actually mean? When we move $\nu$ away from $1/3$, what happens: do the electrons adjust their spacing, or are new particles added? And is this a phase transition? I hope someone familiar with this field can give me some help, thanks!

The Laughlin state alone doesn't explain the plateau. There is a lot more to the story.

Firstly, at filling factor 1/3 the many-body ground state of the interacting electron gas is "approximately" the Laughlin wavefunction. By this I mean that the overlap between the Laughlin state and the numerically found ground state (for any realistic interaction like the Coulomb interaction) is very large, i.e. their inner product is quite close to 1. Using the plasma analogy one can show that this state corresponds to uniform electron density. (See Girvin's Les Houches notes for details on the plasma analogy.)

Secondly, the transport phenomena are decided by charged excitations in the system. For the filling factors 1/3, 1/5, 1/7, etc. the charged excitations are quasiholes and quasielectrons. While the former has a dip in the density profile at some point Z (say) in the 2D plane, the latter has the opposite feature in its density profile (as opposed to the earlier uniform case). The plasma analogy can again be used to show that these quasiparticles will have fractional e/3 charge in our case. (At least for now let us avoid justifying why they are excited states.)

Now let's say we are sitting exactly at 1/3 filling factor and then we add an electron to the system. It will break into 3 quasielectrons which can be separated at no extra energy cost (the idea of fractionalization). Similarly, if some more electrons are added they will produce more quasiparticles. Now start thinking in terms of the 'semiclassical percolation picture' that is applied to electrons to explain the integer QHE (again, see Girvin's notes). Instead of electrons, we make the same arguments using quasiparticles to explain the plateaus around 1/3 filling factor. The conductivities stop changing when the added quasiparticles are either going into the valleys of the disorder potential or are ending up on shorelines at the two well-separated edges.

Let me clarify things a bit more. Think of starting with the 1/3 filling factor ground state. Now let us adiabatically add one flux quantum through a thin solenoid at the origin of space (see Laughlin's Nobel lecture). He shows that in this process e/3 charge flows towards the origin and gets collected there. Thus we have ended up with an exact ground eigenstate of the original Hamiltonian plus an e/3 charge. So quasiholes are 'charged excitations', not the excited state when sitting at 1/3 filling. In fact the low-energy gapped excitations at 1/3 filling are 'neutral collective excitations' (again, see Girvin's notes), and the existence of this gap is necessary for adiabaticity to work in the above thought experiment. (In the words of Laughlin, the usage of the word quasiparticles here was "unfortunate".) Now if I just move the filling factor a bit in an experiment, the new ground state is made up of new "quasiparticles".

– Hi Akshay, thanks for your detailed answer. The Girvin notes you mention are great; I will read them carefully.
Here are two more questions I hope you can answer: 1. Is the system in the Laughlin ground state only at $v=1/3$, or throughout the region where the $v=1/3$ plateau exists? 2. What is the role of the energy gap in exciting the quasiparticles? Does it block the scattering and thus lead to the zero longitudinal resistance? –  Knightq Mar 15 '13 at 2:19

The Laughlin state is the (approximate) ground state only at 1/3 filling factor. Also, in a loose sense, the gap is connected to the fact that the particles can't hop from the shoreline at one edge to the other, which in turn implies zero longitudinal resistance. –  Akshay Kumar Mar 15 '13 at 15:09

So if the Laughlin ground state holds only at 1/3, why do you still use the properties (quasiparticles) of the Laughlin state when you add particles (i.e. the filling factor moves)? And what is the ground state for $v$ not equal to 1/3? –  Knightq Mar 15 '13 at 21:54

I edited my answer above. I hope it helps. The lectures and notes are definitely a must-read. –  Akshay Kumar Mar 16 '13 at 4:11

I see. Thanks a lot! –  Knightq Mar 17 '13 at 18:50
# Is there a more useful formulation of the frame condition for the McKinsey axiom?

I am looking for a Kripke frame condition corresponding to the McKinsey axiom M: $\Box\Diamond p \rightarrow \Diamond\Box p$. I read somewhere the following condition: "For every partitioning of the set of worlds into two disjoint partitions, every world can see a world whose successors all lie in the same partition." This follows from rewriting the formula as $\Diamond\Box \lnot p \lor \Diamond\Box p$. But it is difficult to use because it involves sets of worlds.

So I am looking for a frame condition for KM of the form $\forall w P(w)$, where $P$ is a predicate expressing visibility between $w$ and other worlds. In the case of M, this condition cannot be first-order, but it could still be second-order. For example:

• The formula $(p\land \Box(\Diamond p \rightarrow p)) \rightarrow \Box p$ corresponds to frames for which $\forall w$, if $wRw'$ then there is a finite sequence $w_0,...,w_n$ such that $w_0=w'$, $w_0Rw_1Rw_2...Rw_nRw$ and also $wRw_i$ for $1\le i \le n$. See the article A Simple Incomplete Extension of T which is the Union of Two Complete Modal Logics with f.m.p. by Roy A. Benton.
• The formula $\Box(\Box p \rightarrow p) \rightarrow \Box p$ (Loeb) corresponds to frames for which $\forall w$ we have $wRw' \land w'Rw'' \rightarrow wRw''$ (transitive) and also there is no infinite sequence of worlds $wRw_1Rw_2R...$ starting from $w$ (converse well-founded). See P. Blackburn, Modal Logic, p. 131; it is also shown there that both the Loeb and the McKinsey formulas do not correspond to a first-order condition.

The above examples are not first-order conditions. But note that they describe their class of frames by stating what an arbitrary world can see, i.e. without using a partition or a valuation. So my question is: is there a similar frame condition known for axiom M? This should correspond to the frames of KM itself, i.e. not in conjunction with other axioms. My hope is that in such a form it would be better suited for analyzing the extensions of KM − any extension, not just K4M.

• How would you formalize the condition that "$P$ is a predicate expressing visibility between $w$ and other worlds" in a way that excludes the condition you gave, if you're allowing $P$ to be second-order? Do you mean that $P$ has access to an extra "reachability" relation, aside from the visibility one, but that no explicit second-order quantification is allowed? May 12 '15 at 17:38
• Let $P(w)$ say that $p$ holds at world $w$, and let $R(w,v)$ say that $v$ is visible from $w$. Then the McKinsey axiom is directly stated as $(\forall w)(\exists v)[R(w,v) \land P(v)] \to (\exists w)(\forall v)[R(w,v) \to P(v)]$. If that is not what you are looking for, can you clarify what you are looking for? May 12 '15 at 17:51
• @Gregory J. Puleo: I am not sure how to formalize this, but I would like something that does not quantify over partitions of worlds. Something like in the examples I gave (there are formulas that correspond to such descriptions.) As I understand it, "$w$ sees $w'$ in one step or in two steps" is first-order, but "in a finite number of steps" is not. The purpose is to get a condition which can be combined more easily with other frame conditions like transitivity, convergence etc. May 12 '15 at 17:54
• @Carl Mummert: I would like something that does not involve the valuations $P(v)$. Only who sees whom, if possible. May 12 '15 at 17:58
• Edited to clarify, added examples.
May 13 '15 at 4:55

I believe the question is answered by the following paper:

• "A Note on Modal Formulae and Relational Properties", J. F. A. K. van Benthem, The Journal of Symbolic Logic, Vol. 40, No. 1 (Mar., 1975), pp. 55-58. DOI: 10.2307/2272270 URL: http://www.jstor.org/stable/2272270

which states:

Theorem 1. There is no first-order formula $\phi$ such that $F \vDash \phi \Leftrightarrow F \vDash \Box\Diamond p \to \Diamond \Box p$ for all $F$.

I found the result cited in:

• "The McKinsey Axiom is not Canonical", Robert Goldblatt, The Journal of Symbolic Logic, Vol. 56, No. 2 (Jun., 1991), pp. 554-562. DOI: 10.2307/2274699 URL: http://www.jstor.org/stable/2274699

Goldblatt attributes the result independently to van Benthem's paper above and to his own paper:

• "First-Order Definability in Modal Logic", R. I. Goldblatt, The Journal of Symbolic Logic, Vol. 40, No. 1 (Mar., 1975), pp. 35-40. DOI: 10.2307/2272267 URL: http://www.jstor.org/stable/2272267

• I understand that the class of frames corresponding to M is not first-order definable. But does this mean there is no frame condition of the form I am asking for? May 13 '15 at 4:48

See:

• Alexander Chagrov & Michael Zakharyaschev, Modal Logic (1997), page 82: A transitive frame $\mathfrak F$ validates the McKinsey formula iff it satisfies the McKinsey condition, where the McKinsey condition is: $\forall x \exists y(xRy \land \forall z(yRz \to y=z))$.

• Yes, this is also called "every world sees a final world" in Hughes & Cresswell, p. 131. But it only works in conjunction with axiom 4, i.e. in K4M, or in S4.M if also reflexive. I am looking for a frame characterization of KM. May 13 '15 at 4:49
# Talk:Generalized normal distribution

## Fat Error

The PDF formula presented for the generalized normal distribution version 1, aka exponential power distribution aka generalized error distribution, is NOT a generalized NORMAL distribution, but most probably a generalized Laplace distribution. The PDF formula presented cannot reproduce the unmodified normal distribution! alpha=1 leads to sd=0.707 (normal distribution: sd=1) and alpha=sqrt(2) leads to density(0)=0.2829 (normal distribution: density(0)=0.3989). In case of doubt, I can provide the R code to reproduce my calculations. — Preceding unsigned comment added by Consuli74 (talk | contribs) 09:15, 29 June 2016 (UTC)

## Merger proposal

This is the same distribution as Generalized normal distribution. While it shouldn't be too hard to merge the two, the main question is which name to use? Any comments? -3mta3 (talk) 12:46, 2 March 2009 (UTC)

One thing that needs thinking about is how to handle other distributions which are also called "generalised normal". For example, documentation for R includes a generalised normal distribution which is not the one referred to here (see e.g. [1]). Neither of these distributions is what is called either the skew-generalised normal or skew-normal distribution. Also I note that the ISI glossary has "Kapteyn's univariate distribution" as an alternative name for a generalised normal (not clear which) [2], but I guess this name should be avoided. Melcombe (talk) 14:31, 2 March 2009 (UTC)

I agree. I added this page recently because I didn't find this distribution (under either name) on the List of probability distributions page. Since "Gaussian distribution" redirects to "Normal distribution", I propose that we merge these two pages under "generalized normal distribution" with a redirect from "generalized Gaussian distribution." Then we can add a comment in the text about the "Kapteyn" name. I don't know what to do about the other generalized normal distribution. Maybe we could have them on the same page with two copies of the probability distribution template. Skbkekas (talk) 16:39, 2 March 2009 (UTC)

On further investigation, the generalized normal distribution referred to in the R documentation cited above ([3]) does not appear to include the normal distribution as a special case (also, the literature reference in the R code is to a 1990 paper of Hosking's, but this paper does not discuss anything like the distribution in the R code, and I didn't find any use of the term "generalized normal" in Hosking's other papers on JSTOR). In that sense, it is "generalized" in the same way that the lognormal, inverse normal, and half-normal distributions are (i.e.
derived from the normal via a transformation). I think we can clarify that "generalized" here means a parametric family that includes the normal distribution as a special case. This includes the skew-normal, which already has a page that we can link to. Skbkekas (talk) 19:22, 2 March 2009 (UTC)

The above was moved from Talk:Generalized Gaussian distribution. Melcombe (talk) 10:34, 3 March 2009 (UTC)

Actually the generalized normal of Hosking does include the normal distribution as a special case, as well as the three-parameter log-normal of both positive and negative skewness. In one sense it might really only be a form of reparameterisation of the three-parameter lognormal distribution, but the point here is that it appears in the literature under the name "generalized normal". Melcombe (talk) 10:46, 3 March 2009 (UTC)

Given that Hosking's version is in most cases just a log-normal, it might be better overall to have a separate article for that parameterisation, calling it something like "three-parameter lognormal" or "shifted lognormal", or even "generalised log-normal". I think putting several different families of distributions in a single article is too confusing. The diagrams are an excellent contribution though. Melcombe (talk) 09:45, 5 March 2009 (UTC)

Melcombe is correct that these are both generalizations of the normal distribution in the same sense. I have included a discussion of both of them on this page, calling them "version 1" and "version 2" for lack of better terms. I don't have any references for "version 2" except the R documentation, so if someone could add the appropriate reference to Hosking's book or paper that would be great. Skbkekas (talk) 15:37, 5 March 2009 (UTC)

I have added a reference for this. However, it seems that by the time of that book they had decided to change the name to just "lognormal distribution" and it appears only under that name in the book. However, it is certainly the same distribution as Hosking originally called the "generalized normal". The earliest ref is "Hosking, J. R. M.: 1986, The theory of probability weighted moments. IBM Research Report, RC12210" but this is not readily accessible. The "generalized normal" terminology has been used by others based on this earlier report: for example, from 1998, http://www.springerlink.com/content/pk6871x147547766/ , and, from 2007, http://linkinghub.elsevier.com/retrieve/pii/S0022169407005069 . Melcombe (talk) 12:34, 6 March 2009 (UTC)

## Multivariate Version

We need a multivariate generalization for this function. Currently, only the univariate version is given (where x and alpha are scalars). I was almost able to figure out the multivariate version (where x is a vector and alpha is a matrix), but I couldn't figure out the scale factor (to ensure unit variance). Almon.David.Ing (talk) 17:22, 17 June 2009 (UTC)

## Version 1 Questions

Does the claim of continuous derivatives under "Parameter Estimation" refer only to \beta? The Laplace distribution (\beta=1) has no derivative in \mu at zero, which contradicts the claim of floor(\beta)=1 continuous derivatives in the text. Similarly, I think that the log-likelihood is infinitely smooth in \alpha. Some clarification would be nice. Also, I don't understand the CDF plot; since all of the exponential power densities as illustrated have \mu=0, the CDF of the \beta=0.5 case should be 0.5 when x=0 (like all of the others)...? Actually, the CDF of the \beta=1 case is wrong too; there must be a bug in the integrating function used for the plots.
I looked at the python code and couldn't find it, unless scipy's gammainc function is buggy, which would be odd. 69.201.131.239 (talk) 04:36, 19 March 2012 (UTC)

The CDF of version 1 seems to have a value other than zero at x = -\inf, which is odd. It works out correctly if the \Gamma(1/\beta) is not present in the denominator of the second term. — Preceding unsigned comment added by 27.251.48.50 (talk) 06:13, 28 March 2012 (UTC)

Ah, okay. Python must use a normalized incomplete gamma. I would redo the graphs, but I have a Mac and right now I don't feel like jumping through the hoops to install numpy and scipy.

The CDF plot is indeed wrong (as pointed out: since the PDFs are even, the CDFs should all equal 0.5 at x = 0); it seems like it just shows the integral of the PDFs as shown in the figure above, i.e. on the interval [-3, 3]. Would anyone who's good with Python be able to fix this? — Preceding unsigned comment added by 151.225.20.72 (talk) 08:19, 25 July 2013 (UTC)

Fixed the CDFs. The plot is visually accurate when simply re-running the original code in a modern Python setup. jugander (t) 00:44, 28 April 2015 (UTC)

## Assessment comment

The comment(s) below were originally left at Talk:Generalized normal distribution/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

This article addresses the univariate function (where x and alpha are univariate). It could be expanded to address the multivariate function (where x is a vector and alpha is expressed as a matrix). I was almost able to derive this, but I am not sure about the scale factor (to ensure unit area). Almon.David.Ing (talk) 17:12, 17 June 2009 (UTC)

Last edited at 17:12, 17 June 2009 (UTC). Substituted at 03:11, 3 May 2016 (UTC)

Hello fellow Wikipedians, I have just modified one external link on Generalized normal distribution. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. Cheers.—InternetArchiveBot 04:27, 9 January 2017 (UTC)

## Version 2 issues

I've been trying to use the PDF that's listed for version 2, but it doesn't seem to work. In order to better understand the PDF I tried to follow the links (reference 11 in particular, since it's online, but the link is broken). I was able to Google the documentation in question, but it does not give details on what the actual function in R does (or what formula it's following).
The details listed for 'version 2' are pretty sparse, and I've spent well over an hour at this point trying to make the function work as listed (and to track down more details on it, short of trying to get the book listed through an interlibrary loan). The PDF listed suggests that it is proportional to Φ(y)/x, where Φ(y) = 1/sqrt(2π) exp(−y²/2). However, it then suggests that y (when κ≠0) is −1/κ… where the negative sign would be counteracted by the square…? The log function (I assume base 10 log?) leads to imaginary values in the exponential, which leads to large oscillations/asymptotes in the final plot. Anyway, I'm not sure what the right answer is, but the Version 2 section could use a bit of clarification. This seemed like the best place to bring the subject up. — Preceding unsigned comment added by Plasma geek (talk | contribs) 16:36, 29 June 2017 (UTC)
Calculus

# Implicit Differentiation - Trigonometric Functions

If $10xy=\cot(xy),$ what is $\displaystyle \frac{dy}{dx}?$

If $7\cos x \sin y=11,$ what is $\displaystyle \frac{dy}{dx}?$

If $\sin (x+3y)=y^{4}\cos x,$ what is $\frac{dy}{dx}?$

Let $y = \sin(6x + 20y)$. The slope of the tangent at the point $(x,y) = (0,0)$ can be expressed as $-\frac{a}{b}$, where $a$ and $b$ are coprime positive integers. What is the value of $a+b$?

If $x^2y^2+x\sin y=13,$ what is $\displaystyle \frac{dy}{dx}?$
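As a sketch of the method (a worked solution of my own, not part of the original problem set), take the first problem. Differentiating both sides of $10xy = \cot(xy)$ with respect to $x$ gives

$10(y + xy') = -\csc^2(xy)\,(y + xy'),$

so $(y + xy')\left(10 + \csc^2(xy)\right) = 0$. Since $10 + \csc^2(xy) > 0$ everywhere, we must have $y + xy' = 0$, giving $\frac{dy}{dx} = -\frac{y}{x}$.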
My ARM assembler woes...

Recommended Posts

Jonny_S: So earlier on I was saying how I didn't mind doing some ARM assembler; I have now changed my mind. I've had to write a sorting program that takes my name and sorts it alphabetically... I started 12 hours ago and I just can't figure out what's wrong. It seems that it's looping through and sorting once, so it sorts the character next to it, but that's it! It's real late and I'm a beaten man; I could literally cry over this now... so I was wondering if anyone could look through the routine and see where it's going wrong?

sortstring
    MOV r0, r1          ; save the contents of r1
    MOV r2, #12         ; the size of the string
    MOV r3, #1          ; the counter for the outer loop
mainloop
    MOV r8, #0          ; variable that holds whether a swap has occurred
    CMP r3, r2          ; see if the loop counter matches the string size
    MOV r9, #0          ; set the inner loop counter
    MOV r10, #12
    SUB r10, r10, r3
sortloop
    CMP r9, r10         ; see if r9 and r10 match
    BGE endloop         ; if r9 >= r10 then exit this loop
    LDRB r4, [r1], #1   ; load a byte (or character) into r4
    LDRB r5, [r1]       ; load the byte next to r4 into r5
    CMP r4, r5
    BLE skipsort        ; if r4 > r5
    STRB r4, [r1], #-1  ; write the updated characters to srcstring
    STRB r5, [r1]
    MOV r8, #1          ; indicate a swap has occurred
skipsort
    ADD r1, r1, #1      ; increment the address that r1 points to
    ADD r9, r9, #1      ; increment the inner loop counter
    B sortloop          ; jump back to the top
endloop
    CMP r8, #0
    BEQ endsort         ; if no swap occurred, finish the program
    ADD r3, r3, #1      ; increase the outer-loop counter
endsort
    MOV r1, r0          ; restore the pointer to the string
    MOV pc, lr          ; return

cdoty replied:

Quote: Original post by Jonny_S
LDRB r4, [r1], #1   ; load a byte (or character) into r4
LDRB r5, [r1]       ; load the byte next to r4 into r5
...
STRB r4, [r1], #-1  ; write the updated characters to srcstring
STRB r5, [r1]

(In my best Homer voice) Mmmm... ARM. In doing a quick look over the code, it would appear that r4 is being moved two spots backwards, and r5 is being put back where it came from. I think you want:

LDRB r4, [r1]       ; load a byte (or character) into r4
LDRB r5, [r1], #1   ; load the byte next to r4 into r5
...
STRB r4, [r1], #1   ; write the updated characters to srcstring
STRB r5, [r1]

[Edited by - cdoty on May 16, 2007 11:13:19 AM]
# Logistic Regression cost optimization function

## In this tutorial, we will learn how to update the learning parameters (gradient descent), using the parameters from forward and backward propagation.

Let's begin with the steps we defined in the previous tutorial and what's left to do:
• Define the model structure (data shape) (done);
• Initialize model parameters;
• Learn the parameters for the model by minimizing the cost:
- Calculate current loss (forward propagation) (done);
- Calculate current gradient (backward propagation) (done);
• Use the learned parameters to make predictions (on the test set);
• Analyse the results and conclude the tutorial.

In the previous tutorial, we defined our model structure and learned to compute a cost function and its gradient. In this tutorial, we will write an optimization function to update the parameters using gradient descent. So we'll write the optimization function that will learn w and b by minimizing the cost function J. For a parameter θ, the update rule is (α is the learning rate):

$\theta = \theta - \alpha\, d\theta$

## The cost function in logistic regression:

One of the reasons we use this cost function for logistic regression is that it's a convex function with a single global optimum. You can imagine rolling a ball down the bowl-shaped function (image below) - it would settle at the bottom. Similarly, to find the minimum of the cost function, we need to get to the lowest point. To do that, we can start from anywhere on the function and iteratively move down in the direction of the steepest slope, adjusting the values of w and b that lead us to the minimum. For this, we use the following two update formulas:

$w = w - \alpha\, dw$
$b = b - \alpha\, db$

In these two equations, the partial derivatives dw and db represent the effect that a change in w and b has on the cost function, respectively. By finding the slope and taking the negative of that slope, we ensure that we always move toward the minimum.

To get a better understanding, let's see this graphically for dw: when the derivative term is positive, we move in the opposite direction, towards a decreasing value of w. When the derivative is negative, we move toward increasing w, thereby ensuring that we're always moving toward the minimum.

The alpha term in front of the partial derivative is called the learning rate and measures how big a step to take at each iteration. The choice of learning rate is an important one: too small, and the model will take very long to find the minimum; too large, and the model might overshoot the minimum and fail to find it.

Gradient descent is the essence of the learning process: through it, the machine learns what values of weights and biases minimize the cost function. It does this by iteratively comparing its predicted output for a set of data to the true output during training.

## Coding the optimization function:

So we will implement the optimization function, but first let's see what its inputs and outputs are:

Arguments:
w - weights, a NumPy array of size (ROWS * COLS * CHANNELS, 1);
b - bias, a scalar;
X - data of size (ROWS * COLS * CHANNELS, number of examples);
Y - true "label" vector (containing 0 if a dog, 1 if cat) of size (1, number of examples);
num_iterations - number of iterations of the optimization loop;
learning_rate - learning rate of the gradient descent update rule;
print_cost - True to print the loss every 100 steps.
Return:
params - a dictionary containing the weights w and bias b;
grads - a dictionary containing the gradients of the weights and bias with respect to the cost function;
costs - list of all the costs computed during the optimization.

Here is the code:

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, Y)
        # retrieve the derivatives from the grads dictionary
        dw = grads["dw"]
        db = grads["db"]
        # update w and b
        w = w - learning_rate*dw
        b = b - learning_rate*db
        # Record the costs
        if i % 100 == 0:
            costs.append(cost)
        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
    # update w and b to dictionary
    params = {"w": w, "b": b}
    # update derivatives to dictionary
    grads = {"dw": dw, "db": db}
    return params, grads, costs

Let's test the above function with variables from our previous tutorial, where we wrote the propagate() function:

params, grads, costs = optimize(w, b, X, Y, num_iterations = 100, learning_rate = 0.009, print_cost = False)
print("w = " + str(params["w"]))
print("b = " + str(params["b"]))
print("dw = " + str(grads["dw"]))
print("db = " + str(grads["db"]))

If everything is fine, you should get as a result:

w = [[-0.49157334] [-0.16017651]]
b = 3.948381664135624
dw = [[ 0.03602232] [-0.02064108]]
db = -0.01897084202791005

## Full tutorial code:

import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import scipy

ROWS = 64
COLS = 64
CHANNELS = 3
TRAIN_DIR = 'Train_data/'
TEST_DIR = 'Test_data/'
#train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)]
#test_images = [TEST_DIR+i for i in os.listdir(TEST_DIR)]

def read_image(file_path):
    # NOTE: the definition line of this helper was lost in the original listing;
    # the name and the imread call are assumed from the surviving resize call
    img = cv2.imread(file_path, cv2.IMREAD_COLOR)
    return cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)

def prepare_data(images):
    m = len(images)
    X = np.zeros((m, ROWS, COLS, CHANNELS), dtype=np.uint8)
    y = np.zeros((1, m))
    for i, image_file in enumerate(images):
        X[i, :] = read_image(image_file)  # assumed: load each image via the helper above
        if 'dog' in image_file.lower():
            y[0, i] = 1
        elif 'cat' in image_file.lower():
            y[0, i] = 0
    return X, y

def sigmoid(z):
    s = 1/(1+np.exp(-z))
    return s

def propagate(w, b, X, Y):
    m = X.shape[1]
    # FORWARD PROPAGATION (FROM X TO COST)
    z = np.dot(w.T, X)+b # tag 1
    A = sigmoid(z) # tag 2
    cost = (-np.sum(Y*np.log(A)+(1-Y)*np.log(1-A)))/m # tag 5
    # BACKWARD PROPAGATION (TO FIND GRAD)
    dw = (np.dot(X,(A-Y).T))/m # tag 6
    db = np.average(A-Y) # tag 7
    cost = np.squeeze(cost)
    grads = {"dw": dw, "db": db}
    return grads, cost

w = np.array([[1.],[2.]])
b = 4.
X = np.array([[5., 6., -7.],[8., 9., -10.]])
Y = np.array([[1,0,1]])
'''
grads, cost = propagate(w, b, X, Y)
print(cost)
train_set_x, train_set_y = prepare_data(train_images)
test_set_x, test_set_y = prepare_data(test_images)
train_set_x_flatten = train_set_x.reshape(train_set_x.shape[0], ROWS*COLS*CHANNELS).T
test_set_x_flatten = test_set_x.reshape(test_set_x.shape[0], -1).T
print("train_set_x shape " + str(train_set_x.shape))
print("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print("train_set_y shape: " + str(train_set_y.shape))
print("test_set_x shape " + str(test_set_x.shape))
print("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print("test_set_y shape: " + str(test_set_y.shape))
train_set_x = train_set_x_flatten/255
test_set_x = test_set_x_flatten/255
'''

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, Y)
        # retrieve the derivatives from the grads dictionary
        dw = grads["dw"]
        db = grads["db"]
        # update w and b
        w = w - learning_rate*dw
        b = b - learning_rate*db
        # Record the costs
        if i % 100 == 0:
            costs.append(cost)
        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
    # update w and b to dictionary
    params = {"w": w, "b": b}
    # update derivatives to dictionary
    grads = {"dw": dw, "db": db}
    return params, grads, costs

params, grads, costs = optimize(w, b, X, Y, num_iterations = 100, learning_rate = 0.009, print_cost = False)
print("w = " + str(params["w"]))
print("b = " + str(params["b"]))
print("dw = " + str(grads["dw"]))
print("db = " + str(grads["db"]))
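The remaining step from the checklist at the top, using the learned parameters to make predictions, is not implemented above. Here is a minimal sketch of my own, assuming the usual convention of thresholding the sigmoid output at 0.5:

def predict(w, b, X):
    # probabilities that each example is a cat (label 1), then threshold at 0.5
    A = sigmoid(np.dot(w.T, X) + b)
    return (A > 0.5).astype(int)

print(predict(params["w"], params["b"], X))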
ITK  4.8.0
Insight Segmentation and Registration Toolkit
Examples/DataRepresentation/Mesh/PointSet1.cxx

/*=========================================================================
 *
 *  Copyright Insight Software Consortium
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *         http://www.apache.org/licenses/LICENSE-2.0.txt
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 *
 *=========================================================================*/

// Software Guide : BeginLatex
//
// The \doxygen{itk::PointSet} is a basic class intended to represent geometry
// in the form of a set of points in $N$-dimensional space. It is the base
// class for the \doxygen{itk::Mesh} providing the methods necessary to
// manipulate sets of points. Points can have values associated with
// them. The type of such values is defined by a template parameter of the
// \code{itk::PointSet} class (i.e., \code{TPixelType}). Two basic
// interaction styles of PointSets are available in ITK. These styles are
// referred to as \emph{static} and \emph{dynamic}. The first style is used
// when the number of points in the set is known in advance and is not
// expected to change as a consequence of the manipulations performed on the
// set. The dynamic style, on the other hand, is intended to support
// insertion and removal of points in an efficient manner. Distinguishing
// between the two styles is meant to facilitate the fine tuning of a
// \code{PointSet}'s behavior while optimizing performance and memory
// management.
//
// \index{itk::PointSet}
// \index{itk::PointSet!Static}
// \index{itk::PointSet!Dynamic}
//
// In order to use the PointSet class, its header file should be included.
//
// Software Guide : EndLatex

// Software Guide : BeginCodeSnippet
#include "itkPointSet.h"
// Software Guide : EndCodeSnippet

int main(int, char *[])
{
  // Software Guide : BeginLatex
  //
  // Then we must decide what type of value to associate with the
  // points. This is generally called the \code{PixelType} in order to make the
  // terminology consistent with the \code{itk::Image}. The PointSet is also
  // templated over the dimension of the space in which the points are
  // represented. The following declaration illustrates a typical
  // instantiation of the PointSet class.
  //
  // \index{itk::PointSet!Instantiation}
  //
  // Software Guide : EndLatex

  // Software Guide : BeginCodeSnippet
  typedef itk::PointSet< unsigned short, 3 > PointSetType;
  // Software Guide : EndCodeSnippet

  // Software Guide : BeginLatex
  //
  // A \code{PointSet} object is created by invoking the \code{New()} method
  // on its type. The resulting object must be assigned to a
  // \code{SmartPointer}. The PointSet is then reference-counted and can be
  // shared by multiple objects. The memory allocated for the PointSet will
  // be released when the number of references to the object is reduced to
  // zero. This simply means that the user does not need to be concerned
  // with invoking the \code{Delete()} method on this class. In fact, the
  // \code{Delete()} method should \textbf{never} be called directly within
  // any of the reference-counted ITK classes.
// // \index{itk::PointSet!New()} // \index{itk::PointSet!Pointer} // // Software Guide : EndLatex // Software Guide : BeginCodeSnippet PointSetType::Pointer pointsSet = PointSetType::New(); // Software Guide : EndCodeSnippet // Software Guide : BeginLatex // // Following the principles of Generic Programming, the \code{PointSet} class has a // set of associated defined types to ensure that interacting objects can be // declared with compatible types. This set of type definitions is commonly known // as a set of \emph{traits}. Among the traits of the \code{PointSet} class is // \code{PointType}, which is used by the point set to represent points in space. // The following declaration takes the point type as defined in the \code{PointSet} // traits and renames it to be conveniently used in the global namespace. // // \index{itk::PointSet!PointType} // // Software Guide : EndLatex // Software Guide : BeginCodeSnippet typedef PointSetType::PointType PointType; // Software Guide : EndCodeSnippet // Software Guide : BeginLatex // // The \code{PointType} can now be used to declare point objects to be // inserted in the \code{PointSet}. Points are fairly small objects, so // it is inconvenient to manage them with reference counting and smart // pointers. They are simply instantiated as typical C++ classes. The Point // class inherits the \code{[]} operator from the \code{itk::Array} class. // This makes it possible to access its components using index notation. For // efficiency's sake no bounds checking is performed during index access. It is // the user's responsibility to ensure that the index used is in the range // $\{0,Dimension-1\}$. Each of the components in the point is associated // with space coordinates. The following code illustrates how to instantiate // a point and initialize its components. // // Software Guide : EndLatex // Software Guide : BeginCodeSnippet PointType p0; p0[0] = -1.0; // x coordinate p0[1] = -1.0; // y coordinate p0[2] = 0.0; // z coordinate // Software Guide : EndCodeSnippet PointType p1; p1[0] = 1.0; // Point 1 = { 1,-1,0 } p1[1] = -1.0; p1[2] = 0.0; PointType p2; // Point 2 = { 1,1,0 } p2[0] = 1.0; p2[1] = 1.0; p2[2] = 0.0; // Software Guide : BeginLatex // // Points are inserted in the PointSet by using the \code{SetPoint()} method. // This method requires the user to provide a unique identifier for the // point. The identifier is typically an unsigned integer that will enumerate // the points as they are being inserted. The following code shows how three // points are inserted into the PointSet. // // \index{itk::PointSet!SetPoint()} // // Software Guide : EndLatex // Software Guide : BeginCodeSnippet pointsSet->SetPoint( 0, p0 ); pointsSet->SetPoint( 1, p1 ); pointsSet->SetPoint( 2, p2 ); // Software Guide : EndCodeSnippet // Software Guide : BeginLatex // // It is possible to query the PointSet in order to determine how many points // have been inserted into it. This is done with the \code{GetNumberOfPoints()} // method as illustrated below. // // \index{itk::PointSet!GetNumberOfPoints()} // // Software Guide : EndLatex // Software Guide : BeginCodeSnippet const unsigned int numberOfPoints = pointsSet->GetNumberOfPoints(); std::cout << numberOfPoints << std::endl; // Software Guide : EndCodeSnippet // Software Guide : BeginLatex // // Points can be read from the PointSet by using the \code{GetPoint()} method // and the integer identifier. The point is stored in a pointer provided by // the user. 
If the identifier provided does not match an // existing point, the method will return \code{false} and the contents of the // point will be invalid. The following code illustrates point access // using defensive programming. // // \index{itk::PointSet!GetPoint()} // // Software Guide : EndLatex // Software Guide : BeginCodeSnippet PointType pp; bool pointExists = pointsSet->GetPoint( 1, & pp ); if( pointExists ) { std::cout << "Point is = " << pp << std::endl; } // Software Guide : EndCodeSnippet // Software Guide : BeginLatex // // \code{GetPoint()} and \code{SetPoint()} are not the most efficient methods // to access points in the PointSet. It is preferable to get direct access // to the internal point container defined by the \emph{traits} and use // iterators to walk sequentially over the list of points (as shown in // the following example). // // Software Guide : EndLatex return EXIT_SUCCESS; }
# Camera pose to world coordinate transformation

I have managed to obtain the camera_pose for ORB-SLAM by following their code. If I run

$ rosrun tf tf_echo /world /camera_pose

I get:

Translation: [-0.8, 0.66, -0.04]
Rotation: in Quaternion [0.2, -0.3, -0.2, 0.2]
          in RPY (radian) [0.071, -0.032, -0.34]
          in RPY (degree) [0.4, -25.09, -3.2]

Does this mean these are the translation and rotation of camera_pose relative to the world coordinates? How do I retrieve the estimate of camera_pose in global coordinates (I want x, y, z, yaw, pitch, roll)?

If you need to transform a quaternion to Euler angles, you just need to use this function:

import math

def quaternion_to_euler(q):
    (x, y, z, w) = (q[0], q[1], q[2], q[3])
    t0 = +2.0 * (w * x + y * z)
    t1 = +1.0 - 2.0 * (x * x + y * y)
    roll = math.atan2(t0, t1)
    t2 = +2.0 * (w * y - z * x)
    t2 = +1.0 if t2 > +1.0 else t2
    t2 = -1.0 if t2 < -1.0 else t2
    pitch = math.asin(t2)
    t3 = +2.0 * (w * z + x * y)
    t4 = +1.0 - 2.0 * (y * y + z * z)
    yaw = math.atan2(t3, t4)
    return [yaw, pitch, roll]

I think the translations are already in global coordinates.
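A hypothetical usage with the quaternion printed above (a sketch of my own; the quaternion is normalized first, which the conversion assumes):

import math

q = [0.2, -0.3, -0.2, 0.2]
norm = math.sqrt(sum(c * c for c in q))
q = [c / norm for c in q]                  # normalize before converting
yaw, pitch, roll = quaternion_to_euler(q)  # angles in radians
print(yaw, pitch, roll)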
# Limited Entropy Dot Com
Not so random thoughts on security, by Eloi Sanfèlix

18 Apr 2010

## Crypto Series: Introduction to the RSA algorithm

After seeing how the ElGamal system works, today we are going to take a look at the RSA public key cryptosystem. The RSA algorithm was first published by Rivest, Shamir and Adleman in 1978 and is probably the most used crypto algorithm today. Despite this fact, the algorithm seems to have been invented by Clifford Cocks, a British mathematician who worked for a UK intelligence agency. Since this work was never published due to its top-secret classification, the algorithm received its name from Rivest, Shamir and Adleman, who were the first to discuss it publicly. A document declassified in 1997 revealed that Clifford Cocks had actually described an equivalent system in 1973.

Let me remind you once again that these posts are not intended to be 100% accurate in a mathematical sense, but an introduction for people who don't know much about cryptography. If you want more accurate and complete descriptions, take a crypto book such as the Handbook of Applied Cryptography I've linked in most of my posts :).

Setting up the RSA algorithm

The RSA algorithm is based on the assumption that integer factorization is a difficult problem. This means that given a large value n, it is difficult to find the prime factors that make up n. Based on this assumption, when Alice and Bob want to use RSA for their communications, each of them generates a big number n which is the product of two primes p, q of approximately the same length:

$n = p\cdot q$

Next, they choose their public exponent e. Typical values for e include 3 (which is not recommended!) and $2^{16}+1$ (65537). From e, they compute their private exponent d so that:

$e \cdot d \equiv 1 \pmod{\varphi(n)}$

where $\varphi(n)$ is Euler's totient of n. This is a mathematical function equal to the number of integers smaller than n which are coprime to n, i.e. which do not have any common factor with n. If n is a prime p, then its totient is p-1, since all numbers below p are coprime to p. In the case of the RSA setup, n is the product of two primes, and $\varphi(n) = (p-1)(q-1)$, because only the multiples of p and q share a factor with n. In practice one can also work modulo $\operatorname{lcm}(p-1, q-1)$ (the Carmichael function of n, which divides $\varphi(n)$), as the sage example below does.

Once our two parties have their respective public and private exponents, they can share the public exponents and the modulus they computed.

Encryption with RSA

Once the public key (i.e. e and n) of the receiving end of the communication is known, the sending party can encrypt messages like this:

$c = m^e \pmod{n}$

When this message is received, it can be decrypted using the private key and a modular exponentiation as well:

$m^{\prime} = c^d \pmod{n} = m$

Example

sage: p=random_prime(10000)
sage: q=random_prime(10000)
sage: n=p*q
sage: p,q,n
(883, 2749, 2427367)
sage: e=17
sage: G=IntegerModRing(lcm(p-1,q-1))
sage: d = G(e)^-1
sage: G(d)*G(e)
1
sage: m=1337
sage: G2=IntegerModRing(n)
sage: c=G2(m)^e
sage: c
1035365
sage: m_prime=G2(c)^d
sage: m_prime
1337

In the commands above, I first create two random primes below 10000 and compute n. Then I create an IntegerModRing object to compute things modulo lcm(p-1,q-1), and compute the private exponent as the inverse of the public exponent in that ring. Next, I create a new ring modulo n. Then I can use the public exponent to encrypt a message m and the private exponent to decipher the cryptotext c... and it works!
Correctness of RSA encryption/decryption

We have seen that it works with our previous example, but that doesn't prove it always works; I could have chosen the numbers of my example carefully to make them work. Euler's theorem tells us that given a number n and another number a coprime to n, the following is true:

$a^{\varphi(n)} \equiv 1 \pmod{n}$

Therefore, since $e\cdot d \equiv 1 \pmod{\varphi(n)}$, the encryption and decryption process works fine for any message m coprime to n. However, for values of m that share a factor with n, we need more advanced maths to prove correctness. Another way to prove it is to use Fermat's little theorem and the Chinese Remainder Theorem. I will explain these theorems in my next post and then provide a complete proof based on them.

RSA for signing

In the case of RSA, digital signatures can be easily computed by just using d instead of e. So, for an RSA signature one would take a message m and compute its hash H(m). Then, one would compute the signature s as:

$s = (H(m))^d \pmod{n}$

For verifying the signature, the receiving end would compute the message hash H(m) and compare it to the hash recovered from the signature:

$H^{\prime}(m) = s^e \pmod{n} \equiv (H(m)^d)^e \pmod{n} \equiv H(m) \pmod{n}$

Therefore, if the hash computed over the received message matches the one recovered from the signature, the message has not been altered and comes from the claimed sender.

Security of RSA

In order to completely break RSA, one would have to factor n into its two prime factors, p and q. Otherwise, computing d from e would be hard because (p-1) and (q-1) are not known, and n is a large number (which means that computing its totient is also difficult). In a few posts I will show an algorithm to solve the factorization problem. However, another way to break RSA-encrypted messages would be to solve a discrete logarithm. Indeed, since $c=m^e \pmod{n}$, if one could solve the discrete logarithm of c modulo n, the message would be recovered. Luckily, we already know that discrete logs are not easy to compute. And in this case, solving one does not break the whole system, but just one message.
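As a toy illustration of the signing and verification equations above (a sketch of my own, reusing the small parameters from the sage example; pow(x, e, n) is Python's built-in modular exponentiation, and pow(e, -1, m) computes a modular inverse on Python 3.8+):

import math

p, q = 883, 2749
n = p * q
e = 17
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
d = pow(e, -1, lam)                                # private exponent

h = 1234                  # stand-in for a hash value H(m), reduced mod n
s = pow(h, d, n)          # sign: s = H(m)^d mod n
assert pow(s, e, n) == h  # verify: s^e mod n recovers H(m)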
Category:Fresnel Sine Integral Function The Fresnel sine integral function is the real function $\R \to \R$ defined by: $\displaystyle \map {\operatorname S} x = \sqrt {\frac 2 \pi} \int_0^x \sin u^2 \rd u$
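For a numerical sanity check of this definition (a sketch of my own): SciPy's scipy.special.fresnel uses the normalization $\int_0^z \sin(\pi t^2/2) \, \mathrm d t$; substituting $u = t\sqrt{\pi/2}$ shows that the function defined above equals the SciPy Fresnel sine integral evaluated at $x\sqrt{2/\pi}$:

import numpy as np
from scipy.special import fresnel
from scipy.integrate import quad

def S(x):
    # the definition above, evaluated via scipy's Fresnel integral
    s, _ = fresnel(x * np.sqrt(2 / np.pi))
    return s

x = 1.5
# direct numerical integration of the definition, for comparison
direct = np.sqrt(2 / np.pi) * quad(lambda u: np.sin(u**2), 0, x)[0]
print(S(x), direct)  # the two values should agree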
Big Black Book
bad ass book of doom? not quite yet...

Sunday, March 28, 2010

Did u know?

Just so u know...

Mathematics: The probability in a game of bridge of all four players getting a single-suit hand is approximately 4.47 × 10^−28.
Mathematics: The probability in a game of bridge of one player getting a single-suit hand is approximately 2.52 × 10^−11 (0.00000000252%).
Mathematics: The probability of rolling snake eyes 10 times in a row on a pair of fair dice is about 2.74 × 10^−16.
Mathematics — Lottery: The odds of winning the Grand Prize (matching all 6 numbers) in the US Powerball lottery, with a single ticket, under the rules as of August 2009, are 195,249,053 to 1 against, for a probability of 5.12 × 10^−9 (0.000000512%).
Mathematics — Lottery: The odds of winning any prize in the US Powerball Multistate Lottery, with a single ticket, under the rules as of 2006, are 36.61 to 1 against, for a probability of 0.027 (2.7%).
Mathematics — Lottery: The odds of winning the Jackpot (matching the 6 main numbers) in the UK National Lottery, with a single ticket, under the rules as of August 2009, are 13,983,815 to 1 against, for a probability of 7.15 × 10^−8 (0.00000715%).
Mathematics — Lottery: The odds of winning any prize in the UK National Lottery, with a single ticket, under the rules as of 2003, are 54 to 1 against, for a probability of about 0.018 (1.8%).
Mathematics — Poker: The odds of being dealt a royal flush in poker are 649,739 to 1 against, for a probability of 1.5 × 10^−6 (0.00015%).
Mathematics — Poker: The odds of being dealt a straight flush (other than a royal flush) in poker are 72,192 to 1 against, for a probability of 1.4 × 10^−5 (0.0014%).
Mathematics — Poker: The odds of being dealt a four of a kind in poker are 4,164 to 1 against, for a probability of 2.4 × 10^−4 (0.024%).
Mathematics — Poker: The odds of being dealt a full house in poker are 693 to 1 against, for a probability of 1.4 × 10^−3 (0.14%).
Mathematics — Poker: The odds of being dealt a flush in poker are 507.8 to 1 against, for a probability of 1.9 × 10^−3 (0.19%).
Mathematics — Poker: The odds of being dealt a straight in poker are 253.8 to 1 against, for a probability of 4 × 10^−3 (0.39%).
Mathematics — Poker: The odds of being dealt a three of a kind in poker are 46 to 1 against, for a probability of 0.021 (2.1%).
Mathematics — Poker: The odds of being dealt two pair in poker are 20 to 1 against, for a probability of 0.048 (4.8%).
Mathematics — Poker: The odds of being dealt only one pair in poker are about 5 to 2 against (2.37 to 1), for a probability of 0.42 (42%).
Mathematics — Poker: The odds of being dealt no pair in poker are nearly 1 to 2, for a probability of about 0.5 (50%).
Mathematics — Poker: The number of unique combinations of hands and shared cards in a 10-player game of Texas Hold'em is approximately 2.117 × 10^28.
Mathematics — Playing cards: There are 2,598,960 different 5-card poker hands that can be dealt from a standard 52-card deck.
Mathematics — Cards: 52! = 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 (≈ 8 × 10^67), the number of ways to order the cards in a 52-card deck.
Mathematics: The number system understood by most computers, the binary system, uses 2 digits: 0 and 1.
Mathematics: The hexadecimal system, a common number system used in computer programming, uses 16 digits, where the last 6 are usually represented by letters: 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F.
Mathematics: π ≈ 3.141592653589793, the ratio of a circle's circumference to its diameter.
Mathematics — Known digits of π: As of 2002, the number of known digits of π was 1,241,100,000,000 (1.2411 × 10^12).
Mathematics: 7,625,597,484,987 is a number that often appears when dealing with powers of 3. It can be expressed as 19683^3, 27^9, 3^27, $3^{3^3}$ and ${}^{3}3$ or, when using Knuth's up-arrow notation, as $3 \uparrow\uparrow 3$ and $3 \uparrow\uparrow\uparrow 2$.
Mathematics — NCAA Basketball Tournament: There are 9,223,372,036,854,775,808 (2^63) possible ways to enter the bracket.
Mathematics — Rubik's Cube: There are 43,252,003,274,489,856,000 (about 43 × 10^18) different positions of a 3x3x3 Rubik's Cube.
Mathematics: There are 7,401,196,841,564,901,869,874,093,974,498,574,336,000,000,000 (≈ 7.401 × 10^45) possible permutations of the Rubik's Revenge (4x4x4 Rubik's Cube).
Mathematics: There are 282 870 942 277 741 856 536 180 333 107 150 328 293 127 731 985 672 134 721 536 000 000 000 000 000 (2.8287 × 10^74) possible permutations of the Professor's Cube (5x5x5 Rubik's Cube).
Mathematics: There are 157 152 858 401 024 063 281 013 959 519 483 771 508 510 790 313 968 742 344 694 684 829 502 629 887 168 573 442 107 637 760 000 000 000 000 000 000 000 000 (1.5715 × 10^116) distinguishable permutations of the V-Cube 6 (6x6x6 Rubik's Cube).
Mathematics: There are 19 500 551 183 731 307 835 329 126 754 019 748 794 904 992 692 043 434 567 152 132 912 323 232 706 135 469 180 065 278 712 755 853 360 682 328 551 719 137 311 299 993 600 000 000 000 000 000 000 000 000 000 000 000 (1.9501 × 10^160) distinguishable permutations of the V-Cube 7 (7x7x7 Rubik's Cube).
Mathematics — Sudoku: There are 6,670,903,752,021,072,936,960 (≈ 6.7 × 10^21) 9×9 sudoku grids.
Mathematics: 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 = 10^100, a googol.
Mathematics: 10^googol ($10^{10^{100}}$), a googolplex.
Mathematics: 2^43,112,608 × (2^43,112,609 − 1) is a 25,956,377-digit perfect number, the largest known as of 2009.
Mathematics: Graham's number, the last ten digits of which are ...2464195387. It arises as an upper-bound solution to a problem in Ramsey theory and is probably the largest number seriously used in a mathematical proof. Representation in powers of 10 would be impractical (the number of digits in the exponent far exceeds the number of particles in the observable universe).
Computing: There are 128 characters in the ASCII character set.
Computing — Computational limit of a 32-bit CPU: 2,147,483,647 is equal to 2^31 − 1, and as such is the largest number which can fit into a signed (two's complement) 32-bit integer on a computer, thus marking the upper computational limit of a 32-bit CPU such as Intel's Pentium-class computer chips.
Computing — IPv4: 4,294,967,296 (2^32) possible unique IP addresses.
Computing — Web pages: approximately 8 × 10^9 web pages indexed by Google as of 2004.
Computing — Manufacturing: An estimated 6 × 10^18 transistors were produced worldwide in 2008.
BioMed: The DNA of the simplest viruses has some 5,000 base pairs.
BioMed: Each neuron in the human brain is estimated to connect to 10,000 others.
BioMed: Each human being is estimated to have 30,000 to 40,000 genes.
BioMed — Strands of hair on a head: The average human head has about 100,000–150,000 strands of hair.
BioMed — Species: The World Resources Institute claims that approximately 1.4 million species have been named, out of an unknown number of total species (estimates range between 2 and 100 million species).
BioMed — Base pairs in the genome: approximately 3×10^9 base pairs in the human genome.
BioMed — Bacteria in the human body: there are roughly 10^10 bacteria in the human oral cavity.
BioMed — Neurons in the brain: approximately 10^11 neurons in the human brain.
BioMed — Bacteria on the human body: the surface of the human body houses roughly 10^12 bacteria.
BioMed — Cells in the human body: the human body consists of roughly 10^14 cells, of which only 10^13 are human. The remainder of the cells are bacteria, which mostly reside in the gastrointestinal tract, although the skin is also covered in bacteria.
BioMed — Insects: 200,000,000,000,000 (2×10^14) - the estimated number of ants on Earth.
BioMed — Atoms in the human body: the average human body contains roughly 7×10^27 atoms.
BioMed: 10^30, the number of bacterial cells on Earth.
Language: There are about 6,500 mutually unintelligible languages and dialects.
Language: There are 20,000–40,000 distinct Chinese characters, depending on how one counts them.
Language: 267,000 words in James Joyce's Ulysses.
Language — English words: The New Oxford Dictionary of English contains about 350,000 definitions for English words.
Records: As of July 2004, the largest number of decimal places of π recited from memory was more than 42,000.
Info — Web sites: as of 26 February 2010, Wikipedia contains approximately 3,206,530 articles in the English language.
Info — Books: The British Library claims that it holds over 150 million items. The Library of Congress claims that it holds approximately 119 million items.
Genocide: Approximately 6,000,000 Jews were killed in the Holocaust.
Demographics: approx. 402,000,000 native speakers of English.
Demographics — World population: 6,587,890,000 - estimated total mid-year population for the world in 2007.
Physical cosmology — Age of the universe: Current theory and observations suggest that approximately 1.4×10^10 years have passed since the Big Bang.
Cosmology: 1×10^63 is Archimedes' estimate in The Sand Reckoner of the total number of grains of sand that could fit into the entire cosmos, the diameter of which he estimated to be what we call 2 light years.
Astronomy — Stars in our galaxy: approximately 4×10^11 stars in the Milky Way galaxy.
Marine biology: 3,500,000,000,000 (3.5×10^12) - estimated population of fish in the ocean.
Economics: Hyperinflation in Zimbabwe was estimated in February 2009 by some economists at 10 sextillion percent, or a factor of 10^20.
Geo — Grains of sand: all the world's beaches put together have been estimated to hold roughly 10^21 grains of sand.
Chemistry: there are roughly 6.022×10^23 molecules in one mole of any substance (Avogadro's number).
Chess: 1×10^50 is an estimate of the number of legal chess positions.
Chess: Shannon number, 10^120, an estimation of the game-tree complexity of chess.
Board games: 4.8231×10^115, the number of ways to arrange the tiles in English Scrabble (100! / 9! / 2! / 2! / 4! / 12! / 2! / 3! / 2! / 9! / 1! / 1! / 4! / 2! / 6! / 8! / 2! / 1! / 6! / 4! / 6! / 4! / 2! / 2! / 1! / 2! / 1! / 2!).

Taken from Wikipedia (a few of these figures are re-derived in the script below).
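Several of the figures in this list are easy to re-derive. A quick sanity-check script in R; the commented values restate the list's claims, and doubles (about 16 significant digits) are only good for order-of-magnitude confirmation:

```r
# Poker: counts out of choose(52, 5) = 2,598,960 possible five-card hands
hands <- choose(52, 5)
c(royal_flush    = 4 / hands,                     # ~1.5e-6, one per suit
  straight_flush = 36 / hands,                    # ~1.4e-5, 40 minus the 4 royals
  four_of_a_kind = 13 * 48 / hands,               # ~2.4e-4, quad rank times kicker
  full_house     = 13 * choose(4, 3) * 12 * choose(4, 2) / hands)  # ~1.4e-3

factorial(52)    # ~8.066e67 orderings of a 52-card deck
2^63             # ~9.223e18 possible NCAA brackets
2^31 - 1         # 2147483647, the signed 32-bit limit
2^32             # 4294967296 IPv4 addresses

# Digit count of the perfect number 2^43112608 * (2^43112609 - 1)
floor((43112608 + 43112609) * log10(2)) + 1       # 25956377

# Scrabble arrangements: 100! over the product of the tile-count factorials
tiles <- c(9,2,2,4,12,2,3,2,9,1,1,4,2,6,8,2,1,6,4,6,4,2,2,1,2,1,2)
factorial(100) / prod(factorial(tiles))           # ~4.8231e115
```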
# Recent content by BustedBreaks

1. ### Prove not integrable. Is this correct?
Well, what I was trying to say was that the upper sum cannot equal 0, because for any interval [a,b] M will not be 0: there is always an irrational greater than 0 no matter how small the partition [X0, X1] gets. Because M will never be 0, the inf of the Upper Darboux Sums will never be 0...

2. ### Prove not integrable. Is this correct?
Let f:[0,1] -> R be defined as f(x) = 0 for x rational, f(x) = x for x irrational. Show f is not integrable. m = inf(f(x) on [Xi-1, Xi]), M = sup(f(x) on [Xi-1, Xi]). Okay, so my argument goes like this: I need to show that the upper integral of f does not equal the lower integral of f. Because...

3. ### Advanced Calc. Continuity problem
^ If it does, I can't see it. I feel like I need to find an N in terms of ε to show that this is continuous or something.

4. ### Advanced Calc. Continuity problem
So I've been trying to figure this out. The question is: If the limit as n->infinity of Xn is Xo, show that, by definition, the limit as n->infinity of sqrt(Xn) is sqrt(Xo). I'm pretty sure I need to use the epsilon definition. I worked on it with someone else and we think that what we have to show is the...

5. ### New at Mathematica, Need some help
Hi, I have two equations for which I have used the Solve function in Mathematica to solve for A. What I am having trouble with is trying to equate the results and solve for another variable J automatically. Basically this is what I want to do: Solve[Eqn1, A] It gives {{A ->...

6. ### Direct Sum of Rings
This may be a dumb question, but I just want to make sure I understand this correctly. For R_{1}, R_{2}, ..., R_{n}, $R_{1} \oplus R_{2} \oplus \cdots \oplus R_{n} = \{(a_{1},a_{2},\dots,a_{n}) \mid a_{i} \in R_{i}\}$. Does this mean that a ring which is a direct sum of other rings is composed of specific elements...

7. ### First order PDE help
I'm trying to solve this equation: Ux + Uy + U = e^-(x+y) with the initial condition that U(x,0)=0. I played around and quickly found that U = -e^-(x+y) solves the equation, but does not hold for the initial condition. For the initial condition to hold, I think there needs to be some...

8. ### Wave equation with Neumann BCs
The problem statement is: Solve the Neumann problem for the wave equation on the half line 0<x<infinity. Here is what I have: U_{tt}=c^{2}U_{xx}, initial conditions U(x,0)=\phi(x), U_{t}(x,0)=\psi(x), Neumann BC U_{x}(0,t)=0. So I extend \phi(x) and \psi(x) evenly and get...

9. ### Solve this PDE 2Ux-Uy+5U=10 with U(x,0)=0
I'm following an algorithm my teacher gave us and I'm trying to understand it... I'm trying to solve this PDE 2Ux-Uy+5U=10 with U(x,0)=0. First I need to solve the homogeneous equation. So I set up the relation V(y)=U(2y+c, y) to solve 2Ux-Uy=0, where the characteristic equation is y = x/2...

10. ### Deriving a heat equation
Consider heat flow in a long circular cylinder where the temperature depends only on t and on the distance r to the axis of the cylinder. Here r=\sqrt{x^{2}+y^{2}} is the cylindrical coordinate. From the three dimensional heat equation derive the equation u_{t}=k(u_{rr}+\frac{u_{r}}{r}). My...

11. ### Cosets of a subset of S_3
Okay, so I get your method here and I am trying to apply it to this one, (23)(13), but I am not getting the answer the book has, which is (123). I set it up like this: (23) maps 123 to 132, and (13) maps 123 to 321. Then 1 goes to 3, then 3 goes to 2; 2 goes to 2, then 2 goes to 3; 3 goes to 1, and 1 goes...

12.
### Cosets of a subset of S_3
I am having trouble understanding this example: Let G=S_3 and H={(1),(13)}. Then the left cosets of H in G are (1)H=H and (12)H={(12), (12)(13)}={(12),(132)}=(132)H. I cannot figure out how to produce this relation: (12)H={(12), (12)(13)}={(12),(132)}=(132)H. I understand (12)H={(12), (12)(13)}... (a computational sketch of these compositions follows this list)

13. ### Question of integration
To give a bit more context: I was trying to solve the partial differential equation 3U_{y}+U_{xy}=0 with the hint, let V=U_y. Substituting, we have 3V+V_{x}=0, then -3=\frac{V_{x}}{V}. I didn't really know how to continue from here, so I just played around and figured out that V=e^{-3x} and...

14. ### Question of integration
So in doing a homework problem I have convinced myself that \int\frac{V_{x}}{V}\,dx=\ln(V), which I vaguely remember learning in class, but I'm having trouble deriving it. Can someone help me out?

15. ### Show functions of this form are a vector space etc
I see what you mean by the difference in functions; however, I feel like they would have written it the way you did, {1, sin2x, cos2x}, if that's what they meant? The way I see it, the function in the question represents all functions of that form, which is why they have function plural...
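For the two S_3 items above, the usual sticking point is the composition convention: in the textbook's notation the right factor acts first. A minimal sketch in R (the names and representation are mine), storing a permutation as an image vector p where p[i] is the image of i:

```r
# Permutations on {1,2,3} as image vectors: p[i] is the image of i.
id  <- c(1, 2, 3)
p12 <- c(2, 1, 3)   # (12)
p13 <- c(3, 2, 1)   # (13)
p23 <- c(1, 3, 2)   # (23)

# Textbook convention: in a product (f g), g acts first, so (f o g)(i) = f(g(i)).
compose <- function(f, g) f[g]

compose(p23, p13)   # 2 3 1, the cycle (123): the book's answer for (23)(13)
compose(p12, p13)   # 3 1 2, the cycle (132), so (12)(13) = (132)

# Left coset (12)H for H = {(1), (13)}:
H <- list(id, p13)
lapply(H, function(h) compose(p12, h))   # {(12), (132)}, i.e. (12)H = (132)H
```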
# A concrete realization of the nontrivial 2-sphere bundle over the 5-sphere?

Since $\pi_4 (PU(2)) = \pi_4 (SO(3)) = {\mathbb Z}_2$, the two-element group, we know that half of the two-sphere bundles over the 5-sphere $S^5$ are trivial and the other half are non-trivial and all isomorphic. Can you write an explicit concrete realization of this non-trivial bundle? I have in mind something along the lines of the Hirzebruch surface (see e.g. http://en.wikipedia.org/wiki/Hirzebruch_surface) $\Sigma_1$ (or $\Sigma_k$, $k$ odd) which realizes the unique topologically non-trivial $S^2$ bundle over $S^2$. (Since $\pi_1 (SO(3)) = {\mathbb Z}_2$, again 'half' the two-sphere bundles over the 2-sphere $S^2$ are trivial and the other half are non-trivial and all isomorphic.)

• I'm trying to pull one back from $S^5/S^1 = \mathbb{CP}^2$, but it seems like the nontrivial bundle there comes from $\pi_3(Diff(S^2))$, not from $\pi_4$, grr. – Allen Knutson Aug 21 '13 at 4:22

Is this concrete enough? Recall that $\mathrm{SU}(3)$ fibers over $S^5$, with fibers equal to $\mathrm{SU}(2)$, and that this fibration is nontrivial. Let $S^1\subset \mathrm{SU}(2)$ be (any) subgroup and let $B = \mathrm{SU}(3)/S^1$. Then $B$ fibers over $S^5$ with fibers $S^2$. If $B$ were trivial, then $\mathrm{SU}(3)\to S^5$ would be trivial as well, but it is not, since $S^5$ is not parallelizable.

In more detail: Regard $\mathrm{SU}(3)$ as the set of triples $(e_1,e_2,e_3)$ of special unitary bases of $\mathbb{C}^3$. Define a map $\pi:\mathrm{SU}(3)\to S^5\subset\mathbb{C}^3$ by $$\pi(e_1,e_2,e_3) = e_1\ .$$ This is a smooth submersion with fibers isomorphic to $\mathrm{SU}(2)$. Let $B$ be the set of pairs $(v,L)$, where $v\in S^5\subset\mathbb{C}^3$ and $L\in\mathbb{CP}^2$ is a line that is Hermitian orthogonal to the line spanned by $v$, i.e., $L$ is a line in $v^\perp\simeq\mathbb{C}^2$. Then $B\to S^5$ given by $(v,L)\mapsto v$ is a smooth $S^2$ bundle over $S^5$. If $B$ were trivial, there would be a section of $B$ over $S^5$ and hence a smooth mapping $\lambda:S^5\to\mathbb{CP}^2$ such that $\bigl(v,\lambda(v)\bigr)\in B$ for all $v\in S^5$. This would define a smooth complex line bundle $\Lambda$ over $S^5$, and, since every complex line bundle over $S^5$ is trivial, there would be a nonvanishing section of this line bundle, i.e., a mapping $\sigma:S^5\to S^5$ such that $\lambda(v) = \mathbb{C}\cdot\sigma(v)$ for all $v\in S^5$. However, then there would exist a unique mapping $\tau:S^5\to S^5$ such that $\zeta(v) = \bigl(v,\sigma(v),\tau(v)\bigr)$ is a special unitary frame for all $v\in S^5$, i.e., $\zeta:S^5\to \mathrm{SU}(3)$ would be a section of the nontrivial bundle $\mathrm{SU}(3)\to S^5$.

• A more succinct description of $B$ is that it is the pullback to $S^5$ of the projectivized tangent bundle of $\mathbb{CP}^2$. – Eric Wofsey Aug 21 '13 at 9:47
• @Eric: Yes, but that doesn't make it obvious that $B$ is nontrivial as a bundle over $S^5$. – Robert Bryant Aug 21 '13 at 11:17
# How to get the p-value for the full model from R's coxph?

In a coxph model with 2 variables (let's say age and sex), how can I get the p-values for the LR, Wald and score tests which I see in summary(coxphobject), as below?

Concordance= 0.653 (se = 0.058 )
Rsquare= 0.085 (max possible= 0.924 )
Likelihood ratio test= 30.09 on 12 df, p=0.002708
Wald test = 34.73 on 12 df, p=0.0005169
Score (logrank) test = 34 on 12 df, p=0.0006736, Robust = 17.61 p=0.1279

I think the values represented in the summary as p=... are the p-values for the full model, but I cannot figure out how to retrieve them. If there is only one covariate, let's say age, then I can retrieve the p-value using summary(coxphobject)$coefficients[5], which is the same as the one I see for the Wald test in summary(coxphobject). But when there are two variables (age and sex), summary(coxphobject)$coefficients[1,5] and summary(coxphobject)$coefficients[2,5] give the p-values for age and sex separately, and they are both different from the p-values I see for the full model in summary(coxphobject) for the LR, Wald and score tests. I appreciate any help. Thanks.

## 1 Answer

1) Put summary(coxphobject) into a variable:
summcph <- summary(coxphobject)
2) Examine it with str():
str(summcph)
Values! Values everywhere! So we find (proceeding line by line in your output above):
a) the concordance values: summcph$concordance
b) the Rsquare values: summcph$rsq
c) the likelihood ratio test values: summcph$logtest
d) the Wald test values: summcph$waldtest
e) the score test values: summcph$sctest
f) the robust values: summcph$robscore

It really helps if you post a reproducible example, rather than make us go find your data set in order to check we're doing all the options the same. For example, you didn't mention you had a cluster term. (It would take an extra few moments for you, and would have saved me ten minutes while I tried to figure out why I couldn't get the last couple of values. At the least you could have mentioned which example you ran in the help!)
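A runnable illustration of those accessors using the survival package's bundled lung data; the model formula below is a stand-in for the asker's age + sex setup, and the component names are those produced by summary.coxph:

```r
library(survival)

# Stand-in for the asker's model: two covariates on the bundled lung data
fit <- coxph(Surv(time, status) ~ age + sex, data = lung)
s   <- summary(fit)

s$coefficients[, "Pr(>|z|)"]   # per-covariate Wald p-values (one per term)

# Whole-model tests: each is a named vector with components test, df, pvalue
s$logtest["pvalue"]    # likelihood ratio test
s$waldtest["pvalue"]   # Wald test
s$sctest["pvalue"]     # score (logrank) test
```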
## Algebra & Number Theory

### Tubular approaches to Baker's method for curves and varieties

Samuel Le Fourn

#### Abstract

Baker's method, relying on estimates on linear forms in logarithms of algebraic numbers, allows one to prove in several situations the effective finiteness of integral points on varieties. In this article, we generalize results of Levin regarding Baker's method for varieties, and explain how, quite surprisingly, it mixes (under additional hypotheses) with Runge's method to improve some known estimates in the case of curves by bypassing (or more generally reducing) the need for linear forms in $p$-adic logarithms. We then use these ideas to improve known estimates on solutions of $S$-unit equations. Finally, we explain how a finer analysis and formalism can improve upon the conditions given, and give some applications to the Siegel modular variety $A_2(2)$.

#### Article information

Source
Algebra Number Theory, Volume 14, Number 3 (2020), 763-785.

Dates
Revised: 20 August 2019
Accepted: 7 October 2019
First available in Project Euclid: 2 July 2020

https://projecteuclid.org/euclid.ant/1593655271

Digital Object Identifier
doi:10.2140/ant.2020.14.763

Mathematical Reviews number (MathSciNet)
MR4113780

Subjects
Secondary: 11J86: Linear forms in logarithms; Baker's method

#### Citation

Le Fourn, Samuel. Tubular approaches to Baker's method for curves and varieties. Algebra Number Theory 14 (2020), no. 3, 763--785. doi:10.2140/ant.2020.14.763. https://projecteuclid.org/euclid.ant/1593655271

#### References

• Y. Bilu, “Effective analysis of integral points on algebraic curves”, Israel J. Math. 90:1-3 (1995), 235–252.
• Y. Bugeaud and K. Győry, “Bounds for the solutions of unit equations”, Acta Arith. 74:1 (1996), 67–80.
• P. Corvaja, V. Sookdeo, T. J. Tucker, and U. Zannier, “Integral points in two-parameter orbits”, J. Reine Angew. Math. 706 (2015), 19–33.
• J.-H. Evertse and K. Győry, Unit equations in Diophantine number theory, Cambridge Studies in Advanced Mathematics 146, Cambridge University Press, 2015.
• G. van der Geer, “On the geometry of a Siegel modular threefold”, Math. Ann. 260:3 (1982), 317–350.
• K. Győry, “Bounds for the solutions of $S$-unit equations and decomposable form equations II”, preprint, 2019.
• K. Győry and K. Yu, “Bounds for the solutions of $S$-unit equations and decomposable form equations”, Acta Arith. 123:1 (2006), 9–41.
• J.-i. Igusa, “On Siegel modular forms of genus two, II”, Amer. J. Math. 86 (1964), 392–412.
• J.-i. Igusa, “On the graded ring of theta-constants”, Amer. J. Math. 86 (1964), 219–246.
• S. Lang, Fundamentals of Diophantine geometry, Springer, 1983.
• S. Le Fourn, “A tubular variant of Runge's method in all dimensions, with applications to integral points on Siegel modular varieties”, Algebra Number Theory 13:1 (2019), 159–209.
• A. Levin, “Variations on a theme of Runge: effective determination of integral points on certain varieties”, J. Théor. Nombres Bordeaux 20:2 (2008), 385–417.
• A. Levin, “Linear forms in logarithms and integral points on higher-dimensional varieties”, Algebra Number Theory 8:3 (2014), 647–687.
• A. Levin, “Extending Runge's method for integral points”, pp. 171–188 in Higher genus curves in mathematical physics and arithmetic geometry, edited by A. Malmendier and T. Shaska, Contemp. Math. 703, Amer. Math. Soc., Providence, RI, 2018.
• Q. Liu, “Courbes stables de genre $2$ et leur schéma de modules”, Math. Ann. 295:2 (1993), 201–222.
• D. W. Masser and G.
Wüstholz, “Fields of large transcendence degree generated by values of elliptic functions”, Invent. Math. 72:3 (1983), 407–464. • J. H. Silverman, “Arithmetic distance functions and height functions in Diophantine geometry”, Math. Ann. 279:2 (1987), 193–216. • M. Streng, Complex multiplication of abelian surfaces, Ph.D. thesis, Universiteit Leiden, 2010, https://openaccess.leidenuniv.nl/handle/1887/15572. • P. Vojta, Diophantine approximations and value distribution theory, Lecture Notes in Mathematics 1239, Springer, 1987.
Find a Residential property to rent in Blouberg

View all property to rent in Blouberg, or view Blouberg suburbs:

big bay has approximately 11 properties to rent. The suburb has a total area of approximately 0.9095921 km2. Estimated average price for listed properties in this area: R 14 931.
blouberg rise has approximately 2 properties to rent. The suburb has a total area of approximately 0.5661691 km2.
blouberg sands has approximately 8 properties to rent. The suburb has a total area of approximately 0.4933812 km2.
bloubergrant has approximately 7 properties to rent. The suburb has a total area of approximately 0.2340826 km2.
bloubergstrand has approximately 11 properties to rent. The suburb has a total area of approximately 1.394676 km2. Estimated average price for listed properties in this area: R 19 818.
flamingo vlei has approximately 1 property to rent. The suburb has a total area of approximately 1.244601 km2.
killarney gardens has approximately 1 property to rent. The suburb has a total area of approximately 1.091855 km2.
parklands has approximately 31 properties to rent. The suburb has a total area of approximately 2.467123 km2. Estimated average price for listed properties in this area: R 10 318.
parklands ext has approximately 8 properties to rent. The suburb has a total area of approximately 1.588657 km2.
richwood has approximately 3 properties to rent. The suburb has a total area of approximately 0.7671467 km2.
sunningdale has approximately 4 properties to rent. The suburb has a total area of approximately 1.725565 km2.
table view has approximately 58 properties to rent. The suburb has a total area of approximately 6.136843 km2. Estimated average price for listed properties in this area: R 12 747.
# All Questions

557 questions

3k views ### Is zero inflation desirable? Is zero inflation really desirable? To be more precise: Does inflation in real life have benefits that in some situations outweigh its social cost? E.g.: it works as a disincentive against holding ...

24k views ### How can I obtain Leontief and Cobb-Douglas production function from CES function? In most Microeconomics textbooks it is mentioned that the Constant Elasticity of Substitution (CES) production function, $$Q=\gamma[a K^{-\rho} +(1-a) L^{-\rho} ]^{-\frac{1}{\rho}}$$ (where the ...

7k views ### What is the economic purpose of increasing the minimum wage? It is generally accepted among economists that minimum wage warps the equilibrium point between the supply and demand of labor by instituting a price floor and increases unemployment for unskilled ...

16k views ### Fundamental equations in economics For the other sciences it's easy to point to the most important equations that ground the discipline. If I want to explain Economics to a physicist say, what are considered to be the most important ...

4k views ### How do economies grow? These days, we hear again and again about the so-called "need" for economic growth. But how do countries actually grow economically? That is, why/how does their GDP increase over time? Related, what ...

1k views ### Implications of abolishing Fractional Reserve Banking on mortgages and interest rates Suppose for a moment that someone with legislative power decides to abolish Fractional Reserve Banking and passes a law that forces banks to only lend the money they own, that is M0. What would be the ...

2k views ### What is the Gross Domestic Product (GDP)? I suppose GDP is supposed to create a measure of a country's wealth/welfare, something easily indexable. But how exactly is it composed? And is its composition disputed? How good is it at measuring a ...

594 views ### Destroying the dollar Let's destroy the USD dollar: I am the government of a small, economically and geopolitically unimportant country that has its own currency and a local central bank. I order the local central bank (at ...

18k views ### How will non-rich citizens make a living if jobs keep getting replaced by robots and are outsourced? Decades ago a factory job could support a wife and kids until retirement and they offered insurance, benefits, etc. Now, no more unions, those jobs as well as tech and customer service jobs are ...

3k views ### Mathematical Micro/Macro Economics Textbook Recommendation I was formerly an economics major and now also majoring in mathematics. I want a textbook that is rigorously based on mathematics; not just using mathematics whenever the author wants, but in a more ...

1k views ### What would happen if the world switched to a single currency? What would happen if all countries suddenly stopped using local currencies and adopted a global currency (like the Euro, but for everyone)?

2k views ### From an economics perspective, what are the ramifications of a currency with fixed money supply? I'm thinking specifically of bitcoins. What are the pros and cons of having a fixed number of coins, as opposed to more "normal" currencies? Would the currency have no inflation?

504 views ### Experiments contradicting the expected utility model This is a question I asked on the cognitive science beta, but which never got any answer. I do not know what the policy should be for question migration/reposting (maybe worth discussing in the meta?),...
945 views ### Price Elasticity of Demand for Positive Price Increases What does it mean when the price elasticity of demand %Qd/%P is greater than one? Typically I hear that it means the demand is elastic since if, say, the price decreases by 1% the demand for the ...

181 views ### How does a central bank create the money used for quantitative easing or lowering the value of their currency? Up until last week, the Swiss central bank used Francs to buy Euros, in an effort to lower the value of the Franc; today the European central bank announced that it would use Euros to buy bonds in ...

328 views ### Are there fundamental reasons why (exponential) economic growth is highly desirable? One of the most widely published measures of the economy is the economic growth as a % of the GDP; i.e. the degree to which an economy grows exponentially. In my understanding, when the rate of ...

1k views ### Who exactly foots the bill if Greece defaults Apologies if the topic is not appropriate (economics newbie here) but I am curious as to who exactly would foot the bill if Greece defaults on the ~300 billion dollars it owes. It looks like most of ...

891 views ### Why is modest inflation a good thing? [duplicate] I have been reading a BBC news article about inflation in the UK, which is saying that inflation has recently become negative (http://www.bbc.co.uk/news/business-33147660). The article suggests that ...

1k views ### Alternatives to Pigouvian tax Two common drawbacks of Pigouvian subsidy mentioned in the literature are related to monetisation and measurement of social cost (Baumol) and reciprocity of social cost (Coase). What alternatives to ...
# Math Help - Functions Problem!!

1. ## Functions

Let S: N ---> P(N) be the function defined by S(n) = {kn | k ∈ N}, and let M: P(N) ---> N be the function defined by:
M(A) = 1 if A = ∅
M(A) = min(A) if A ≠ ∅
(a) Is S injective? Is S surjective? Prove your claims.
(b) Is M injective? Is M surjective? Prove your claims.
(c) For n ∈ N, find (M ∘ S)(n).
(d) For A ∈ P(N), find (S ∘ M)(A).

2. Originally Posted by modi4help
Let S: N ---> P(N) be the function defined by S(n) = {kn | k ∈ N}, and let M: P(N) ---> N be the function defined by:
M(A) = 1 if A = ∅
M(A) = min(A) if A ≠ ∅
(a) Is S injective? Is S surjective? Prove your claims.
(b) Is M injective? Is M surjective? Prove your claims.
(c) For n ∈ N, find (M ∘ S)(n).
(d) For A ∈ P(N), find (S ∘ M)(A).

Does P(N) mean the power set of N? If yes, then:
For S to be injective we must have: $S(n_{1}) = S(n_{2})\Longrightarrow n_{1}=n_{2}$.
But if $S(n_{1})=S(n_{2})$ then $n_{1}\in S(n_{2})$ and $n_{2}\in S(n_{1})$, i.e. $n_{1}=kn_{2}$ and $n_{2}=k'n_{1}$ for some $k,k'\in N$, which forces $k=k'=1$ and hence $n_{1}=n_{2}$. Hence S is injective.
It is not surjective because no $n\in N$ can give us $S(n) = \emptyset\in P(N)$.
Now, for M: by "min(A)" do you mean the least element of A?
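For part (c), note that min{kn : k ∈ N} = n (take k = 1), so (M ∘ S)(n) = n, and (S ∘ M)(A) is the set of multiples of min(A) for nonempty A. A toy numeric check in R; the truncation kmax is purely an illustration device, since N itself is infinite:

```r
# S(n) = {kn : k in N}, truncated to the first kmax multiples for illustration;
# M takes the minimum, with M(empty) = 1 as in the problem statement.
S <- function(n, kmax = 100) n * (1:kmax)
M <- function(A) if (length(A) == 0) 1 else min(A)

sapply(1:10, function(n) M(S(n)))   # 1 2 ... 10, so (M o S)(n) = n
```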
### Quiz 10.7:

Question: For what values of x does the series $$\sum \limits _{n=0} ^\infty \frac{nx^n}{4^n(n^2+1)}$$ converge absolutely and conditionally?

Solution:

$$\lim \limits _{n \rightarrow \infty} \frac{(n+1)|x|^{n+1}4^n(n^2+1)}{n|x|^n 4^{n+1}((n+1)^2+1)} = \frac{|x|}{4}\lim \limits _{n \rightarrow \infty} \frac{(n+1)(n^2+1)}{n(n^2+2n+2)} = \frac{|x|}{4} \lim \limits _{n \rightarrow \infty} \frac{(1+1/n)(1+1/n^2)}{1+2/n+2/n^2} = \frac{|x|}{4}$$

Therefore, $$\sum \limits _{n=0} ^\infty \frac{nx^n}{4^n(n^2+1)}$$ converges absolutely for $$-4 < x < 4$$ by the Ratio Test.

$$x = -4$$: $$\sum \limits _{n=0} ^\infty \frac{n(-1)^n}{n^2+1}$$ is an alternating series with $$u_n= \frac{n}{n^2+1}$$:
1) $$u_n>0$$
2) $$u_n$$ is non-increasing (for n ≥ 1)
3) $$\lim \limits _{n \rightarrow \infty} \frac{n}{n^2+1} = \lim \limits _{n \rightarrow \infty} \frac{1}{n+1/n}=0$$
So $$\sum \limits _{n=0} ^\infty \frac{n(-1)^n}{n^2+1}$$ converges by the AST.

$$x = 4$$: $$\sum \limits _{n=0} ^\infty \frac{n}{n^2+1}$$. Both $$\frac{1}{n}$$ and $$\frac{n}{n^2+1}$$ are positive, and
$$\lim \limits _{n \rightarrow \infty} \frac{\frac{n}{n^2+1}}{\frac{1}{n}} = \lim \limits _{n \rightarrow \infty} \frac{n^2}{n^2+1} = \lim \limits _{n \rightarrow \infty} \frac{1}{1+1/n^2} = 1 >0,$$
and $$\sum \limits _{n=1} ^\infty \frac{1}{n}$$ is a divergent p-series (p = 1). So $$\sum \limits _{n=0} ^\infty \frac{n}{n^2+1}$$ diverges by the LCT.

So our power series $$\sum \limits _{n=0} ^\infty \frac{nx^n}{4^n(n^2+1)}$$ converges absolutely for $$-4 < x < 4$$ and conditionally for $$x= -4$$.
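A numeric sanity check of the endpoint behaviour in R (partial sums can only suggest, not prove, the conclusions above):

```r
n <- 1:200000
# x = 4: positive terms n/(n^2+1); partial sums grow like log(N) (divergence)
sum(n / (n^2 + 1)); log(2e5)
# x = -4: alternating terms; partial sums settle to a finite value (convergence)
sum((-1)^n * n / (n^2 + 1))
```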
# Soft question: Is one destined to fail at writing proofs without knowledge of vector calculus in $\mathbb{R}^n$, ODE/PDE, basic number theory?

I've never taken a rigorous linear algebra class or a multivariable calculus or ODE/PDE class prior to studying cardinalities, injective, surjective, the well-ordering principle, minimal counterexamples, ZFC. I only know applied single-variable calculus and linear algebra for scientists and engineers. Is the reason I'm not understanding how to prove that the real numbers are not countable, or surjections and injections between sets, or how to use the well-ordering principle, that I haven't seen how to use the implicit function theorem, how to do curl and divergence, and what a non-homogeneous equation is, just to name a few topics from vector calculus/ODE/PDE?

Is the reason I'm bad at proving things about the Cartesian product, the division algorithm, injective and surjective on a characteristic function, power sets, unions and intersections, convex hulls, countability of the rationals and integers, partitions of a set, event spaces, infinite sequence products, that I've never seen ODEs, PDEs, differential geometry? Many of the people I know who are not struggling with proving things have taken multivariable calculus, vector calculus, ODE/PDE. I'm sure a genius would not struggle with writing proofs even if they never learned how to do ODE/PDE, but for the average person, would not knowing ODE/PDE/vector calculus mean that they haven't practiced or been taught the mathematical maturity needed to prove that the rationals are countable?

• No, those are advanced topics and are not required to write proofs on basic topics. Nevertheless, learning to write proofs, like taking a class on real analysis, will make all subsequent math classes easier. Apr 15 '17 at 20:59
• PS: I wrote up a description of the standard Cantor diagonal argument here, intended for the non-expert (no guarantee that this is any easier than other writeups; all writeups on this are doing essentially the same things): ee.usc.edu/stochastic-nets/docs/levels-of-infinity.pdf Apr 15 '17 at 21:03
# Posts Tagged 'deducer'

## Using Deducer to work with R

If one checks out the initial question that prompted this series, a common theme in the answers is that one should use the GUI as a tool to help one build code (and not just as a crutch to do the analysis). Being able to view the code produced by the GUI should help beginner R users learn the commands (and facilitate scripting analysis in the future). The following blog post highlights one of the popular GUI extensions to R, the Deducer package, and my initial impressions of the package.

It is obvious the makers of the Deducer package have spent an incredible amount of time creating a front-end GUI to build plots using the popular ggplot2 package. While it has other capabilities, it is probably worth checking out for this feature alone (although it appears R-Commander recently added similar capabilities as well).

Installation: Long story short, on my Windows 7 laptop I was able to install Deducer once I updated to the latest version of R (2.13 as of writing this post) and installed some missing Java dependencies. After that all was well, and installing Deducer is no different than installing any other package.

What does Deducer do? Deducer adds GUI functionality to accomplish the following tasks:

• load in data from various formats (csv, SPSS, etc.)
• view data + variable types in a separate data viewer
• conduct data transformations (recode, edit factors, transformations, transpose, merge)
• statistical analysis (mean differences, contingency tables, regression analysis)
• a GUI for building plots using the ggplot2 package

Things I really like:

• data viewer (with views for spreadsheet and variable view)
• ability to import data from various formats
• regression model explorer
• interactive building of plots (the ability to update currently built plots is pretty awesome)

Minor quibbles:

• I would like all of the commands to open in their own call window, not just plots (or be able to route commands to an open (or defined) script window, à la log files).
• I am unable to use the console window if another Deducer window is open (e.g. data view, regression model explorer).

Overall I'm glad I checked out the package. I suspect I will be typing library(Deducer) in the future when I am trying to make some plots with the ggplot2 package. The maintainers of the package did a nice job with including a set of commands that are essential for data analysis, along with an extensive set of online tutorials and a forum for help with the software. While the code Deducer produces by point and click is not always the greatest for learning the R language, it is a start in the right direction for those wishing to learn R.
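For reference, installation is the standard one-liner, and the plot builder's output is ordinary ggplot2 code; the ggplot call below is my own illustration of the style, not actual Deducer output:

```r
install.packages("Deducer")   # one-time; pulls in the Java/JGR dependencies
library(Deducer)

# The plot builder writes ordinary ggplot2 calls of roughly this shape:
library(ggplot2)
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  stat_smooth()
```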
Speed of response and crossover frequency relationship

In SISO (single-input single-output) cases, my professor keeps mentioning the speed of response and the crossover frequency interchangeably. I don't really understand what the speed of response means either. Does a higher speed of response mean a lower transient time? Also, what is the relationship between crossover frequency and speed of response? I am aware that the crossover frequency is where the gain is 0 dB on a Bode plot.

• Typically cutoff is at the -3 dB mark. – schadjo Mar 27 '18 at 19:48
• Firstly, be careful when using abbreviations, as they may not always be obvious to everybody. I'm guessing 'SISO' is referring to a single-input single-output control loop. Secondly, provide the context with a specific example and your thoughts. We are happy to help, but your professor isn't interested in how clever we are. Hint: what happens if you take a simple RC low-pass filter and reduce its corner frequency? How does this affect the output voltage's ability to follow the input? This isn't a control loop but should provide some insight. – Warren Hill Mar 27 '18 at 19:54
• I made a mistake between cutoff and crossover, sorry – aadil095 Mar 27 '18 at 19:55
• Crossover frequency is inversely related to the time constant $\tau$ in a step response ( en.wikipedia.org/wiki/Step_response#With_one_dominant_pole ). But the low-frequency gain $A_0$ and the feedback factor $\beta$ are also important factors. – HKOB Mar 27 '18 at 21:11
• $\omega_c=1/\tau$ for a 1st order TF. What course are you doing that deals with quite advanced control topics (in other posts) and yet you haven't covered basic 1st order concepts? – Chu Mar 27 '18 at 22:23
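Taking the commenters' hint (for a first-order system, ω_c = 1/τ), a small R sketch showing that a higher crossover frequency means a shorter time constant and hence a faster step response; the specific ω_c values are arbitrary:

```r
t <- seq(0, 5, by = 0.01)
# Unity-gain first-order step responses y(t) = 1 - exp(-wc * t),
# where the crossover frequency wc fixes the time constant tau = 1/wc.
y_slow <- 1 - exp(-1 * t)   # wc = 1 rad/s  -> tau = 1 s
y_fast <- 1 - exp(-2 * t)   # wc = 2 rad/s  -> tau = 0.5 s
# Time to reach ~63% of the final value (one time constant) in each case:
c(slow = t[min(which(y_slow >= 0.63))],
  fast = t[min(which(y_fast >= 0.63))])
```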
# Sh:545 • Džamonja, M., & Shelah, S. (1996). Saturated filters at successors of singular, weak reflection and yet another weak club principle. Ann. Pure Appl. Logic, 79(3), 289–316. • Abstract: Suppose that \lambda is the successor of a singular cardinal \mu whose cofinality is an uncountable cardinal \kappa. We give a sufficient condition that the club filter of \lambda concentrating on the points of cofinality \kappa is not \lambda^+-saturated. The condition is phrased in terms of a notion that we call weak reflection. We discuss various properties of weak reflection • published version (28p) Bib entry @article{Sh:545, author = {D{\v{z}}amonja, Mirna and Shelah, Saharon}, title = {{Saturated filters at successors of singular, weak reflection and yet another weak club principle}}, journal = {Ann. Pure Appl. Logic}, fjournal = {Annals of Pure and Applied Logic}, volume = {79}, number = {3}, year = {1996}, pages = {289--316}, issn = {0168-0072}, mrnumber = {1395679}, mrclass = {03E05 (03E35)}, doi = {10.1016/0168-0072(95)00040-2}, note = {\href{https://arxiv.org/abs/math/9601219}{arXiv: math/9601219}}, arxiv_number = {math/9601219} }
How to compute this GCD Sum?

Revision en1, by -synx-, 2017-07-09 17:09:55

I have been trying to solve this problem for the past 2 days, but I haven't come up with a formal solution. I have tried changing the order of the sums and grouping by gcd, but couldn't get any further than a bound on the number of distinct values of the gcd.
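The revision doesn't reproduce the problem statement, so purely as an illustration of the grouping-by-gcd idea: for the classic sum $\sum_{i=1}^{n}\gcd(i,n)$, every value of gcd(i, n) is a divisor d of n, and it occurs for exactly φ(n/d) indices i, so the sum collapses to $\sum_{d\mid n} d\,\varphi(n/d)$. A sketch in R (for this classic variant, not necessarily the contest problem):

```r
# Classic gcd-sum (Pillai's function): sum over i = 1..n of gcd(i, n).
gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
phi <- function(m) sum(sapply(1:m, function(i) gcd(i, m) == 1))  # naive Euler phi

gcd_sum <- function(n) {
  d <- (1:n)[n %% (1:n) == 0]      # the divisors of n
  sum(d * sapply(n / d, phi))      # grouping by gcd: sum over d|n of d * phi(n/d)
}

gcd_sum(12)                                  # 40
sum(sapply(1:12, function(i) gcd(i, 12)))    # 40, brute-force agreement
```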
# Geometric Phase Generated Optical Illusion

## Abstract

An optical illusion, such as “Rubin's vase”, is caused by the information gathered by the eye, which is processed in the brain to give a perception that does not tally with a physical measurement of the stimulus source. Metasurfaces are metamaterials of reduced dimensionality which have opened up new avenues for flat optics. The recent advancement in spin-controlled metasurface holograms has attracted considerable attention, providing a new method to realize optical illusions. We propose and experimentally demonstrate a metasurface device to generate an optical illusion. The metasurface device is designed to display two asymmetrically distributed off-axis images of “Rubin faces” with high fidelity, high efficiency and broadband operation that are interchangeable by controlling the helicity of the incident light. Upon the illumination of a linearly polarized light beam, the optical illusion of a ‘vase’ is perceived. Our result provides an intuitive demonstration of the figure-ground distinction that our brains make during visual perception. The alliance between geometric metasurfaces and optical illusions opens a pathway for new applications related to encryption, optical patterning, and information processing.

## Introduction

Optical illusions, such as the “Fraser spiral illusion”, “Nuts illusion”, and “Rubin's vase”, are characterized by visually perceived images that are deceptive or misleading, violating the saying “seeing is believing”. Traditional optical illusions are typically realized by using specific visual tricks, i.e., complicated graphic design, or under extreme natural environments such as mirages, meaning that they are mainly demonstrated at macroscopic scale. The realization of optical illusions based on high-resolution optical nanodevices has not been demonstrated. Metasurfaces1,2,3, a new subtype of metamaterials, consisting of a thin layer of plasmonic or dielectric nanostructures, have attracted considerable attention in nanophotonics due to their unique capability of manipulating electromagnetic wavefronts at subwavelength resolution in a desirable manner. Various types of metasurfaces have been proposed and designed to realize novel optical functionalities such as the generalized Snell's law of refraction1, 2, 4, 5, the spin-Hall effect6, a dual-polarity planar metalens7, wave plates8, vortex beam generation9,10,11,12, spin-controlled photonics13,14,15, and unidirectional excitation of surface plasmon polaritons16. Computer-generated holograms (CGH)17,18,19 offer important advantages over optical holograms since there is no need for a real object. A holographic image can be generated by digitally computing a holographic interference pattern and encoding it into a specific surface structure or a spatial light modulator for subsequent illumination by a suitable coherent light source. Benefiting from the unprecedented manipulation of light propagation due to the desired phase change at the interface, metasurfaces have been employed for the application of holography20,21,22,23, including highly efficient broadband holograms11, 24, 25, image-switchable holograms, full-color holograms26,27,28 and nonlinear holograms29. With the advancement of nanotechnology and integrated photonics, miniaturization and integration are two tireless pursuits in the production of optical devices. To date, all of the demonstrated metasurface holograms rely on an encoded phase profile to generate the corresponding holographic image.
How to generate an additional visual image from the same ultrathin metasurface device, without a correspondingly encoded phase profile, which can be considered an optical illusion, has not been demonstrated. In this paper, we propose and experimentally demonstrate an approach to realize an optical illusion based on a metasurface. The most famous example of figure-ground perception is probably the vase-face drawing that Edgar Rubin described. The brain usually identifies an object by distinguishing the shape or figure from the background. The perceived image in the brain depends critically on which border is assigned. If we create two separated face regions in a centrally symmetric arrangement, the shape of a vase is perceived (an optical illusion) because the human visual system settles the faces as the background. We take this drawing as an example for demonstration. The metasurface device is designed to display two asymmetrically distributed off-axis images of “Rubin faces” with high fidelity and a wide field of view. Upon the illumination of a linearly polarized light beam, the optical illusion of a “vase” can be perceived. The reflective-type metasurface, consisting of a metallic nanorod array and a metallic ground film with a dielectric layer sandwiched between them, is used to generate the Pancharatnam-Berry (P-B) phase over a broad range of wavelengths with high efficiency. The realization of an optical illusion with a metasurface represents a unique application where metasurfaces can better show their superior performance due to the geometric phase generated at the interface. The optical illusion that we demonstrate is caused by the information gathered by the eye, which is processed in the brain to give a perception that does not tally with a physical measurement of the stimulus source. This type of stimulus is of great interest and importance since it provides a marvelous and intuitive demonstration of the figure-ground distinction the brain makes during visual perception.

## Results

To improve efficiency and image quality while maintaining the broadband property, we leverage recent advances in high-efficiency, broadband reflective-type configurations and geometric metasurfaces. In comparison with other types of metasurfaces, a metasurface consisting of nanorods with spatially varying orientation shows superior phase control for circular polarization and can ease the fabrication. Figure 1 shows the schematic of the geometric-phase induced optical illusions. The reflective-type metasurface, consisting of a gold ground layer, a SiO2 spacer layer and a top layer of elongated gold nanorods, is utilized to generate the required phase profile (Fig. 1, top left). Each unit cell of the metasurface, containing a subwavelength nanorod with carefully designed azimuthal orientation, can be considered as an anisotropic scatterer. When a circularly polarized light beam is incident onto the nanorods, the reflected light consists of two parts: one has the same handedness with an additional phase change (known as the P-B phase), and the other has the opposite handedness without phase change22. By carefully controlling the orientation of the nanorods, the desired continuous phase profile with constant amplitude can be achieved. As shown in Fig. 1 (bottom right), an off-axis “Rubin face” located at the left side or right side of the viewing screen can be reconstructed upon the illumination of right-handed or left-handed circularly polarized (RCP or LCP) light.
Since a linearly polarized light beam can be decomposed into two opposite circularly polarized light beams with equal components, an additional image, the “vase”, can be perceived between the two faces without the corresponding phase profile being encoded onto the designed metasurface. When two reconstructed images have a common border, and one is seen as figure (“Rubin face”) and the other as ground (“vase”), the immediate perceptual experience is characterized by a shaping effect which emerges from the common border of the fields and which operates only on one image, or operates more strongly on one than on the other. Unlike previous polarization-multiplexed metasurface holograms with symmetrically distributed target images22, 23, the two off-axis “Rubin faces” are designed asymmetrically, as shown in Fig. 2a. For RCP light illumination, two “Rubin faces” (one upright and one inverted) are reconstructed on the two sides of the zero-order spot. For LCP light illumination, the two “Rubin faces” are rotated 180° counterclockwise and horizontally flipped around point O due to the phase conjugation induced by the different helicity of the incident light. For the case of linearly polarized light, which can be decomposed into LCP and RCP light with equal components, the upright and inverted “Rubin's vase” illusions are generated on both sides. The Gerchberg-Saxton algorithm is utilized to obtain the expected phase profile of the phase-only hologram. The target image is discretized as a number of pixels, in which each pixel is regarded as a point source. The design method can be found in Methods. Our hologram is designed with an off-axis angle of β1 = 9.75° and a large field of view of 30° × 23° along the horizontal and vertical directions, respectively (Fig. 2b). Although arbitrary phase levels can be achieved, since the encoding process from the phase profile into pixelated nanoantennas is very straightforward, we choose 32 phase levels (Fig. 2c) instead of a continuous phase distribution to minimize the near-field coupling between neighbouring nanorods. Here, a 2 × 2 periodic array of the phase (“Rubin face”) pattern with a pixel size of 300 nm × 300 nm and a pixel number of 2000 × 2000 is designed to improve the fidelity of the reconstructed image (see Fig. 3a)22. Based on the concept of the Dammann grating, the 2 × 2 periodic array design can improve the image quality by reducing the effect of laser speckle in the reconstructed images (see Supplementary Section 1). The overall sample size is 600 µm. A 150-nm-thick gold film and an 85-nm-thick glass spacer are deposited on the silicon substrate by electron beam evaporation. The length, width and thickness of each nanorod are 220 nm, 80 nm and 30 nm, respectively. The nanorod structure is fabricated using standard electron beam lithography and a subsequent lift-off procedure. The detailed fabrication process is given in Supplementary Section 2. The scanning electron microscopy (SEM) image of the fabricated metasurface consisting of nanorods with spatially varying orientation is shown in Fig. 3b.
Figure 4 shows the target images, simulation results and corresponding experimental results upon the illumination of incident light with different polarization states. Figure 4a–c illustrate the original target images (“Rubin faces”), depending on the polarization states of the incident light. These target images of the “Rubin face” or the “optical illusion (vase)” can be simulated by considering light emission from all the discretized point sources, as shown in Fig. 4d–f. Experimentally, a polarizer and a quarter-wave plate are located behind the tunable laser source (NKT, SuperK EXTREME) to generate the required polarization states. Then, the light beam, with a beam size of 2 mm, is focused by a plano-convex lens (f = 150 mm) and is incident onto the fabricated sample (Fig. 3b). Two off-axis holographic images are reconstructed at normal incidence. Here, a viewing screen is used to display the holographic images. Figure 4g–i show the experimentally captured holographic images for different polarization states of the incident light at the wavelength of 633 nm. The distance between the screen and the metasurface is 60 mm. Upon the illumination of RCP light, a holographic image, the “Rubin face”, with a high signal-to-noise ratio is reconstructed on the left side of the screen (Fig. 4g). It should be noted that the size of the “Rubin face” is proportional to the reconstruction distance between the sample and the screen. When the polarization of the incident beam is changed from RCP to LCP, a horizontally flipped image of the “Rubin face” is displayed on the right side (Fig. 4i), which clearly shows that the position of the holographic image is solely dependent on the helicity of the incident light. LP light can be decomposed into LCP and RCP light with equal components; therefore, two pairs of different centrosymmetric “Rubin faces” (one upright and one inverted), shown in Fig. 4h, are generated. Even more intriguingly, an additional image of a “vase” is also perceived between these two “Rubin faces”. It should be mentioned that the “vase” is an optical illusion perceived by our eyes during visual perception; it has no corresponding phase profile encoded onto the metasurface. The images of the inverted illusions are shown in Supplementary Figure S2.

## Discussion

For a new optical device, performance is our main concern. The signal-to-noise ratio (SNR) is one of the most critical factors determining the quality of the optical illusion. The SNR here can be defined as the ratio between the mean power of area A and the standard deviation of area B (see Fig. 5a). The calculated SNR is nearly infinite because the power of the background is nearly zero. In experiment, the background noise is mainly caused by the irregularity of the nanorods and by deviations from ideal plane-wave incidence. The measured SNR of the optical illusion is 7.6 (Fig. 5a), which can be further improved by optimising the fabrication process and the optical experimental setup. The conversion efficiency is defined as the ratio of the power of all the reconstructed images to the input power. Here a condenser lens (f = 32 mm) is used to collect the generated images. The efficiency was measured over an ultra-broadband super-continuous spectrum in the range from 530 nm to 1090 nm, and it is higher than 45% over a relatively broad spectral range from 770 nm to 1090 nm. The maximum conversion efficiency achieved in experiment is 69.94%, at a wavelength of 910 nm. No twin images are observed in our experiment since the pixel size (300 nm) is much smaller than the wavelength of the incident light. The dependence of the conversion efficiency and SNR on the wavelength is given in Supplementary Section 4.
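A toy sketch of the Gerchberg-Saxton loop mentioned in the Results, in R, with a plain FFT standing in for far-field propagation; the grid size, target pattern and iteration count below are illustrative inventions, not the paper's design:

```r
# Toy Gerchberg-Saxton: find a phase-only hologram whose far field (modelled
# here by the FFT) approximates a target intensity pattern.
N <- 64
target <- matrix(0, N, N)
target[20:44, 30:34] <- 1                      # arbitrary bar-shaped target

phase <- matrix(runif(N * N, 0, 2 * pi), N, N) # random initial phase
for (iter in 1:200) {
  far   <- fft(exp(1i * phase))                # propagate to the far field
  far   <- target * exp(1i * Arg(far))         # impose the target amplitude
  near  <- fft(far, inverse = TRUE) / (N * N)  # propagate back
  phase <- Arg(near)                           # keep phase only (unit amplitude)
}
# 'phase' would then be quantized (e.g. to 32 levels) and mapped to
# nanorod orientation angles.
```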
In theory, the designed device can reconstruct the optical illusion over a wide range of wavelengths, since the metasurface exhibits a dispersion-less phase profile resulting from the geometric P-B phase determined by the orientation of the nanorods. The simulated conversion efficiency can be found in Supplementary Section 5. The difference between the experimental and simulation results is mainly due to the titanium layer between the nanorods and the SiO2 layer, and to fabrication errors in the nanorods. In order to show the robustness of our proposed method for the realization of optical illusions, we also developed another metasurface device to generate Moiré fringes based on the same approach. In this case, the original target objects are two position- and polarization-dependent concentric annuli. The simulated and measured results for the developed metasurface device under the illumination of incident light with different polarization states are given in Fig. 6b–g, respectively. For LCP light illumination, the left concentric annulus is located on the left side of the imaging plane (Fig. 6e,h), while the right concentric annulus is shifted to the right side under the illumination of RCP light (see Fig. 6g,j). For LP light illumination, the two concentric annuli partially overlap each other. The Moiré fringe is generated by the superposition of the light intensities of these overlapping concentric annuli, leading to the distinctive fishnet distribution, as shown in Fig. 6c,f,i. The calculated and measured results show good agreement, except for a slight mismatch due to fabrication error. Unlike the optical illusion generated by the two separated “Rubin faces”, the Moiré fringe is obtained by the overlapping of two concentric annuli. In this case, the corresponding phase profile of the Moiré fringe is actually encoded onto the metasurface, so the holographic image (the Moiré fringe) can be reconstructed under the illumination of LP light. Benefiting from the advantages of the highly efficient broadband reflective-type configuration and geometric metasurfaces, our designed device shows good broadband operation. The experimental results at different wavelengths are shown in Supplementary Figure S5.

In conclusion, we have experimentally demonstrated optical illusions based on reflective metasurfaces. “Rubin faces” are realized by the geometric phase profile induced by the metasurface, consisting of metallic nanorods on the top and a metallic film at the bottom with a dielectric layer sandwiched between them. Upon the illumination of linearly polarized light, “Rubin's vase” is perceived without mapping the corresponding phase profile onto the metasurface. The demonstrated metasurface devices show high performance in optical illusion generation, with high efficiency and broad bandwidth. Our result not only provides an intuitive demonstration of the figure-ground distinction that our brains make during visual perception, but also opens an avenue for new applications related to encryption, optical patterning, and information processing.

## Methods

### The design of the holographic image

To realize a target image with a pixel array of m × n and projection angles of α and β in the horizontal and vertical directions of the imaging plane, the periods of the hologram dx and dy can be calculated by $$dx=\frac{m\lambda }{2\,\tan (\alpha /2)}$$ and $$dy=\frac{n\lambda }{2\,\tan (\beta /2)}$$, respectively.
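Plugging the quoted device numbers into this relation (λ = 633 nm, a 30° horizontal field of view, and 2000 hologram pixels of 300 nm) gives a rough consistency check in R; the target-image pixel count m is inferred here, not stated in the paper:

```r
lambda <- 633e-9                             # wavelength used in the experiments
alpha  <- 30 * pi / 180                      # horizontal field of view
s      <- 300e-9                             # hologram pixel size
dx     <- 2000 * s                           # 2000 pixels -> 600 um, the quoted sample size
m      <- 2 * dx * tan(alpha / 2) / lambda   # invert dx = m * lambda / (2 * tan(alpha/2))
m                                            # ~508 horizontal pixels in the target image
```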
The number of pixels of the hologram is determined by M = dx/s and N = dy/s, where s is the pixel size of the hologram in both horizontal and vertical directions.

## References

1. Yu, N. F. et al. Light Propagation with Phase Discontinuities: Generalized Laws of Reflection and Refraction. Science 334, 333–337 (2011).
2. Huang, L. et al. Dispersionless phase discontinuities for controlling light propagation. Nano Lett. 12, 5750–5755 (2012).
3. Devlin, R. C., Khorasaninejad, M., Chen, W. T., Oh, J. & Capasso, F. Broadband high-efficiency dielectric metasurfaces for the visible spectrum. Proceedings of the National Academy of Sciences, 201611740 (2016).
4. Ni, X., Emani, N. K., Kildishev, A. V., Boltasseva, A. & Shalaev, V. M. Broadband light bending with plasmonic nanoantennas. Science 335, 427–427 (2012).
5. Ding, X. et al. Ultrathin Pancharatnam–Berry Metasurface with Maximal Cross-Polarization Efficiency. Adv. Mater. 27, 1195–1200 (2015).
6. Yin, X., Ye, Z., Rho, J., Wang, Y. & Zhang, X. Photonic spin Hall effect at metasurfaces. Science 339, 1405–1407 (2013).
7. Chen, X. et al. Dual-polarity plasmonic metalens for visible light. Nat. Commun. 3, 1198 (2012).
8. Yu, N. et al. A broadband, background-free quarter-wave plate based on plasmonic metasurfaces. Nano Lett. 12, 6328–6333 (2012).
9. Chen, S., Cai, Y., Li, G., Zhang, S. & Cheah, K. W. Geometric metasurface fork gratings for vortex-beam generation and manipulation. Laser Photonics Rev. (2016).
10. Yue, F. et al. Vector Vortex Beam Generation with a Single Plasmonic Metasurface. ACS Photonics 3, 1558–1563 (2016).
11. Wang, B. et al. Visible-frequency dielectric metasurfaces for multiwavelength achromatic and highly dispersive holograms. Nano Lett. 16, 5235–5240 (2016).
12. Mehmood, M. et al. Visible-Frequency Metasurface for Structuring and Spatially Multiplexing Optical Vortices. Adv. Mater. (2016).
13. Shitrit, N. et al. Spin-optical metamaterial route to spin-controlled photonics. Science 340, 724–726 (2013).
14. Wen, D., Yue, F., Ardron, M. & Chen, X. Multifunctional metasurface lens for imaging and Fourier transform. Sci. Rep. 6, 27628 (2016).
15. Wen, D. et al. Metasurface Device with Helicity-Dependent Functionality. Advanced Optical Materials 4, 321–327 (2016).
16. Huang, L. et al. Helicity dependent directional surface plasmon polariton excitation using a metasurface with interfacial phase discontinuity. Light Sci. Appl. 2, e70 (2013).
17. Leith, E. N. & Upatnieks, J. Reconstructed wavefronts and communication theory. JOSA 52, 1123–1130 (1962).
18. Slinger, C., Cameron, C. & Stanley, M. Computer-generated holography as a generic display technology. IEEE Computer 38, 46–53 (2005).
19. Kelly, D. P. et al. Digital holographic capture and optoelectronic reconstruction for 3D displays. International Journal of Digital Multimedia Broadcasting 2010 (2010).
20. Ni, X., Kildishev, A. V. & Shalaev, V. M. Metasurface holograms for visible light. Nat. Commun. 4 (2013).
21. Huang, L. et al. Three-dimensional optical holography using a plasmonic metasurface. Nat. Commun. 4 (2013).
22. Zheng, G. et al. Metasurface holograms reaching 80% efficiency. Nat. Nanotechnol. 10, 308–312 (2015).
23. Wen, D. et al. Helicity multiplexed broadband metasurface holograms. Nat. Commun. 6 (2015).
24. Yifat, Y. et al. Highly efficient and broadband wide-angle holography using patch-dipole nanoantenna reflectarrays. Nano Lett. 14, 2485–2490 (2014).
25. Huang, K. et al.
## Acknowledgements

This work is supported by the Engineering and Physical Sciences Research Council of the United Kingdom (Grant Ref: EP/P029892/1 and EP/M003175/1). X.Z. and H.L. acknowledge the support from the Chinese Scholarship Council (CSC, Nos 201608310007 and 201606200099). G.Z. acknowledges the National Natural Science Foundation of China (Nos. 11374235, 11574240, 11774273), the Outstanding Youth Funds of Hubei Province (No. 2016CFA034), and the Open Foundation of the State Key Laboratory of Optical Communication Technologies and Networks, Wuhan Research Institute of Posts & Telecommunications (No. OCTN-201605).

## Author information

### Contributions

X.C. and G.Z. initiated the idea. F.Y., D.W., Z.L. designed the sample. F.Y. fabricated the samples. F.Y., X.Z., C.Z. performed the measurements. X.Z., F.Y. and X.C. prepared the manuscript. X.C. supervised the project. F.Y., X.Z., D.W., Z.L., C.Z., H.L., B.D.G., W.W., G.Z. and X.C. discussed and analysed the results.

### Corresponding authors

Correspondence to Guoxing Zheng or Xianzhong Chen.

## Ethics declarations

### Competing Interests

The authors declare that they have no competing interests.

Yue, F., Zang, X., Wen, D. et al. Geometric Phase Generated Optical Illusion. Sci Rep 7, 11440 (2017). https://doi.org/10.1038/s41598-017-11945-z
# 2016.09.06 : Question4_a

asked 2017-08-22 18:07:02 +0100

As I checked with global model checking, the formula holds for all initial states (so it holds for s2), but in the solution (for local model checking) it doesn't hold. I have two other questions on local/global model checking:

1. Should we eliminate the states without successors, or the ones which do not have an infinite path?
2. In this question we can reach state s8 in one step, or state s1 in two steps, and "c | a" holds in both states. As "si |= <>x" is the OR over the successor states of si, then by local model checking (K, s2) |= S1

## 1 Answer

answered 2017-08-22 18:47:04 +0100

I don't see that: the solutions for global and local model checking as computed by http://es.cs.uni-kl.de/tools/teaching/ModelChecking.html both state that the formula holds in state s2. Global model checking computes ν x. (a | c | <>x) = {s0;s1;s2;s3;s4;s5;s6;s7;s8}, and the proof tree for local model checking s2 ⊢ ν x. (a | c | <> x) is as shown below (proof tree not reproduced here). I guess that you read "0:" in the wrong way; it does not say that the proof goal is wrong, it just means that this is the node with number/name 0. To check whether a proof goal in the tree was proved or disproved, you have to look at the symbols: ⊢ for proved, ⊬ for disproved. The proof tree proves that the formula holds in s2, which is consistent with global model checking.

• Should we eliminate the states without successors, or the ones which do not have an infinite path? No, you must not do that; it would make the results for the µ-calculus incorrect. It is the case that for LTL model checking only infinite paths are considered, so for such temporal logics you may safely remove the finite paths. However, that is not the case with the µ-calculus.

• You are right with that, if I understand it correctly. Looking at the local model checking proof tree, we could ignore nodes 2, 3, 4, and could branch from node 5 to the successor state s8, which directly proves a|c. Yes, that would make the local model checking proof shorter, but still correct (since <> already holds if we can prove it for one successor).
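To make the global-model-checking computation concrete, here is a small illustrative C++ sketch of evaluating ν x. (a | c | <>x) as a greatest fixpoint: start with the full state set and repeatedly delete states that neither satisfy a|c nor have a successor still in the set. The four-state Kripke structure below is made up for illustration; it is not the structure from the exam question.

```cpp
#include <iostream>
#include <set>
#include <vector>

// Greatest-fixpoint evaluation of  nu x. (a | c | <> x)  on a tiny,
// made-up Kripke structure (NOT the structure from the exam question).
int main() {
    const int n = 4;
    std::vector<std::vector<int>> succ = {{1}, {2}, {3}, {}}; // s3 has no successor
    std::vector<bool> aOrC = {false, false, true, false};     // only s2 satisfies a|c

    std::set<int> X;                       // start from the full state set
    for (int s = 0; s < n; ++s) X.insert(s);

    bool changed = true;
    while (changed) {                      // iterate until the fixpoint is reached
        changed = false;
        for (int s = 0; s < n; ++s) {
            if (!X.count(s)) continue;
            bool diamond = false;          // <>x : some successor still in X
            for (int t : succ[s]) diamond = diamond || X.count(t) > 0;
            if (!aOrC[s] && !diamond) { X.erase(s); changed = true; }
        }
    }
    for (int s : X) std::cout << "s" << s << " ";  // prints: s0 s1 s2
    std::cout << "\n";
    return 0;
}
```

Note that the deadlock state s3 falls out of the fixpoint on its own; this is consistent with the answer above that states without successors must not be deleted beforehand when checking µ-calculus formulas.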
# What are the intercepts of 2x-11y=4?

Jul 24, 2016

$x = 2$
$y = - \frac{4}{11}$

#### Explanation:

$2 x - 11 y = 4$

The $x$-intercept is where $y = 0$. Putting $y = 0$ in the above equation gives $2 x - 11 \left(0\right) = 4$, or $2 x = 4$, or $x = 2$. (Answer 1)

The $y$-intercept is where $x = 0$. Putting $x = 0$ in the above equation gives $2 \left(0\right) - 11 y = 4$, or $- 11 y = 4$, so $y = - \frac{4}{11}$. (Answer 2)
# What is the prime factorization of 180?

#### Answer:

$180 = 2^2 \times 3^2 \times 5$

#### Explanation:

We have to obtain the factors of 180 which are prime numbers. A prime number is a number with only 2 factors: 1 and the number itself. For example, 5 only has the 2 factors 1 and 5, so $5 = 1 \times 5$. Here are the first few prime numbers: 2, 3, 5, 7, 11, 13, ...

A number which is not prime is a composite number. Composite numbers have more than 2 factors. All composite numbers can be expressed as a product of prime factors. For example, 10 has the factors 1, 2, 5 and 10, and written as a product of prime factors, $10 = 2 \times 5$.

There are several ways to find the prime factors of a number. Here is one. Starting with 180, divide it by the lowest prime number, 2; repeat until it can no longer be divided by 2, then move on to the next prime number, 3, and so on. When the quotient is 1, stop: we have obtained our prime factors.

• $180 \div 2 = 90$
• $90 \div 2 = 45$, which cannot be divided by 2, so move to 3
• $45 \div 3 = 15$
• $15 \div 3 = 5$, which cannot be divided by 3, so move to 5
• $5 \div 5 = 1$, and having reached 1, we are there

Hence $180 = 2 \times 2 \times 3 \times 3 \times 5 = 2^2 \times 3^2 \times 5$.
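The division procedure described above translates directly into a short program. Here is a minimal sketch in C++ (trial division, written for clarity rather than efficiency):

```cpp
#include <iostream>

// Trial division: repeatedly divide n by the smallest number that still
// divides it, printing each prime factor, exactly as in the worked example.
int main() {
    int n = 180;
    for (int p = 2; n > 1; ++p) {
        while (n % p == 0) {       // divide out p as many times as possible
            std::cout << p << " ";
            n /= p;
        }
    }
    std::cout << "\n";             // prints: 2 2 3 3 5
    return 0;
}
```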
How to trim adapters and remove polyA tails from QuantSeq 3' FWD data

Chloe, 4.7 years ago:

I am trying to work out the best way to trim polyA tails of variable length from the 3' end of my data and, if possible, also remove adapters and other contaminating sequence at the same time from the 5' end. My data is 75 bp single-end. Libraries were prepared using the QuantSeq 3' FWD kit and sequenced on an Illumina NextSeq 500. I think that in order to remove the adapters etc. from the 5' end I will just need to trim the first 12 nt (this is what it says to do under FAQ on the QuantSeq website, although it does not specifically say how to remove adapters). The QuantSeq website says to do this:

for sample in runID*R1_001.fastq; do cat ${sample} | bbduk.sh in=stdin.fq out=${sample}_trimmed_clean ref=/data/resources/polyA.fa.gz,/data/resources/truseq.fa.gz k=13 ktrim=r forcetrimleft=11 useshortkmers=t mink=5 qtrim=t trimq=10 minlength=20; done

I downloaded bbduk to try this, but it didn't come with the ref=/data/resources/polyA.fa.gz file and I am at a loss on how to make it myself. Does anyone have any ideas on how to do this in either bbduk or Trimmomatic or something else?

Tags: RNA-Seq, quantseq, trim, poly-A tails

Reply: Brian Bushnell may have included the polyA file in a past iteration of the bbmap suite, but that file is no longer there. Can you try the following instead (replace path_to with a real path on your computer in the command below):

for sample in runID*R1_001.fastq; do cat ${sample} | bbduk.sh in=stdin.fq out=${sample}_trimmed_clean ref=/path_to/bbmap/resources/truseq.fa.gz literal=AAAAAAAAA k=13 ktrim=r forcetrimleft=11 useshortkmers=t mink=5 qtrim=t trimq=10 minlength=20; done

Chloe: Thanks, I'll try that. Will that only remove polyAs of that exact length from the end of the read? The reads are short and generated towards the 3' end, so some of the polyA tails seem to be in the middle of the read.

Reply: With short fragment lengths it's possible (even likely) that you sequenced first the mRNA, then the polyA tail, and then into the adapter. So you would first have to remove the adapter to "expose" the terminal polyA sequence, and then remove that too. For NextSeq sequencing you can also have a polyG tail, corresponding to the two-colour chemistry in which 'G' is absence of signal. So you would want to trim those too.
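For intuition only, here is a minimal C++ sketch of the bare idea of 3'-tail trimming. It is emphatically not what bbduk does (no k-mer matching, no quality trimming, no mismatch tolerance); it only illustrates stripping a trailing homopolymer such as polyA, or the NextSeq polyG artifact:

```cpp
#include <string>

// Trim a trailing homopolymer tail (e.g. 'A' for polyA, 'G' for the
// NextSeq polyG artifact) from the 3' end of a read, requiring a minimum
// run length before trimming. Quality-aware, mismatch-tolerant trimming
// (as bbduk performs) is deliberately out of scope here.
std::string trimTail(const std::string& read, char base, std::size_t minRun) {
    std::size_t end = read.size();
    while (end > 0 && read[end - 1] == base) --end;
    return (read.size() - end >= minRun) ? read.substr(0, end) : read;
}

// Example: trimTail("ACGTACGTAAAAAAAA", 'A', 5) returns "ACGTACGT".
```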
# Use the Ratio Test to determine whether the series is convergent or divergent.$\displaystyle \sum_{n = 1}^{\infty} \frac {( - 2)^n}{n^2}$

## Diverges

### Video Transcript

Let's use the ratio test to determine whether this series converges or diverges. The ratio test involves looking at the term $a_n$; in our case, this is $(-2)^n$ over $n^2$. So we're interested in the limit of the absolute value of $a_{n+1}$ over $a_n$, and notice that the one is being added to $n$, not to $a_n$. Let's evaluate the numerator and denominator: $a_{n+1}$ is $(-2)^{n+1}$ over $(n+1)^2$, just using our formula for $a_n$. Now let's clean this up by writing it as a product; inside the absolute value we can drop the negative signs, and I'll also rewrite $(-2)^{n+1}$ as $2 \cdot 2^n$ in magnitude. The reason for this trick is that we can now cancel the $2^n$ factors, and the whole thing equals the limit, as $n$ goes to infinity, of $2n^2$ over $(n+1)^2$. This limit is of the form infinity over infinity, but you can do some algebra to simplify it, or you can use L'Hôpital's rule. Differentiating top and bottom with respect to $n$, the numerator becomes $4n$ and the denominator becomes $2(n+1)$. Here you'd still have infinity over infinity, so apply L'Hôpital's rule once more: the numerator becomes just the 4 and the denominator becomes the 2. That limit is 2. However, that's bigger than one, so we conclude that this series diverges. Since the limit is bigger than one, the series diverges by the ratio test, and that's our final answer.
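For reference, the computation narrated in the transcript can be written out compactly (this restates the steps above in symbols; nothing new is added):

$$\left|\frac{a_{n+1}}{a_n}\right| = \frac{2^{n+1}}{(n+1)^2}\cdot\frac{n^2}{2^n} = \frac{2n^2}{(n+1)^2} \;\longrightarrow\; 2 > 1 \quad (n \to \infty),$$

so the series diverges by the ratio test.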
# 12.1: Everyday Stoichiometry You are in charge of setting out the lab equipment for a chemistry experiment. If you have twenty students in the lab (and they will be working in teams of two) and the experiment calls for three beakers and two test tubes, how much glassware do you need to set out? Figuring this out involves a type of balanced equation and the sort of calculations you would do for a chemical reaction. ## Everyday Stoichiometry You have learned about chemical equations and the techniques used in order to balance them. Chemists use balanced equations to allow them to manipulate chemical reactions in a quantitative manner. Before we look at a chemical reaction, let's consider the equation for the ideal ham sandwich. Our ham sandwich is composed of 2 slices of ham $$\left( \ce{H} \right)$$, a slice of cheese $$\left( \ce{C} \right)$$, a slice of tomato $$\left( \ce{T} \right)$$, 5 pickles $$\left( \ce{P} \right)$$, and 2 slices of bread $$\left( \ce{B} \right)$$. The equation for our sandwich is shown below. $2 \ce{H} + \ce{C} + \ce{T} + 5 \ce{P} + 2 \ce{B} \rightarrow \ce{H_2CTP_5B_2}$ Now let us suppose that you are having some friends over and need to make five ham sandwiches. How much of each sandwich ingredient do you need? You would take the number of each ingredient required for one sandwich (its coefficient in the above equation) and multiply by five. Using ham and cheese as examples and using a conversion factor, we can write: $5 \ce{H_2CTP_5B_2} \times \frac{2 \: \ce{H}}{1 \ce{H_2CTP_5B_2}} = 10 \: \ce{H}$ $5 \ce{H_2CTP_5B_2} \times \frac{1 \ce{C}}{1 \ce{H_2CTP_5B_2}} = 5 \: \ce{C}$ The conversion factors contain the coefficient of each specific ingredient as the numerator and the formula of one sandwich as the denominator. The result is what you would expect. In order to make five ham sandwiches, you would need 10 slices of ham and 5 slices of cheese. This type of calculation demonstrates the use of stoichiometry. Stoichiometry is the calculation of the amount of substances in a chemical reaction from the balanced equation. The sample problem below is another stoichiometry problem involving ingredients of the ideal ham sandwich. Example 12.1.1 Kim looks in the refrigerator and finds that she has 8 slices of ham. In order to make as many sandwiches as possible, how many pickles does she need? Use the equation above. Solution: Step 1: List the known quantities and plan the problem. • Have 8 ham slices $$\left( \ce{H} \right)$$ • $$2 \: \ce{H} = 5 \: \ce{P}$$ (conversion factor) Unknown • How many pickles $$\left( \ce{P} \right)$$ needed? The coefficients for the two reactants (ingredients) are used to make a conversion factor between ham slices and pickles. Step 2: Solve. $8 \: \ce{H} \times \frac{5 \: \ce{P}}{2 \: \ce{H}} = 20 \: \ce{P}$ Since 5 pickles combine with 2 ham slices in each sandwich, 20 pickles are needed to fully combine with 8 ham slices.
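The conversion-factor arithmetic above translates directly into code. Below is a minimal sketch in C++; the coefficients come from the sandwich equation above, while the function name and structure are purely illustrative:

```cpp
#include <iostream>

// Conversion-factor arithmetic for the sandwich "equation"
// 2 H + C + T + 5 P + 2 B -> H2CTP5B2.
// Given a quantity of one ingredient, scale by the ratio of
// coefficients to get the required quantity of another.
double convert(double have, double coeffHave, double coeffWant) {
    return have * (coeffWant / coeffHave);
}

int main() {
    // Kim has 8 ham slices (H, coefficient 2); pickles (P) have coefficient 5.
    double pickles = convert(8.0, 2.0, 5.0);
    std::cout << "Pickles needed: " << pickles << "\n";  // prints 20
    return 0;
}
```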
# What are the differences between a one-dimensional spectrum and a two-dimensional spectrum?

Mostly, we use the one-dimensional spectrum. But sometimes we use a two-dimensional spectrum; what are the differences between them?

• Check out the one at Keck that I use; the documentation might prove more helpful than anything else. www2.keck.hawaii.edu/inst/hires – LaserYeti Dec 6 '16 at 7:11
• No problem 👍 – LaserYeti Dec 7 '16 at 15:16

## 1 Answer

When you place a spectrograph slit on a source, the spectrum recorded can be thought of as lots of images of the slit at different wavelengths. Ordinarily, you would sum up this spectrum along the direction of the slit images to give you a one-dimensional spectrum. If, however, you leave the image as recorded, then you have a two-dimensional spectrum: intensity as a function of wavelength along one axis and as a function of position along the slit on the other.

Two-dimensional spectra are used when we expect the spectrum to vary with position along the slit. Examples might include a spectrum recorded across a galaxy, or a spectrum of a binary star with the slit placed across both components. An example is shown below (referenced image not reproduced here). The inset image shows a broad-band image of V458 Vul, a classical nova that is surrounded by (not visible) shells of ionised material. The authors of this particular study lined up a spectrograph slit as shown in the inset and then obtained the two two-dimensional spectra shown in the main image. What you have to imagine is that each position on the slit produces a horizontal spectrum at a vertical position that corresponds to its position on the slit. Therefore we see a bright spectrum across the middle corresponding to the central source, but there are then "knots" of emission at particular wavelengths that are some distance away from the central star.

Slitless two-dimensional spectroscopy is also possible using integral field spectrographs. Fibers record spectra over a two-dimensional area. This can also be referred to as two-dimensional spectroscopy.

• It would be nice if you put some graphs to explain more clearly. Thanks a lot. – A.Bbom Dec 5 '16 at 8:02
• I understand now, thank you for spending your precious time on my simple question. Thanks again. – A.Bbom Dec 7 '16 at 13:40
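As a concrete illustration of the summing step described in the answer, here is a minimal C++ sketch (the array layout and names are invented for illustration) that collapses a two-dimensional spectrum, indexed by slit position and wavelength bin, into the usual one-dimensional spectrum:

```cpp
#include <vector>

// A 2-D spectrum: spec2d[row][col] = intensity at slit position `row`
// and wavelength bin `col`. Summing over rows (the slit direction)
// yields the usual 1-D spectrum: intensity vs. wavelength.
std::vector<double> collapseToOneD(const std::vector<std::vector<double>>& spec2d) {
    if (spec2d.empty()) return {};
    std::vector<double> spec1d(spec2d[0].size(), 0.0);
    for (const auto& row : spec2d)
        for (std::size_t col = 0; col < row.size(); ++col)
            spec1d[col] += row[col];
    return spec1d;
}
```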
# OCR Shop XTR/API Frequently Asked Questions

### Installation and set-up

1. Why is the API not installed with my other Vividata software?

Software using Vividata's older installer was installed by default in /usr/vividata on Linux and /opt/Vividata on Solaris. The new installer used for the API always installs in /opt/Vividata by default. You may control where the API installer places the API by setting the environment variable VV_HOME to the desired directory prior to running the installer. If you use some of Vividata's older software, you may already have the VV_HOME environment variable set in your environment. In this case, you should be careful when you install the API and later when you start the ocrxtrdaemon. If VV_HOME is inconsistent, then you will receive an error. To fix it, make sure VV_HOME matches the API installation directory when you start the ocrxtrdaemon.

2. How do I generate log output?

Various levels of log output may be generated on both the client and daemon sides as output to stdout and stderr. To generate log output from your client program, call the function vvLogSetLevel (see an example in vvxtrSample.cc):

void vvLogSetLevel( int logLevel );

The log level should be set anywhere from 1 to 1000, where the higher the number, the more output will be generated:

• 1 - Error messages only
• 250 - Warning messages
• 500 - Informational messages
• 515 - Debug information
• 520 - All information

By default, only error messages are printed. Similarly, the OCR Shop XTR/API daemon (ocrxtrdaemon) can generate log output to stdout and stderr. Control the verbosity of the daemon log output by setting the environment variable VV_DEBUG to the desired log level before starting the ocrxtrdaemon process. For example, set VV_DEBUG to 1000 for the maximum debugging output. For both the client and the daemon, you can save the log output to a file by redirecting it from the command line:

clientProgram >& client.log
ocrxtrdaemon >& daemon.log

3. How do I customize where temp files are placed?

To make sure all temp files are placed in the directory of your choice, set these environment variables:

• TMP
• TMP_DIR
• VV_TMPDIR

to the directory where you wish to store the temp files. Make sure that you do this in the shell where you run the daemon program ocrxtrdaemon.

4. How can I set up the API in my client/server environment?

The OCR Shop XTR/API itself operates as a client/server system, where your application links statically with the provided communicator library so that it may communicate dynamically with the daemon process. The main daemon process handles communication and can create multiple instances of the OCR engine, serving one or more client programs. As a result of this configuration, you have three basic options for using the OCR Shop XTR/API in your own client/server environment:

1. All OCR and related processing takes place on the server side. You could install the OCR daemon and your client program based on our API on one server. All of your client machines would send images to and receive output from this server, through your own software. The server would have multiple XTR/API licenses installed on it, depending on your anticipated OCR needs. The server should be a multiprocessor machine or cluster of machines if you anticipate running many OCR jobs at once.

2. All OCR and related processing takes place on the client side. You could install the OCR daemon on each client machine, along with your client program based on our API.
Each client machine would handle its own OCR requirements. Licenses would be installed on each client machine, or could be installed on one license server (floating licenses). If you install more than one license on each client machine or if you use floating licenses, you could run concurrent OCR processing on each client machine. 3. The OCR daemon runs on the server side and the client program runs on the client side. You could install the OCR daemon on the server, along with all of our licenses. Again, in this case, the server should be a multiprocessor machine or cluster of machines if you anticipate running many OCR jobs at once. The client program would be installed on each client machine, and would communicate with the OCR daemon across your network. Note that scenarios 1 and 3 could require significantly more network traffic in order to transfer the image data back and forth between client and server machines. ### Image input 1. How do I load image data directly from memory into the OCR engine? In order to load image data from memory, you must pass a vvxtrImage structure containing the image data to vvEngAPI::vvReadImageData(const struct vvxtrImage * img). A sample program is available that demonstrates this functionality: vvxtrSample2.cc To compile this sample program, save the source code for vvxtrSample2.cc as a file called "vvxtrSample.cc". Then you can compile it with the same GNUmakefile and supporting files as distributed with the OCR Shop XTR/API, and found in /opt/Vividata/src after your installation. ### Processing and Recognition 1. How can I improve recognition accuracy? Typeset, high-quality printed pages return the best recognition accuracy. The following factors most affect text-recognition accuracy: • Preprocessing Settings • Recognition Parameters • Line Art and Photographic Regions • Document Quality • Scanning Process No single combination of preprocessing settings and recognition parameters always results in the quickest, most accurate recognition job. However, if you use the settings most appropriate to each document, OCR Shop XTR™/API's speed and accuracy will be maximized. OCR Shop XTR™/API may recognize some line-art graphics or areas of photographic regions as text if the artwork is poor and the lines resemble letter strokes. Adjusting the dm_black_threshold parameter may change how the OCR Engine differentiates between photographic regions and text regions. Individual regions can also be manually specified as graphical or textual content. OCR Shop XTR™/API recognizes characters in almost any font in sizes from 5 to 72 points. The engine interprets font size based on the image's dpi, so set the input image dpi carefully in order to guarantee the image's fonts fall within the recognized sizes. Following certain guidelines may improve recognition accuracy: • The print should be as clean and crisp as possible. • Characters should be distinct, separated from each other and not blotched together or overlapping. • The document should be free of handwritten notes, lines and doodles. • Anything that is not a printed character slows recognition, and any character distorted by a mark will be unrecognizable. • Try to avoid highly stylized fonts. For example, OCR Shop XTR™ may not recognize text in the Zapf Chancery® font accurately. • Try to avoid underlined text. Underlining changes the shape of descenders on the letters q, g, y, p, and j. If you have control over the scanning process, you can improve recognition by eliminating skew and background noise. 
Some paper is so thin that the scanner reads text printed on the back side of the scanned page. Put a black piece of paper between the sheet and the lid of the scanner. By eliminating any need for the OCR Engine to deskew an image, recognition processing speed will improve.

2. How can I improve performance?

Here are several items to consider which affect how fast the OCR Shop XTR/API processes your images:

• One of the primary benefits of using the API is that the OCR engine does not need to be shut down and restarted between each page. Make sure that you do not needlessly shut down and restart the OCR engine and OCR session. Unless you want to switch languages, you should be able to recognize an unlimited number of pages within one OCR session.
• The OCR Shop XTR/API works using a daemon to handle the OCR processing. This allows for flexibility in that the daemon and client program do not have to run on the same system. However, if the OCR daemon and the client program do reside on the same filesystem, you should tell the daemon this in your client program so that it can optimize communications. Before starting the OCR session, make this function call to the OCR engine:

vvERROR(xtrEngine->vvSetHint(vvHintLocalFilesystem));

See vvEngAPI::vvSetHint and vvHintLocalFilesystem.
• Preprocessing takes up a large portion of the total OCR processing time, and some of the preprocessing functions are among the most processor-intensive functions. Carefully pick and choose which preprocessing options you have turned on if you are concerned about time. And pay attention to the default settings; the defaults provide the best OCR results under most conditions, not the best balance of fast processing time and results in your particular circumstances. For example, if you know all of your images will be properly rotated, turn off dm_pp_auto_orient. If you know the quality of your images is high, turn off degraded image processing (dm_pp_autosetdegrade). If you are certain you do not want to use the fax filter, make sure it is off and not set to automatic (dm_pp_fax_filter). See dm_pp_remove_halftone and the other preprocessing options.
• One particular preprocessing option that can take up significant time is deskewing. If your images are straight to start with, processing will be faster and you can turn off deskewing. If some or all of your images will be skewed, understand that it will take longer to process them because of the skew. See dm_pp_deskew.

3. What is the correct sequence of actions for processing multiple files?

When processing multiple input files, the sequence of operations is, for example: create an engine (vvEngAPI::vvxtrCreateRemoteEngine), initialize it (vvEngAPI::vvInitInstance), start an OCR session (vvEngAPI::vvStartOCRSes), and start the output document (vvEngAPI::vvStartDoc); then, for each file, load the image (vvEngAPI::vvOpenImageFile or vvEngAPI::vvReadImageData), preprocess (vvEngAPI::vvPreprocess), recognize (vvEngAPI::vvRecognize), spool the output page (vvEngAPI::vvSpoolDoc), and unload the image (vvEngAPI::vvCloseImageFile or vvEngAPI::vvUnloadImage); finally, end the output document (vvEngAPI::vvEndDoc) and the session (vvEngAPI::vvEndOCRSes). This ordering follows from the constraints described below; a code sketch of the sequence appears at the end of this FAQ.

Note on vvxtrCreateRemoteEngine: vvEngAPI::vvxtrCreateRemoteEngine is different from vvEngAPI::vvInitInstance, vvEngAPI::vvRecognize, etc. because it is actually starting a new engine. It tells the ocrxtrdaemon to fork a new process that becomes the new engine. The "action" functions such as vvEngAPI::vvInitInstance and vvEngAPI::vvRecognize work within that engine and cause the engine to change state. You can call vvEngAPI::vvxtrCreateRemoteEngine multiple times to create multiple engines that all run concurrently. Each engine has its own state and is used individually. A call to vvEngAPI::vvStartDoc, for example, is made to one specific engine.

When to set options: Options for preprocessing and recognition do not have to be set just before the vvEngAPI::vvPreprocess and vvEngAPI::vvRecognize calls.
After being set, the values are retained in the engine until the OCR session is ended or vvEngAPI::vvInitValues is called. If you are reading your image data from memory, then you do not need to call vvEngAPI::vvOpenImageFile or vvEngAPI::vvCloseImageFile. You just need to call vvEngAPI::vvReadImageData and vvEngAPI::vvUnloadImage.

Ordering of output actions versus input actions: The sequence of actions used to start, write and close the output document does not have to take place in the exact order above; the output document is flexible with respect to the input document. Recognition must take place before vvEngAPI::vvSpoolDoc may be called, but otherwise vvEngAPI::vvStartDoc can be called any time between vvEngAPI::vvStartOCRSes and vvEngAPI::vvSpoolDoc, and vvEngAPI::vvEndDoc may be called any time between vvEngAPI::vvSpoolDoc and vvEngAPI::vvEndOCRSes. vvEngAPI::vvSpoolDoc may be called multiple times to write multiple output pages.

See this page for a list of the basic sequence of actions, further description of handling data, input and output, and actions. Further down on the same page, the state table for actions describes in detail how the engine state works. Many of the actions can be considered stack-like: after you start an output document with vvEngAPI::vvStartDoc, you must close it with vvEngAPI::vvEndDoc before you can exit the OCR session; after you load image data into the engine with vvEngAPI::vvReadImageData, you must unload it with vvEngAPI::vvUnloadImage before you can load any new image data into the engine. Actions such as vvEngAPI::vvPreprocess and vvEngAPI::vvRecognize are a little different; they may be called multiple times and must obey a certain ordering: vvEngAPI::vvPreprocess must be called before vvEngAPI::vvRecognize, and vvEngAPI::vvRecognize must be called before vvEngAPI::vvSpoolDoc.

### Custom regions

1. How do I tell the engine to not divide the image into regions?

Before calling vvEngAPI::vvPreprocess, make sure you set the value dm_pp_auto_segment to vvNo.

2. How do I create my own regions?

When the engine runs preprocessing on an image during the vvEngAPI::vvPreprocess call, the engine will auto-segment the input image if the preprocessing value dm_pp_auto_segment is set to vvYes, or it will not divide the image into regions if this value is set to vvNo. In either case, you can create user-defined regions. To get a list of the current regions, get the value of dm_region_ids from the engine by using the vvEngAPI::vvGetValue function. To create a new region:

• Set dm_current_region to an unused region id number.
• Set up the properties of the new region using the vvEngAPI::vvSetValue function. The minimal set of values you should specify is:
  • dm_region_uor_string (See What is a UOR?.)
  • dm_region_uor_count
  • dm_region_type
  See the other values starting with "dm_region" for other region properties you can specify.
• Finish setting up this new region in the OCR engine by calling the function vvEngAPI::vvSetRegionProperties.
• Now if you query the engine again for the list of dm_region_ids, you should see your new region listed.

A sample program to demonstrate creation of a new region is available upon request. Note that regions may also be deleted; see the function vvEngAPI::vvRemoveRegion.

3. What is a UOR?

"UOR" stands for "union of rectangles" and is used to describe the bounding box of a region. The value dm_region_uor_string defines the UOR for a region. The UOR for a region may include one or more rectangles.
Use of multiple rectangles permits oddly shaped regions, important for documents where text and images appear closely together, in such a way that one rectangle cannot encompass an entire text region without including part of what should be an image region. To set the UOR for a region: Using the function vvEngAPI::vvSetValue, you must set the current region (dm_current_region), then set the UOR definition (dm_region_uor_string) and the number of rectangles (dm_region_uor_count). Finally, the region information is committed in the OCR engine with a function call to vvEngAPI::vvSetRegionProperties. Formatting the UOR string: The UOR string (dm_region_uor_string) must be formatted correctly for your application to work correctly, and the region count (dm_region_uor_count) must be set accurately. In dm_region_uor_string, coordinates are separated by commas; rectangles are separated by semicolons; the string should contain no whitespace. For example, if you want a region to consist of one rectangle with the coordinates (400,800) by (600,1400), then you would set dm_region_uor_string to 400,800,600,1400 and dm_region_uor_count to 1. In general, the format of the dm_region_uor_string should be: x1,y1,x2,y2;x3,y3,x4,y4;x5,y5,x6,y6 to specify three rectangles defined conceptually: rectangle one: (x1,y1) (x2,y2) rectangle two: (x3,y3) (x4,y4) rectangle three: (x5,y5) (x6,y6) ### File Formats 1. What are image PDFs versus text PDFs? A PDF document consists of any combination of text and bitmap images embedded in a PDF file. It may also contain structural information used for formatting and interactive features such as hyperlinks. Because of the flexibility of the PDF file format, a PDF file may be used as an "image" file or as a "text" file. When used as an "image", PDF files are commonly used as optical character recognition (OCR) input, and when used as a "text" document, PDF files are often used as OCR output. OCR is the process of converting image bitmap data into text data, so it should be clear which type of PDF files are appropriate as OCR input formats and OCR output formats. In general, one may identify three types of PDF documents: • Image-only PDF Image-only PDF documents contain only a bitmap of a document and are produced by encapsulating a bitmap image in a "pdf wrapper." The result is an exact representation of the bitmap image. Image-only PDF file size is large because it consists solely of bitmap image data. Image-only PDF documents contain no searchable text; they may not be indexed and the text may not be copied. This format is used for OCR input, because it contains bitmap data and no text data. • Normal PDF Normal PDFs contain text and embedded graphical elements. The text is scalable and can be searched, copied, and indexed. Normal PDF file size is small, because most data is textual and embedded graphics are usually small. Because text data is stored in normal PDF documents, the clarity is good due to scalable text, and the text is searchable. Normal PDFs must be generated by some sort of editor or an OCR application; they can not be generated directly from a scanner. Normal PDFs are a common OCR output format, because they may contain recognized text and approximate the original bitmap image's formatting and embedded graphics. See the output format vvPDFFormatNormal. Normal PDFs are not normally used for OCR input, because they already contain text data and therefore do not need to be recognized. 
• Image+text PDF

Image+text PDFs are a hybrid between Normal PDFs and Image-only PDFs and are used because they combine the best features of both. Like Image-only PDFs, Image+text PDFs display the entire original bitmap; everything visible in an Image+text PDF is bitmap data. However, Image+text PDFs also contain an invisible layer of text beneath the visible bitmap. Image+text PDF file size is large, because it contains a full-page bitmap. Image+text PDF text may be searched, copied, and indexed. Image+text PDFs are a common OCR output format, because they contain recognized text, allowing them to be searched and indexed, while at the same time they retain the exact appearance of the original scanned bitmap image. See the output format vvPDFFormatText. Image+text PDFs are not used for OCR input, because they already contain text data and therefore do not need to be recognized.

Note that Vividata's OCR applications will accept many Normal PDFs and Image+text PDFs as input. However, this can result in information loss, because the OCR application renders the input PDF file from text into bitmap data, then performs OCR on the bitmap data in order to convert it back to text. As a result, we do not recommend using Vividata's OCR applications to extract text from Normal PDFs and Image+text PDFs, and would instead suggest using a utility such as "pdftotext" to directly pull text data from a text PDF.

2. What is the XDOC format and how do I use it?

The XDOC format is a ScanSoft text output format which provides detailed information about the text, images, and formatting in a recognized document. To use XDOC output, set the output document format to one of the XDOC output types. Several values are associated with XDOC output and can be set to affect the information written to the output. Files included with the API in /opt/Vividata/doc provide specific information about the XDOC format, with enough detail for a user to parse the output.

3. What do the font family abbreviations stand for in the XDOC output?

In XDOC output, the font family is represented by one of the following abbreviations:

• "H" - sans serif, variable pitch, compare to "Helvetica".
• "HC" - sans serif, variable pitch, condensed, compare to "Helvetica condensed" or "Arial Narrow".
• "T" - serif, variable pitch, compare to "Times".
• "TC" - serif, variable pitch, condensed, compare to "Times condensed".
• "C" - serif, fixed pitch, compare to "Courier".

Rather than detecting specific fonts, the OCR engine detects the features of a font, such as whether it has serifs, in order to group it into a font family.
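To make the action sequence from the "Processing and Recognition" section concrete, here is a minimal illustrative sketch in C++. The function names are those documented in this FAQ, but the argument lists, return types, and error handling are simplified assumptions; the shipped vvxtrSample.cc remains the authoritative reference.

```cpp
// Illustrative sketch only: function names are from this FAQ, but exact
// signatures and arguments are assumptions -- see vvxtrSample.cc.
#include <string>
#include <vector>

void ocrFiles(vvEngAPI* xtrEngine, const std::vector<std::string>& files)
{
    // Daemon and client on the same filesystem: enable optimized I/O.
    vvERROR(xtrEngine->vvSetHint(vvHintLocalFilesystem));

    vvERROR(xtrEngine->vvInitInstance());
    vvERROR(xtrEngine->vvStartOCRSes());
    vvERROR(xtrEngine->vvStartDoc());       // output doc: any time before vvSpoolDoc

    for (const std::string& file : files) {
        vvERROR(xtrEngine->vvOpenImageFile(file.c_str())); // or vvReadImageData(&img)
        vvERROR(xtrEngine->vvPreprocess());     // must precede vvRecognize
        vvERROR(xtrEngine->vvRecognize());      // must precede vvSpoolDoc
        vvERROR(xtrEngine->vvSpoolDoc());       // write one output page
        vvERROR(xtrEngine->vvCloseImageFile()); // or vvUnloadImage()
    }

    vvERROR(xtrEngine->vvEndDoc());
    vvERROR(xtrEngine->vvEndOCRSes());
}
```

Note how the stack-like pairing described above is respected: vvStartDoc/vvEndDoc bracket all spooling, and each image load is matched by an unload before the next file is opened.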
# (Ax-B) is invertible. Show (Ax-B)^2 is also invertible.
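A one-line argument, assuming $Ax - B$ denotes a square matrix (write $M = Ax - B$ with inverse $M^{-1}$):

$$ M^2\,(M^{-1})^2 \;=\; M\,(M M^{-1})\,M^{-1} \;=\; M M^{-1} \;=\; I, $$

and similarly $(M^{-1})^2 M^2 = I$, so $(Ax-B)^2$ is invertible with inverse $\bigl((Ax-B)^{-1}\bigr)^2$.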
VIP: Unified Certified Detection and Recovery for Patch Attack with Vision Transformers

Junbo Li, Huan Zhang, Cihang Xie

Abstract "Patch attack, which introduces a perceptible but localized change to the input image, has gained significant momentum in recent years. In this paper, we propose a unified framework to analyze certified patch defense tasks (including both certified detection and certified recovery) using the recently emerged vision transformer. In addition to the existing patch defense setting where only one patch is considered, we provide the very first study on developing certified detection against the *dual patch attack*, in which the attacker is allowed to adversarially manipulate pixels in two different regions. Benefiting from the recent progress in self-supervised vision transformers (i.e., masked autoencoder), our method achieves state-of-the-art performance in both certified detection and certified recovery of adversarial patches. For certified detection, we improve the performance by up to ${\approx}16\%$ on ImageNet without additional training for a single adversarial patch, and for the first time, can also tackle the more challenging dual patch setting. Our method largely *closes the gap* between detection-based certified robustness and clean image accuracy. For certified recovery, our approach improves certified accuracy by ${\approx}2\%$ on ImageNet across all attack sizes, attaining the new state-of-the-art performance."

Related Material [pdf] [supplementary material] [DOI]
I can’t remember if I’ve written this down at some point in the past, but I needed to produce a “framed” version of the minipage environment in $\LaTeX$ to put some figures on an exam in boxes. Here’s some good code to define such a thing. Maybe there’s an easier way (there always is), but this works well. The code defines a new environment called “fmpage” which takes one argument (the same argument that minipage takes). \newsavebox{\fmbox} \newenvironment{fmpage}[1] {\begin{lrbox}{\fmbox}\begin{minipage}{#1}} {\end{minipage}\end{lrbox}\fbox{\usebox{\fmbox}}} Example: \begin{fmpage}{0.5\linewidth} $$P(t) = 180 \cdot (1.1150)^t$$ Note: Population is measured here in millions and t=0 corresponds to Jan. 1, 2000. \caption{Population Model \#1} \label{fig:figure1} \end{fmpage} Produces: (a framed box containing the equation and note above; rendered output not reproduced here)
• Volume 38, Issue 2 June 2017

• Gulmarg, Kashmir, India: Potential Site for Optical Astronomical Observations

The site characterization of Gulmarg, Kashmir, at an altitude of about 2743.2 m above sea level, is based on an analysis of meteorological conditions: cloud cover, temperature, wind speed, wind direction, relative humidity, atmospheric pressure, etc. Analysis and characterization of the meteorological conditions suggest that Gulmarg, Kashmir is a potential site for carrying out photometric as well as spectroscopic observations of celestial objects.

• Metallicity of Sun-like G-stars that have Exoplanets

By considering the physical and orbital characteristics of G-type stars and their exoplanets, we examine the association between stellar mass and metallicity, which follows a power law. A similar relationship is also obtained for single and multiplanetary stellar systems, suggesting that the Sun's present mass is about 1% higher than the value estimated from its metallicity. Further, for all the stellar systems with exoplanets, the association between planetary mass and stellar metallicity is investigated, which suggests that planetary mass is independent of stellar metallicity. Interestingly, in multiplanetary systems, planetary mass depends linearly on the stellar absolute metallicity, which suggests that metal-rich stars produce massive (≥1 Jupiter mass) planets compared to metal-poor stars. This study also suggests that there is a solar-system planetary missing mass of ∼0.8 Jupiter mass. It is argued that probably 80% of the missing mass was accreted onto the Sun and about 20% might have been blown off to the outer solar system (beyond the present Kuiper belt) during the early history of solar system formation. We find that, in single planetary systems, planetary mass is independent of stellar metallicity, with an implication of their non-origin in the host star's protoplanetary disk: probably they are captured from space. A final investigation of the dependence of the orbital distances of planets on the host star's metallicity reveals that inward migration of planets is dominant in single planetary systems, supporting the result that most of the planets in single planetary systems are captured from space.

• Spectroscopic Variability of the Supergiant Star HD14134, B3Ia

Profile variations in the $H\alpha$ and $H\beta$ lines in the spectra of the star HD14134 are investigated using observations carried out in 2013–2014 and 2016 with the 2-m telescope at the Shamakhy Astrophysical Observatory. The absorption and emission components of the $H\alpha$ line are found to disappear on some observational days, and two of the spectrograms exhibit an inverse P-Cyg profile of $H\alpha$. It was revealed that when the $H\alpha$ line disappears, or an inversion of the P-Cyg-type profile is observed in the spectra, the $H\beta$ line is displaced to longer wavelengths, but no synchronous variability was observed in other spectral lines (CII λ 6578.05 Å, λ 6582.88 Å and HeI λ 5875.72 Å) formed in deeper layers of the stellar atmosphere. In addition, the profiles of the $H\alpha$ and $H\beta$ lines have been analysed, as well as their relation to possible expansion, contraction and mixed conditions of the atmosphere of HD14134. We suggest that the observational evidence for the non-stationary atmosphere of HD14134 can be associated in part with the non-spherical stellar wind.
• Classification of Stellar Spectra with Fuzzy Minimum Within-Class Support Vector Machine

Classification is one of the important tasks in astronomy, especially in spectral analysis. The Support Vector Machine (SVM) is a typical classification method which is widely used in spectral classification. Although it performs well in practice, its classification accuracy cannot be greatly improved because of two limitations. One is that it does not take the distribution of the classes into consideration. The other is that it is sensitive to noise. In order to solve these problems, inspired by the maximization of Fisher's Discriminant Analysis (FDA) and the SVM separability constraints, the fuzzy minimum within-class support vector machine (FMWSVM) is proposed in this paper. In FMWSVM, the distribution of the classes is reflected by the within-class scatter in FDA, and a fuzzy membership function is introduced to decrease the influence of noise. Comparative experiments with SVM on SDSS datasets verify the effectiveness of the proposed FMWSVM classifier.

• Ratio of the Core to the Extended Emissions in the Comoving Frame for Blazars

In a two-component jet model, the emissions are the sum of the core and extended emissions: $S^{\mathrm{ob}}=S_{\mathrm{core}}^{\mathrm{ob}}+S_{\mathrm{ext}}^{\mathrm{ob}}$, with the core emission $S_{\mathrm{core}}^{\mathrm{ob}}= f S_{\mathrm{ext}}^{\mathrm{ob}}\delta ^{q}$ being a function of the Doppler factor $\delta$, the extended emission $S_{\mathrm{ext}}^{\mathrm{ob}}$, the jet-type-dependent factor q, and the ratio of the core to the extended emissions in the comoving frame, f. The ratio f is an unobservable but important parameter. Following our previous work, we collected 65 blazars with available Doppler factor $\delta$, superluminal velocity $\beta _{\mathrm{app}}$, and core-dominance parameter R, calculated the ratio f, and performed statistical analyses. We found that the ratio f in BL Lacs is on average larger than that in FSRQs. We suggest that the difference in the ratio f between FSRQs and BL Lacs is one of the possible causes of the differences in other observed properties between them. We also find some significant correlations between $\log f$ and other parameters, including the intrinsic (de-beamed) peak frequency $\log \nu _{\mathrm{p}}^{\mathrm{in}}$, the intrinsic polarization $\log P^{\mathrm{in}}$, and the core-dominance parameter $\log R$, for the whole sample. In addition, we show that the ratio f can be estimated from R.

• Peculiar Emission Line Generation from Ultra-Rapid Quasi-Periodic Oscillations of Exotic Astronomical Objects

The purpose of this article is to alert astronomers, particularly those using spectroscopic surveys, to the fact that exotic astronomical objects (e.g. quasars or active galactic nuclei) that send ultra-rapid quasi-periodic pulses of optical light would generate spectroscopic features that look like emission lines. This gives a simple technique to find quasi-periodic pulses separated by times smaller than a nanosecond. One should look for emission lines that cannot be identified with known spectral lines in spectra. Such signals, generated by slower pulses, could also be found in the far-infrared, millimeter and radio regions, where they could be detected as objects unusually bright in a single narrow-band filter or channel. The outstanding interest of the technique comes from its simplicity, so that it can be used to find ultra-rapid quasi-periodic oscillators in large astronomical surveys.
A very small fraction of objects presently identified as Lyman α emitters that do not have other spectral features to confirm the Lyman α redshift may possibly be quasi-periodic oscillators. However, this is only a hypothesis that needs more observations for confirmation.

• Coronal Magnetic Field Lines and Electrons Associated with Type III–V Radio Bursts in a Solar Flare

We recently investigated some of the hitherto unreported observational characteristics of the low-frequency (85–35 MHz) type III–V bursts from the Sun using radio spectropolarimeter observations. Quantitative estimates of the velocities of the electron streams associated with the above two types of bursts indicate that they are in the range ${\gtrsim }0.13c$–$0.02c$ for the type V bursts, and nearly constant (${\approx }0.4c$) for the type III bursts. We also find that the degree of circular polarization of the type V bursts varies gradually with frequency/heliocentric distance, compared with the relatively steeper variation exhibited by the preceding type III bursts. This implies that the longer duration of the type V bursts at any given frequency (as compared to the preceding type III bursts), which is their defining feature, is due to the combined effect of the lower velocities of the electron streams that generate type V bursts, the spread in the velocity spectrum, and the curvature of the magnetic field lines along which they travel.

• Preface

• Editorial

• AstroSat: From Inception to Realization and Launch

The origin of the idea of the AstroSat multi-wavelength satellite mission, and how it evolved over the next 15 years from a concept to the successful development of instruments giving concrete shape to this mission, is recounted in this article. AstroSat is the outcome of intense deliberations in the Indian astronomy community leading to a consensus for a multi-wavelength observatory with broad spectral coverage over five decades in energy, covering the near-UV, far-UV, soft X-ray and hard X-ray bands. The multi-wavelength observation capability of AstroSat, with a suite of 4 co-aligned instruments and an X-ray sky monitor on a single satellite platform, imparts a unique character to this mission. AstroSat owes its realization to the collaborative efforts of the various ISRO centres, several Indian institutions, and a few institutions abroad, which developed the 5 instruments and various subsystems of the satellite. AstroSat was launched on September 28, 2015 from India into a near-equatorial 650 km circular orbit. The instruments are by and large working as planned, and in the past 14 months more than 200 X-ray and UV sources have been studied with it. The important characteristics of the AstroSat satellite and scientific instruments are highlighted.

• In-orbit Performance of UVIT and First Results

The performance of the Ultra Violet Imaging Telescope (UVIT) on-board AstroSat is reported. The performance in orbit is also compared with estimates made from the calibrations done on the ground. The sensitivity is found to be within ∼15% of the estimates, and the spatial resolution in the NUV is found to exceed significantly the design value of 1.8′′, while it is marginally better in the FUV. Images obtained from UVIT are presented to illustrate the details revealed by the high spatial resolution. The potential of multi-band observations in the ultraviolet with high spatial resolution is illustrated by some results.
• Soft X-ray Focusing Telescope Aboard AstroSat: Design, Characteristics and Performance

The Soft X-ray Telescope (SXT), India's first X-ray telescope based on the principle of grazing incidence, was launched aboard AstroSat and made operational on October 26, 2015. X-rays in the energy band of 0.3–8.0 keV are focused onto a cooled charge-coupled device, thus providing medium-resolution X-ray spectroscopy of cosmic X-ray sources of various types. It is the most sensitive X-ray instrument aboard AstroSat. In its first year of operation, SXT has been used to observe objects ranging from active stars and compact binaries to supernova remnants, active galactic nuclei and clusters of galaxies, in order to study its performance and quantify its characteristics. Here, we present an overview of its design, mechanical hardware, electronics, data modes, observational constraints, pipeline processing and its in-orbit performance, based on preliminary results from its characterization during the performance verification phase.

• Large Area X-Ray Proportional Counter (LAXPC) Instrument on AstroSat and Some Preliminary Results from its Performance in the Orbit

The Large Area X-ray Proportional Counter (LAXPC) instrument on AstroSat is aimed at providing high-time-resolution X-ray observations in the 3–80 keV energy band with moderate energy resolution. To achieve a large collecting area, a cluster of three co-aligned identical LAXPC detectors is used to realize an effective area in excess of ∼6000 cm² at 15 keV. The large detection volume of the LAXPC detectors, filled with xenon gas at ∼2 atmospheres pressure, results in a detection efficiency greater than 50% above 30 keV. In this article, we present salient features of the LAXPC detectors, their testing and characterization in the laboratory prior to launch, and calibration in the orbit. Some preliminary results on the timing and spectral characteristics of a few X-ray binaries and other types of sources are briefly discussed to demonstrate that the LAXPC instrument is performing as planned in the orbit.

• The Cadmium Zinc Telluride Imager on AstroSat

The Cadmium Zinc Telluride Imager (CZTI) is a high-energy, wide-field imaging instrument on AstroSat. CZTI's namesake Cadmium Zinc Telluride detectors cover an energy range from 20 keV to >200 keV, with 11% energy resolution at 60 keV. The coded aperture mask attains an angular resolution of 17′ over a 4.6° × 4.6° (FWHM) field-of-view. CZTI functions as an open detector above 100 keV, continuously sensitive to GRBs and other transients in about 30% of the sky. The pixellated detectors are sensitive to polarization above ∼100 keV, with exciting possibilities for polarization studies of transients and bright persistent sources. In this paper, we provide details of the complete CZTI instrument, detectors, coded aperture mask, mechanical and electronic configuration, as well as data and products.

• Early In-orbit Performance of Scanning Sky Monitor Onboard AstroSat

We report the in-orbit performance of the Scanning Sky Monitor (SSM) onboard AstroSat. The SSM operates in the energy range 2.5 to 10 keV and scans the sky to detect and locate transient X-ray sources. This information on any interesting phenomenon in the X-ray sky as observed by SSM is provided to the astronomical community for follow-up observations. Following the launch of AstroSat on 28th September 2015, SSM was commissioned on October 12th, 2015. The first power ON of the instrument was with the standard X-ray source, Crab, in the field-of-view.
The first-orbit data revealed the basic expected performance of one of the detectors of SSM, SSM1. Following this, in the subsequent orbits, the other detectors were also powered ON and were found to perform in good health. Quick checks of the data from the first few orbits revealed that the instrument performed with the expected angular resolution of 12′ × 2.5° and effective area in the energy range of interest. This paper discusses the instrument aspects along with a few on-board results immediately after power ON.

• Charged Particle Monitor on the AstroSat Mission

The Charged Particle Monitor (CPM) on-board the AstroSat satellite is an instrument designed to detect the flux of charged particles at the satellite location. A Cesium Iodide Thallium (CsI(Tl)) crystal is used with a Kapton window to detect protons with energies greater than 1 MeV. The ground calibration of CPM was done using gamma-rays from radioactive sources and protons from particle accelerators. Based on the ground calibration results, energy depositions above 1 MeV are accepted and particle counts are recorded. It is found that the CPM counts are steady and the signals for the entry into and exit from the South Atlantic Anomaly (SAA) region are generated in a very reliable and stable manner.

• AstroSat – Configuration and Realization

AstroSat is India's first space-based observatory satellite dedicated to astronomy. It has the capability to perform multi-wavelength and simultaneous observations of cosmic bodies in a wide band of wavelengths. This paper briefly summarizes the challenges faced in the configuration of the AstroSat spacecraft, the accommodation and sizing of its critical subsystems, their realization, and the testing of the payloads and the integrated satellite.

• Planning and Scheduling of Payloads of AstroSat During Initial and Normal Phase Observations

On 28th September 2015, India successfully launched its first astronomical space observatory, AstroSat. AstroSat carried five astronomy payloads, namely, (i) the Cadmium Zinc Telluride Imager (CZTI), (ii) the Large Area X-ray Proportional Counter (LAXPC), (iii) the Soft X-ray Telescope (SXT), (iv) the Ultra Violet Imaging Telescope (UVIT) and (v) the Scanning Sky Monitor (SSM), and therefore has the capability to observe celestial objects at multiple wavelengths. Four of the payloads are co-aligned along the positive roll axis of the spacecraft and the remaining one is placed along the positive yaw axis direction. All the payloads are sensitive to bright objects and specifically require avoiding the bright Sun within a safe zone of their bore axes in orbit. Further, there are other operational constraints, both from the spacecraft side and the payload side, which are to be strictly enforced during operations. Even on-orbit spacecraft manoeuvres are constrained about two of the axes in order to avoid the bright Sun within this safe zone, and a special constrained manoeuvre is exercised during manoeuvres. The planning and scheduling of the payloads during the Performance Verification (PV) phase was carried out in semi-autonomous/manual mode, and complete automation is exercised for normal-phase/Guaranteed Time Observation (GuTO) operations. The process is found to be labour-intensive, and several operational software tools, encompassing spacecraft subsystems, on-orbit, domain and environmental constraints, were built and interfaced with the scheduling tool for appropriate decision-making and science scheduling.
The procedural details of the complex scheduling of a multi-wavelength astronomy space observatory, and their working in the PV phase and in the normal/GuTO phases, are presented in this paper.
# Otsu Thresholding : Why 'minimum within class variance' gives the optimum threshold? I was trying to understand the Otsu thresholding algorithm in image processing, and found a useful link for that purpose. I follow the flow of the algorithm, but a fundamental doubt arises: why does the 'minimum within-class variance' ${\sigma_W}^2$ (notation given in the link) give the optimum threshold? Can someone clear this up?
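The short answer is that, for a fixed histogram, the total variance decomposes as $\sigma_T^2 = \sigma_W^2(t) + \sigma_B^2(t)$, where $\sigma_B^2$ is the between-class variance. Since $\sigma_T^2$ does not depend on the threshold $t$, minimizing the within-class variance is exactly the same as maximizing the separation between the two classes. A minimal NumPy sketch of the exhaustive search (assuming an 8-bit grayscale image; the function and variable names are illustrative, not taken from the linked article):

```python
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    """Exhaustive Otsu search over all 8-bit thresholds.

    Maximizing the between-class variance sigma_B^2 is equivalent to
    minimizing the within-class variance sigma_W^2, because their sum
    is the (threshold-independent) total variance of the image.
    """
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()          # gray-level probabilities
    levels = np.arange(256)

    best_t, best_sigma_b = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue                            # one class is empty
        mu0 = (levels[:t] * p[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if sigma_b > best_sigma_b:
            best_t, best_sigma_b = t, sigma_b
    return best_t
```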
## April 5, 2011 ### Homotopy Type Theory, IV #### Posted by Mike Shulman So far in this series we’ve described the correspondence between type theory and homotopy theory and defined some basic notions of homotopy theory in type theory, including equivalences in several ways. We’ve also mentioned a few axioms that we may want to add to intensional type theory, including “function extensionality”, a subobject classifier, and a truncation $\pi_{-1} = \tau_{\le -1}$ into (-1)-types. However, nothing we’ve said so far excludes the possibility that all types are discrete (= 0-types = sets). Intensional type theory plus function extensionality has sound semantics in any locally cartesian closed 1-category; and if the category is regular, then $\tau_{\le -1}$ exists; while if it is a topos, then of course it has a subobject classifier. But today I’ll introduce Voevodsky’s univalence axiom, which is not valid in any 1-categorical semantics—or, indeed, in $n$-categorical semantics for any finite $n$! The univalence axiom is perhaps the easiest and most intuitive way to require that our homotopy type theory is honestly “homotopical”, and it also has other pleasant consequences (including, perhaps surprisingly, function extensionality). In Voevodsky’s original phrasing, the univalence axiom is an augmentation of universes, which are a type-theoretic notion that I haven’t mentioned yet. (If you don’t like universes, that’s okay; carry on reading.) At an informal level, I think type-theoretic universes are not very different from the universes of the Grothendieck sort that you may be more familiar with, and are even more closely related to their categorial analogues. In terms of categorical semantics, a universe is a type $U$, together with a display map $E\to U$ (that is, a type $E(u)$ dependent on $u\in U$). We think of the elements of $U$ as “codes for types”, with $u\in U$ coding for the type $E(u)$. And we require that the type-theoretic operations, such as dependent sums and products, are represented by operations on $U$, so that types of the form $E(u)$ are closed under such operations. Type-theoretically, we usually identify types $E(u)$ with their codes $u\in U$, so that the elements (terms) of $U$ are types. We generally assume every type is contained in some universe, so that we can replace judgments of the form “$A$ is a type” with “$A\in U$” for some universe $U$. In particular, any universe $U=U_0$ must be an element of some universe $U_1$, which must be an element of some universe $U_2$, and so on; we often postulate that every type belongs to one of a specific sequence of universes $U_0 \in U_1\in U_2\in\dots$. Frequently a universe is written as “$Type$” or “$Type_n$”. Thus a universe is a “type of types.” If we regard types as sets, then this is like a set of sets. But if we are category theorists, we know that it’s unnatural to have a set of sets; really we should have a category, or at least a groupoid, of sets. And we should have a 2-groupoid of groupoids, and an $(n+1)$-groupoid of $n$-groupoids, and so on. But the nice thing about $\infty$ is that $\infty = \infty+1$, so that we can expect to have an $\infty$-groupoid of $\infty$-groupoids. Thus, arguably, it’s really in the homotopy context that the notion of universe is “most sensible”. Now it’s all well and good to say we have an $\infty$-groupoid of $\infty$-groupoids, but what is that $\infty$-groupoid?
Its objects are of course $\infty$-groupoids, but we also know what its morphisms should be, and its 2-morphisms, and so on: they should be the equivalences, homotopies, and so on between $\infty$-groupoids. However, the basic type-theoretic notion of universe doesn’t tell us anything about what the path-types of the universe are like; this is what the univalence axiom fixes. (It’s analogous to how plain intensional type theory doesn’t tell us anything about when two functions should be considered equal; hence we need function extensionality.) To make things more precise, let $A$ and $B$ be types in some universe $U$; we want to specify what $Paths_U(A,B)$ should be. And we have a natural candidate, namely the type $Equiv(A,B) \coloneqq \sum_{f\colon A \to B} IsEquiv(f).$ of equivalences from $A$ to $B$. (Remember that $IsEquiv(f)$ is a proposition, so it makes sense to think of points of $Equiv(A,B)$ as functions $A\to B$ with the property of “being an equivalence.”) Moreover, we have a natural map $extern_{U,A,B} \colon Paths_U(A,B) \to Equiv(A,B)$ and the univalence axiom for $U$ simply states that this map is an equivalence for any $A$ and $B$, i.e. $UnivalenceAxiom(U) \coloneqq \prod_{A,B\in U} IsEquiv(extern_{U,A,B}).$ How do we define $extern_{U,A,B}$? Remember from the first post that the “elimination rule” for path-types says: • Given a type $C(x,y,p)$ which may depend on two points $x,y\in X$ and a path $p\in Paths_X(x,y)$ between them, if we have a way to produce an element of $C(x,x,r_x)$ for any $x\in X$, then we can “transport” it along any path $p\in Path_X(x,y)$ to produce a canonical element of $C(x,y,p)$ (and in such a way that if we transport it along $r_x$ then it doesn’t change). We’re going to apply this rule with $X=U$, $x=A$, and $y=B$. We’ll take the type $C(A,B,p)$ to be $Equiv(A,B)$, which depends on $A$ and $B$ and (vacuously) a path between them. Now we do have a way to produce an element of $C(A,A,r_A) = Equiv(A,A)$, namely the identity function $1_A\colon A\to A$ (which is an equivalence; I’ll leave proving that as an exercise). Therefore, we can transport the identity $1_A$ along any path $p\in Paths_U(A,B)$ to produce a canonical element of $Equiv(A,B)$ corresponding to $p$. This defines the map $extern_{U,A,B}$. Let’s think first about the semantics of univalence. First of all, in the form I stated it above, it is an axiom about a particular universe $U$. A universe satisfying the univalence axiom is called a univalent universe. We generally assume that all of the specified universes $U_0 \in U_1\in U_2\in\dots$ are univalent. In the “standard” model in $\infty$-groupoids, we obtain a univalent universe from “the $\infty$-groupoid of all $\infty$-groupoids bounded in size by some inaccessible cardinal $\kappa$”. Thus, if there are arbitrarily large inaccessibles, every type will belong to some univalent universe. (I’m not sure whether inaccessibles are necessary here or whether some weaker assumption would suffice.) I believe this is the only model with enough univalent universes that has been constructed in set theory with anything approaching rigor (by Voevodsky). However, I think most people expect that in more general $(\infty,1)$-categorical semantics, we ought to obtain a univalent universe from any object classifier with strong enough closure properties. In particular, in any (Grothendieck) $(\infty,1)$-topos, there ought to be a univalent universe of all “$\kappa$-compact” types, for any inaccessible $\kappa$. 
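To make the path-induction definition of $extern$ above concrete, here is a minimal formal sketch in Lean-style syntax. It uses a hand-rolled identity type (Lean’s built-in equality is proof-irrelevant, so it cannot model path types), and it only constructs the underlying function of the equivalence; all the names are illustrative rather than taken from any library.

```lean
universe u v

/-- A homotopy-flavored identity type: `Paths a b` plays the role of
    `Paths_X(a, b)` in the post. -/
inductive Paths {A : Type u} (a : A) : A → Type u
  | idpath : Paths a a

/-- Path induction, specialized to non-dependent transport: move an
    inhabitant of `P x` along a path from `x` to `y`. -/
def transport {A : Type u} (P : A → Type v) :
    (x y : A) → Paths x y → P x → P y
  | _, _, .idpath, px => px

/-- The underlying function of `extern`: transport the identity map
    along a path between types; on `idpath` it computes to the
    identity. Univalence then asserts that the induced map from
    `Paths U A B` to equivalences `A ≃ B` is itself an equivalence. -/
def extern {A B : Type u} (p : Paths A B) : A → B :=
  transport (fun X => X) A B p
```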
Moreover, any “full subuniverse” of a univalent universe will again be a univalent universe, as long as it is closed under the type-theoretic operations. In particular, if $U$ is any univalent universe, then its full subuniverse of $n$-types is again univalent for any $n$, and that subuniverse will itself be an $(n+1)$-type. Thus a univalent universe need not itself be of infinite h-level: we can have a univalent groupoid (1-type) of small sets (0-types), a univalent 2-groupoid (2-type) of small groupoids, and so on. At the bottom, we can have a univalent set (0-type) of small truth values ((-1)-types). In particular, a subobject classifier, if one exists, is also a univalent universe; so to get ourselves out of the world of sets we need at least two univalent universes. Similarly, we can have a sequence of univalent universes $U_0 \in U_1\in U_2\in\dots$ in which $U_0$ contains only (-1)-types and is itself a 0-type, $U_1$ contains only 0-types and is itself a 1-type, and so on. Such a stratification of universes by “categorical dimension” as well as by size does seem to match much of mathematical practice—but only outside of homotopy theory. For homotopy theory, we do really want to have $\infty$-types that aren’t $n$-types for any finite $n$ (such as, for instance, the 2-sphere $S^2$), and an infinite sequence of univalent universes doesn’t seem to be enough to guarantee this. I’ll come back to this later. Vladimir explained the origin of the word “univalent” as follows: • a universal fibration is one of which every other fibration is a pullback in a unique way (up to homotopy). • a versal fibration is one of which every other fibration is a pullback in some way, not necessarily unique. • a univalent fibration is one of which every other fibration is a pullback in at most one way (up to homotopy). Thus the univalence axiom asserts that the structural fibration of the universe is univalent. Now, the principal way we use the univalence axiom is as follows: given an equivalence $f\colon A \to B$, we apply the inverse of $extern_{U,A,B}$ to get a path $\hat{f}\in Paths_U(A,B)$, then apply the above-mentioned “elimination rule” for elements of path-types. Putting this together, we get the following consequence of univalence, apparently first formulated by Peter Lumsdaine and Andrej Bauer. • Given a type $C(A,B,f)$ which may depend on two types $A,B$ and an equivalence $f\colon A\to B$ between them, if we have a way to produce an element of $C(A,A,1_A)$ for any type $A$, then we can “transport” it along any equivalence $f\colon A\to B$ to produce a canonical element of $C(A,B,f)$ (and in such a way that if we transport it along $1_A$ then it doesn’t change). The elimination rule for paths is sometimes called path induction, since it is an instance of the general induction principle for inductively defined types. By analogy, we refer to the above consequence of univalence as equivalence induction. Informally, it means that • Given an equivalence $f\colon A\to B$, we can “identify” $B$ with $A$ along $f$. Specifically, in any construction we can perform, or theorem we can prove, starting only from a type $A$, we can obtain another valid construction or theorem by replacing some copies of $A$ with $B$ and any necessary occurrences of $1_A$ by $f$. Behind the scenes, this replacement uses $f$ and its inverse to silently transfer data back and forth between $A$ and $B$ as necessary. Such “identification” is of course a very common thing to do in mathematics, often without even remarking on it!
But usually, if it is justified at all, it is “by abuse of notation” or by trusting the reader to do the translation. The univalence axiom formalizes it, makes it happen “automatically” in the background, and makes it “natural/continuous.” Moreover, equivalence induction actually implies the full univalence axiom. For if we apply equivalence induction to the type $C(A,B,f) \coloneqq Paths_U(A,B)$ and the identity path $r_A\in Paths_U(A,A)$, we obtain a way to make any equivalence $f\colon A\to B$ into a path $\hat{f}\in Paths_U(A,B)$. The final condition that transporting along the identity equivalence leaves something unchanged (together with the same property for the identity path) then makes this construction into an inverse of $extern_{U,A,B}$. I’ve checked this in Coq. But it’s not really surprising, because equivalence induction gives the type $Equiv(A,B)$ the “same inductive/universal property” as $Paths_U(A,B)$. (But I don’t know how to state equivalence induction in a way that is evidently a proposition.) Note that equivalence induction makes no reference to a particular universe containing $A$ and $B$, except that the type $C(A,B,f)$ is required to be defined “parametrically” for all $A,B$ in the universe. In particular, this implies that if $U_1$ is a univalent universe and $U_2$ is a “larger” universe, in the sense that every type in $U_1$ also belongs to $U_2$, then $extern_{U_2,A,B}$ is also an equivalence for any $A,B\in U_1$, whether or not $U_2$ itself is univalent. (It can apparently still be the case that a “smaller” non-univalent universe is contained in a “larger” univalent one, however.) So univalence is almost a property of types (or pairs of types) rather than a property of universes. Furthermore, we can make sense of equivalence induction even if there are no universes, if instead we have some sort of “polymorphism” allowing us to make sense of “defining a type parametrically over other types”. (Thanks to Peter Lumsdaine for correcting some errors in the original version of this paragraph; see his comment and ensuing discussion below.) Univalence also implies other useful things, like function extensionality and (maybe, with some help) quotients, but let’s save those for another day. Posted at April 5, 2011 5:30 AM UTC ## 17 Comments ### Re: Homotopy Type Theory, IV another great posting, Mike! I especially appreciate the new observation: “… So contrary to its original appearance, univalence can be considered to be a property of types (or, perhaps, pairs of types) rather than a property of universes. Furthermore, we can make sense of equivalence induction even if there are no universes …” As you say, understanding univalence in terms of the associated “equivalence induction principle” corresponds to the usual mathematical practice of “identifying” equivalent structures – and makes it rigorous rather than just careful sloppiness. It’s (i) admissible by a property of type theory, namely being “homotopy invariant” in the sense that anything expressible/definable/provable is stable under a transformation along an equivalence (and this is *proved* by VV’s model in Kan complexes, which shows that adding UA is formally sound); and (ii) it expresses a commitment not to add any new constructions, terms, axioms, that would break that property (not that anyone would contemplate doing such a thing).
Already at the level of 1-types, it has a pleasant consonance with mathematical practice not shared by conventional foundations, since it allows us to treat isomorphic structures as “identical” – something that has seemed puzzling from the point of view of set-theoretic foundations, since the language of set-theory doesn’t have the invariance property (i). BTW: you didn’t mention how to actually get a univalent universe from “the $\infty$-groupoid of all $\infty$-groupoids (bounded in size)”. Vladimir’s construction involves the theory of minimal fibrations, well-ordering of the fibers, and other technology. Andre Joyal suggested an alternate construction at Oberwolfach – perhaps Nicola Gambino would be willing to post it to the HoTT site? Posted by: Steve Awodey on April 6, 2011 6:26 PM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV “Already at the level of 1-types, it has a pleasant consonance with mathematical practice not shared by conventional foundations, since it allows us to treat isomorphic structures as “identical” …” better said: at the 1-level of 0-types. Posted by: Steve Awodey on April 6, 2011 7:28 PM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV (ii) it expresses a commitment not to add any new constructions, terms, axioms, that would break that property (not that anyone would contemplate doing such a thing). Plenty of people have contemplated such things, though. And it dismays me a bit that some of them would have to be rejected, at least prima facie. Posted by: Dan Doel on April 7, 2011 6:45 AM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV it has a pleasant consonance with mathematical practice not shared by conventional foundations, since it allows us to treat isomorphic structures as “identical” Yes, that’s a nice point; the univalence axiom takes the idea of categorical/structural set theory one step further. In structural set theory, we can’t distinguish between isomorphic structures (your invariance property (i)), but we still have to explicitly insert isomorphisms to pass between them. (For other readers: I didn’t mention the fact that univalence, which is directly about equivalences between types, also implies a corresponding fact for structured types, but it’s true.) Univalent type theory takes the lesson from structural set theory (and, arguably, most of 20th century mathematics) that in general, we need to remember “an isomorphism” rather than the mere fact of “being isomorphic”. But then it says that we can treat isomorphisms almost like equalities, coming even closer to the structuralist ideal. Let me take this opportunity to mention, for new readers, another philosophical/terminological issue, of which I was reminded by the fact that you put “identical” in quotes. Where I come from, it’s important to distinguish between equality and isomorphism/equivalence. And while the univalence axiom collapses Martin-Lof’s “identity types” with “equivalences,” it seems to me as though it does it by making the “identity types” behave like spaces of equivalences, rather than the reverse. So I prefer to use the term “path types” and say that “univalence reinterprets the path types of universes to refer to equivalence rather than to equality”. 
However, a number of people (particularly type theorists) seem to like to say instead that “univalence makes equivalent structures equal”—and I guess that if you come from a world where the behavior of intensional identity types is what you are used to thinking of as “equality”, that makes sense. I don’t know whether it’ll ever sound right to me, though. (-: “you didn’t mention how to actually get a univalent universe” That’s right; because I haven’t understood the details! I’d love to see it explained, and I’d especially like to free it from things like minimal fibrations, which I expect would be necessary in order to extend it to object-classifiers in more general categories. Posted by: Mike Shulman on April 7, 2011 7:15 AM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV Very interesting suggestion that equivalence-induction allows one to consider univalence without universes! Morally I agree something like that should hopefully be the case, but I’m not quite convinced it’s true as stated, unless I’m missing something. The problem is that the “eliminator” for equivalence-induction still does refer to a universe U: the type C(A,B,f) one eliminates into must vary over types A, B : U, and equivalences between them. Even though we’re not explicitly referring to paths in U, U appears in the dependency of the target type, and so its path spaces are implicitly involved. In particular, I think one can have types A, B which belong to both a univalent universe U₁ and a non-univalent one U₂. As we discussed at one point, one should be able to get this within the groupoid model, given a notion of “small” (e.g. <κ): take U₁ to be the groupoid of small groupoids, and U₂ to be the discrete groupoid on the set of small groupoids. So here, extern(U₂,A,B) may not be an equivalence even if extern(U₁,A,B) is? I guess the minimum one needs to state equivalence-induction is something like (a) the ability to have types dependent over type-variables, i.e. some form of polymorphism, whether using a universe or otherwise; and (b) a type Equiv(A,B), for any types A, B? Posted by: Peter LeFanu Lumsdaine on April 7, 2011 3:30 AM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV Thanks for keeping me honest! It seems to me that you raise two separate issues: (1) what we need from our type theory in order to state equivalence induction without a universe, and (2) if we do have a univalent universe, whether the types in that universe can also belong to another universe and fail to be univalent there. Re: (1), I agree that the issue needs to be addressed. But I’m not convinced that we need something beyond plain universe-free dependent type theory, as long as the latter includes judgments of the form “$A$ is a type” — which it seems as though it had better do, as soon as we have any type constructors. Can’t we then state equivalence-induction as a rule like this? $\frac{\begin{array}{c} X\colon Type,\; Y\colon Type,\; w\colon Equiv(X,Y)\vdash C(X,Y,w)\colon Type \qquad X\colon Type \vdash d(X) \colon C(X,X,1_X) \\ \vdash A\colon Type \qquad \vdash B\colon Type \qquad \vdash f\colon Equiv(A,B) \end{array}}{\vdash J(d;A,B,f)\colon C(A,B,f)}$ I could easily be missing something, though; please correct me if so! Re: (2), you’re right that I got overenthusiastic.
I think what I should have said was “if $U_1$ is any univalent universe, and $U_2$ is a larger universe than $U_1$, in the sense that all types in $U_1$ are also in $U_2$, then $extern_{U_2,A,B}$ is also an equivalence for any $A,B\in U_1$.” The point being that in order to apply equivalence induction, we need the type $Paths_{U_2}(A,B)$ to be defined parametrically for all $A,B\in U_1$, hence we need all types in $U_1$ to be in $U_2$. Semantically, this means we have to have a map $U_1\to U_2$, which is lacking in your example. So although it is true, externally, in that example, that “every type in $U_1$ is also in $U_2$,” it’s not true type-theoretically, i.e. we don’t have an inference rule like $A\colon U_1 \vdash A\colon U_2$. Does that seem right? Posted by: Mike Shulman on April 7, 2011 4:05 AM | Permalink | PGP Sig | Reply to this ### Re: Homotopy Type Theory, IV Re (1), we need afaics (as your rule shows) not just judgements “A type” — which one has in all variants of M-L Type Theory — but type variables that can occur in contexts, which one doesn’t normally have in M-L TT, except via universes. Allowing type variables as a primitive notion — not via a universe — I think of as being the key idea of polymorphism; but I’m not at all familiar with polymorphic type theories, so I don’t know how much one could do with this in that setting… Posted by: Peter LeFanu Lumsdaine on April 7, 2011 4:59 AM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV Polymorphic type theory is wonderful. At the small price of trading powersets for polymorphic indexing, everything you could possibly want to work, works. By “everything”, I mean that polymorphic types form a small complete category, which is not degenerate. Furthermore, polymorphic types have a natural external notion of equality via parametricity, which works ridiculously well. Unfortunately, internalizing parametric equality into polymorphic type theory has been a very challenging problem. I think that homotopy-style ideas would help in making it work. The reason for this hunch is that the usual way of constructing models of polymorphic types is as partial equivalence relations on some universal domain, and I suspect a move to groupoids would be helpful in figuring out what to do. Posted by: Neel Krishnaswami on April 7, 2011 8:15 AM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV Re (2), I think I agree. In a syntax where the “elements” fibration $E$ is explicit, I guess we want to assume rules something like $x \colon U_1 \vdash coerce(x) \colon U_2 \qquad x \colon U_1 \vdash E_2(coerce(x)) = E_1(x)\ \mathsf{type}$ and then I think I agree with your original statement. Posted by: Peter LeFanu Lumsdaine on April 7, 2011 5:07 AM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV Allowing type variables as a primitive notion — not via a universe — I think of as being the key idea of polymorphism That makes sense, I guess. So I guess you are right that “some form of polymorphism” is necessary in order to state equivalence-induction. But I think it is a fairly weak form of polymorphism, in that we don’t need to be able to quantify over type-variables, we just need to allow them in contexts. Semantically, I think one ought to be able to make sense of it in any categorical model. Posted by: Mike Shulman on April 7, 2011 5:08 AM | Permalink | Reply to this ### Re: Homotopy Type Theory, IV I’ve edited the post slightly, in an attempt to avoid misleading new readers. Thanks again!
I almost feel now as though the polymorphic approach would be better than the one that uses universes. Although I suppose that if you have polymorphism and universes, then the polymorphism form of equivalence-induction wouldn’t imply univalence for the universes, since the path-types of a universe are not defined parametrically for all types, only for those types belonging to the universe. I think there’s something going on there that I don’t understand yet. Posted by: Mike Shulman on April 7, 2011 5:25 AM | Permalink | Reply to this ### Perspective Yesterday Bas Spitters was so kind as to give me a demonstration of Coq in action, learning and then doing proofs in homotopy theory. Seeing this is quite thrilling. As if a character of a novel one has been reading suddenly walks into the room and stands in front of you, fully embodied. While I understand that the HoTT project is only just getting off the ground and needs more work, I’d be interested in getting a better idea of what one can reasonably expect to be eventually possible here. First of all I gather that the immediate next major goal is to refine the axioms such as to give a complete axiomatization of $\infty$-toposes. I suppose. I am not sure how explicitly and widely this goal has been stated. Seems to me to be the central goal to go for. Bas points out that Steve Awodey mentions it at the very end of his notes Type theory and homotopy. What then? Is it conceivable that one can add further properties to the $\infty$-toposes thus axiomatized? Can we expect to “have a homotopy type theory of locally $\infty$-connected $\infty$-toposes”? Or some such statement? You can probably guess what I am getting at: since I am claiming that plenty of differential geometry/differential cohomology theory can be axiomatized formally in any cohesive $\infty$-topos, I am wondering if the axioms of cohesive $\infty$-toposes in turn can be founded entirely formally in type theory. That would seem to be a striking consequence of the striking fact that their axioms are so simple! (We can ignore the extra axioms “pieces have points” and “discrete objects are concrete” for the moment as just extra icing. The core axiom is just the extra left and right adjoint to the terminal geometric morphism. That alone supports the core theory.) If so, that would seem to immediately give a foundation of structures like Chern-Weil theory and Chern-Simons theory (and their higher generalization) in type theory, given that I am claiming that both are axiomatized in any cohesive $\infty$-topos. (Maybe I am not saying the bits about “foundations” and “axiomatized” in the right words, hope you can nevertheless see what I have in mind). Posted by: Urs Schreiber on April 8, 2011 1:03 PM | Permalink | Reply to this ### Re: Perspective Could this be related to Louis Crane’s work on model categories and physics? I don’t claim to understand the work in any detail, but heard him talk at QPL. Bas Posted by: Bas Spitters on April 10, 2011 9:06 PM | Permalink | Reply to this ### Re: Perspective Could this be related to Louis Crane’s work on model categories and physics? I do not see any indication that it would.
Also I don’t understand what the article you point to has to do with model categories (if that is what you were thinking of), except that the word appears at one point. Do you? Maybe let’s not get into physics here. What I said above lives in pure math: the claim is that in an $\infty$-sheaf $\infty$-topos that is equipped with an extra left adjoint and an extra right adjoint to its terminal geometric morphism, such that the extra left adjoint preserves finite products, the following notions, for instance, have a simple axiomatization: de Rham cohomology, ordinary differential cohomology, Chern-Weil theory, Chern-Simons functionals. (This is discussed here). This is in the sense that: there are simple constructions on objects in such an $\infty$-topos just involving $\infty$-(co)limits and the four adjoint $\infty$-functors assumed above, such that 1. these constructions have the general abstract properties of the notions named above, and 2. when realized in a specific model of the axioms (discussed here) these reproduce the traditional notions that go by these names. While working on this, I was struck by how much indeed does follow from just the assumption of an $\infty$-topos with two such extra adjoint functors. Originally Lawvere had suggested that this is so for ordinary toposes. But it turns out that the same simple axiom put on an $\infty$-topos implies a pretty comprehensive supply of advanced differential-geometric notions. To me, this seems to provide a foundation of these “advanced differential-geometric notions” quite deeply inside $\infty$-topos theory. Notably all the notions can be expressed with nothing else but the assumption that there is an $\infty$-topos with these two extra adjoint functors. So if there were a homotopy-type-theoretic way to axiomatize not just “$\infty$-topos” but “cohesive $\infty$-topos” that would seem to imply in turn a homotopy-type-theoretic way to axiomatize rather more complex-sounding notions such as “Chern-Weil theory”. Therefore my question: is it conceivable that HoTT can eventually axiomatize not just the notion “$\infty$-topos” (as people seem to expect) but moreover notions of $\infty$-toposes with extra structure and properties, such as “$\infty$-topos that is local, locally $\infty$-connected and $\infty$-connected over another $\infty$-topos”? Posted by: Urs Schreiber on April 11, 2011 10:17 AM | Permalink | Reply to this ### Re: Perspective It seems that I used the wrong link. This is Crane’s work on Model Categories and Quantum Gravity. I merely wanted to suggest the possibility that this may be another possible application of the internal language, not that it is related to the application that you mention. The paper seems a bit informal, so it is hard to judge what is going on precisely. Posted by: Bas Spitters on April 11, 2011 12:43 PM | Permalink | Reply to this ### Re: Perspective Hi Bas, thanks for the link, now I see what you mean. I could comment on the content of the link you give, but now it occurs to me that you just meant to point to some possible occurrence of homotopy theory in physics. Is that right? In that case I’d stress that there are plenty of well-established and well-studied occurrences of homotopy theory in physics (fundamental physics, that is), hence of model categories and of $\infty$-categories and $\infty$-toposes, albeit the latter are of course made explicit by fewer authors than use them implicitly. Some such occurrences are listed here.
But I thought from the point of view of homotopy type theory what would be interesting are situations where not only some homotopy theoretic concepts appear, but where a situation can entirely be formally axiomatized using the internal language of a certain class of $\infty$-toposes. For in combination this would seem to in principle make such situations accessible to computer-aided proof. So to come back to my questions, starting with the simplest one: it seems that there should be a way to massage the present axioms of HoTT such as to give an axiomatization of the notion of $\infty$-toposes. Is it conceivable that it is possible to massage the axioms further such as to encode $\infty$-toposes over another $\infty$-topos? Posted by: Urs Schreiber on April 11, 2011 9:11 PM | Permalink | Reply to this ### Re: Perspective I’m glad you also had that reaction! Plus, what a great lead-in to the next post. Let’s discuss your second question over there. Posted by: Mike Shulman on April 12, 2011 5:20 AM | Permalink | Reply to this
# Unity in 12.10 comes up behind other windows I've just upgraded from 12.04 to 12.10. For the most part, everything works fine, but I have a few small problems with Unity, or maybe Compiz. When I hit the Super key, or click on the dash launcher, the dash sometimes comes up behind the other windows on the screen. As you can imagine, this makes it somewhat tricky to use. Once it has started coming up behind, no amount of trying again will convince it to come back to the front. Possibly related, the Alt-Tab switcher doesn't show either. It may be that there isn't one, or maybe that's behind also? Alt-Tab does switch the windows, but there's no visual indicator. When I hit Super-W, the windows do all do the zoom thing, but it's slow and juddery where it used to be smooth in 12.04. I'm using the standard "radeon" driver, same as before, with a triple-head monitor setup (and that works fine). I've not tried the proprietary drivers as I've previously found the multi-monitor support much weaker than with the default driver, but maybe that's the way to go now? Videos play fine. Even WebGL seems OK. Do others see this problem? Is it a bug? Or have I just got some left-over config from 12.04 in the way? - TL;DR: disable and re-enable the Unity plugin in CCSM. Walkthrough • (optional) if you don't have the CompizConfig Settings Manager, install it with apt-get update && apt-get install compizconfig-settings-manager • Launch ccsm (either on the command line or through the dash) • Then use the search box and type "unity" • click on the plugin • on the left you can uncheck the box to disable Unity • re-enable Unity by re-checking the box. Note: This answer was in a comment on the lengthy unrelated accepted answer. Note2: This answer works for the current session but won't definitively fix the issue - I believe I have found an answer myself. I'm not sure exactly what solved it, so I'm going to list everything I tried, including what didn't appear to work. First, I tried the fglrx driver. I installed this using the tab in "Software Sources". When I rebooted, the first thing I noticed was that the Ubuntu splash screen only came up on two of my three monitors. Then the login screen came up similarly on only two monitors. Interestingly, the monitors were not "mirrored", as with the xserver-xorg-video-ati driver, but only one screen had the login prompt, and the others just Ubuntu logos. Having logged in I tried all sorts of ways of configuring the display using both the Ubuntu display controls, and the ATI Catalyst Control Centre, but no amount of fiddling could get all three monitors working. It kept claiming that either the monitor wouldn't come on, or that there wasn't enough memory, despite the fact I've been using it triple-head for years. On the plus side, the 3D effects did seem much snappier, and the Unity dash and HUD did come up on top. Some might consider this a fix, but I was still one monitor down. Second, I tried the fglrx-updates driver, also selected in "Software Sources". I observed no apparent differences from the straight fglrx driver. Finally, I restored the xserver-xorg-video-ati driver, and tried playing around with the Compiz Control Centre (actually, I'd tried this before, but failed to fix anything). After much futzing with Compiz plugins that didn't fix the problem, I eventually disabled the Unity Compiz plugin. This made all the Unity UI elements vanish.
For a while I thought I'd stitched myself up, because none of the windows would respond, but then they came back to life and another application-switcher got enabled, although there was still no obvious way to launch new apps. I then re-enabled the Unity plugin, and everything came back the way it had been, but with the Dash in front of the other windows where before it had been behind. So far, fingers crossed, the problem has not recurred, so I'm considering it solved. :) - Tried to disable and re-enable the unity plugin in compiz settings manager and it worked! I'm using the open-source ATI drivers. –  igorp1024 Nov 15 '12 at 17:38 Bad news, after a few days the problem came back again. :( This time I also tried Alt-F2 and ran "unity --replace" and that seems to fix the problem also, but again, only temporarily. –  ams Nov 16 '12 at 9:38 I mean disable and then re-enable it in CCSM (CompizConfig Settings Manager) –  igorp1024 Nov 19 '12 at 15:54 This has nothing to do with graphics drivers. I'm on nvidia and have the same issue. @igorp1024 had the solution for me. –  vaab Apr 9 '13 at 8:37
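Pulling the thread's temporary workarounds into one place (a sketch; the package name and commands are the ones quoted above, with sudo added for the installation step, and the plugin toggle itself still happens in the CCSM GUI):

```bash
# Install the CompizConfig Settings Manager if it is not present yet
sudo apt-get update && sudo apt-get install compizconfig-settings-manager

# Launch CCSM, search for "unity", then untick and re-tick the plugin
ccsm &

# Alternative per-session fix from the comments: reload Unity in place
unity --replace &
```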
# MResetZ operation Namespace: Microsoft.Quantum.Measurement Package: Microsoft.Quantum.QSharp.Core Measures a single qubit in the Z basis, and resets it to a fixed initial state following the measurement. operation MResetZ (target : Qubit) : Result ## Description Performs a single-qubit measurement in the $Z$-basis, and ensures that the qubit is returned to $\ket{0}$ following the measurement. ## Input ### target : Qubit A single qubit to be measured. ## Output : Result The result of measuring target in the Pauli $Z$ basis.
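A minimal usage sketch (the surrounding namespace and operation `SampleRandomBit` are illustrative only; `H` comes from the Microsoft.Quantum.Intrinsic namespace). MResetZ is convenient whenever a qubit must be measured and then released, since it removes the need for a separate Reset call:

```qsharp
namespace Demo {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    /// Prepare |+⟩ and measure it in the Z basis. MResetZ returns the
    /// qubit to |0⟩, so it can be released without an explicit Reset.
    operation SampleRandomBit() : Result {
        use q = Qubit();
        H(q);
        return MResetZ(q);
    }
}
```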
# Nuclear Science and Techniques 《核技术》(英文版) ISSN 1001-8042 CN 31-1559/TL     2019 Impact factor 1.556 Nuclear Science and Techniques ›› 2018, Vol. 29 ›› Issue (3): 42 • LOW ENERGY ACCELERATOR, RAY AND APPLICATIONS • ### Beam dynamics, RF measurement, and commissioning of a CW heavy ion IH-DTL Heng Du 1,2 • You-Jin Yuan 1 • Zhong-Shan Li 1,2 • Zhi-Jun Wang 1 • Peng Jin 1 • Xiao-Ni Li 1,2 • Guo-Zhu Cai 1 • Wen-Wen Ge 1,2 • Guo-Feng Qu 1,2 • Yuan He 1 • Jia-Wen Xia 1 • Jian-Cheng Yang 1 • Xue-Jun Yin 1 1. 1 Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China 2 University of Chinese Academy of Sciences, Beijing 100049, China • Contact: Xue-Jun Yin E-mail:[email protected] • Supported by: This work was supported by the National Natural Science Foundation of China (Nos. 11375243 and 11405237) and the Guangdong Innovative and Entrepreneurial Research Team Program (No. 2016 ZT06G373). Abstract: A 53.667 MHz CW (continuous-wave) heavy-ion IH-DTL has been designed for the SSC-LINAC injector of HIRFL-CSR (Heavy Ions Research Facility at Lanzhou-Cooling Storage Ring). It accelerates ions with a maximum mass-to-charge ratio of 7.0 from 143 to 295 keV/u. A low-power RF measurement of the IH-DTL1 was performed to investigate the RF performance and the quality of the electric field distribution on the beam axis. The measured Q0 value and shunt impedance are 10,400 and 198 MΩ/m, respectively. The electric field distributions on and around the beam axis were evaluated and compared with the design values. By a new approach, the dipole field component was also estimated. A beam dynamics simulation using the measured field distribution is presented in this paper. Based on the dynamics analysis in both the transverse and longitudinal phase spaces, the field distribution meets the design requirement. Finally, RF conditioning and the first beam commissioning of the IH-DTL1 were completed. The beam test results agree well with the simulation results; moreover, the variable output beam energy of this separated-function DTL was verified.
# Polynomials Let $$f(x) = x^4-3x^2 + 2$$ and $$g(x) = 2x^4 - 6x^2 + 2x -1$$. What is the degree of $$f(x) \cdot g(x)$$? Apr 18, 2021
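No answer appears in the thread, so for completeness here is the standard reasoning (over the reals, where leading coefficients cannot multiply to zero): the leading term of the product is the product of the leading terms, $$x^4 \cdot 2x^4 = 2x^8,$$ so $$\deg\bigl(f(x)\cdot g(x)\bigr) = \deg f + \deg g = 4 + 4 = 8.$$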
Volume 398 - The European Physical Society Conference on High Energy Physics (EPS-HEP2021) - T04: Neutrino Physics JUNO potential in non-oscillation physics A.S. Göttel*  on behalf of the JUNO collaboration Full text: pdf Pre-published on: February 28, 2022 Published on: May 12, 2022 Abstract The Jiangmen Underground Neutrino Observatory (JUNO) is a next-generation liquid scintillator experiment being built in the Guangdong province in China. JUNO's target mass of 20 kton will be contained in a 35.4 m acrylic vessel, itself submerged in a water pool, under about 700 m of granite overburden. Surrounding the acrylic vessel are 17612 20'' PMTs and 25600 3'' PMTs. The main goal of JUNO, whose construction is scheduled for completion in 2022, is a 3-4$\sigma$ determination of the neutrino mass ordering (MO) using reactor neutrinos within six years, as well as a precise measurement of $\theta_{12}$, $\Delta m_{21}^2$, and $\Delta m_{31}^2$. JUNO's large target mass, low background, and dual calorimetry, leading to an excellent energy resolution and low threshold, allow for a rich physics program with many applications in neutrino physics. The large target mass will allow for high-statistics solar-, geo-, and atmospheric-neutrino measurements. JUNO will also be able to measure neutrinos from galactic core-collapse supernovae, detecting about 10,000 events for a supernova at 10 kpc, and achieve a 3$\sigma$ discovery of the diffuse supernova neutrino background in ten years. It can also probe exotic physics, e.g. proton decay, indirect dark matter signatures, and Lorentz invariance violations. This paper covers this extensive range of non-oscillation topics on which JUNO will be able to shed light. DOI: https://doi.org/10.22323/1.398.0229
# A basis for a tensor product space where the tensor elements are linearly dependent Say I have a space $V^{(1)}$ with basis $\{a_i \}$ and $V^{(2)}$ with basis $\{b_j\}$ (with dimensions $d_1$, $d_2$ respectively). Clearly the vectors $\{a_i\otimes b_j\}$ are a basis for $V^{(1)}\otimes V^{(2)}$. Let's call this a “product basis”. Note that if you construct the sets given by the first and second tensor factors of this basis, you trivially recover $\{a_i\}$ and $\{b_j\}$, and these are linearly independent sets by construction. Is there a basis of $V^{(1)}\otimes V^{(2)}$ one can construct whereby the sets you get when you take the first (second) tensor factors of the vectors are linearly dependent? I.e., a set of vectors $\{s^{(1)}_i\otimes s^{(2)}_i \}$ with $i=1,\dots, d_1d_2$, where $\{s^{(1)}_i\}$ is a set of linearly dependent vectors, and similarly for $\{s^{(2)}_i\}$? For example, take $V^{(1)} = V^{(2)} = \Bbb R^2$, and let $\{e_1,e_2\}$ denote the canonical basis. We can take the basis $$\{ e_1 \otimes e_1, (e_{1} + e_2) \otimes e_1, (e_1 - e_2)\otimes e_2, e_2 \otimes e_2 \}$$ By your definition, $\{s_i^{(1)}\}$ is a set of $4$ elements in $\Bbb R^2$, so it is clearly linearly dependent. Both at the same time: $$\{ e_1 \otimes e_1, (e_{1} + e_2) \otimes (e_1+e_2), (e_1 - e_2)\otimes (e_1 + e_2), e_2 \otimes e_2 \}$$
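A quick numerical check that the second family is indeed a basis (a sketch assuming NumPy; `np.kron` gives the coordinates of a simple tensor in the product basis):

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The four simple tensors from the second example above.
vectors = [
    np.kron(e1, e1),
    np.kron(e1 + e2, e1 + e2),
    np.kron(e1 - e2, e1 + e2),
    np.kron(e2, e2),
]

M = np.stack(vectors)             # 4 x 4 coordinate matrix
print(np.linalg.matrix_rank(M))   # 4 -> the simple tensors form a basis
print(np.linalg.det(M))           # -2.0, nonzero as expected
```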
• October 17th 2012, 07:44 AM Cecelia91 2 1/4 + 5 5/6 Thanks, Liv :) • October 17th 2012, 09:12 AM Soroban Hello, Cecelia91! Quote: I don't understand how to do this question. . . . $2\tfrac{1}{4}+ 5\tfrac{5}{6}$ Exactly what is it that you don't understand? If you can't add fractions at all, . . you need more help than we can provide. • October 17th 2012, 09:23 PM Prove It Quote: Originally Posted by Cecelia91 2 1/4 + 5 5/6 Thanks, Liv :) Split off the whole parts (noting that $5\tfrac{5}{6} = 6 - \tfrac{1}{6}$), then put the fractions over a common denominator. $2\frac{1}{4}+5\frac{5}{6}=2+\frac{1}{4}+6-\frac{1}{6}=8+\frac{3}{12}-\frac{2}{12}=8+\frac{1}{12}=8\frac{1}{12}$
Question by Student 201383227 Sir, I have a doubt in understanding the slip line. Is the slip line like a shear layer flow? And is it formed to make the flow balance the pressure between the two shock waves? 11.24.13 A shear layer is a viscous flow phenomenon. Here we are solving inviscid flow. In inviscid flow, the slip line is the boundary between two flow regions. The two flow regions have the same pressure and their velocity vectors point in the same direction. However, the temperature, density, and the magnitude of the velocity may differ. Question by Student 201793107 Hello Sir, in Assignment 1, Question No. 3, I am getting all the values correctly except the maximum attainable velocity of the gas. I am getting $$V_{2,max}=672.185\, m/s$$ instead of $647\,m/s$ as given by you. I assumed $$M_{2}=1$$ and hence $$V_{2,max}=\sqrt{\gamma R T}$$. Sorry for taking your precious time if I am wrong. 09.19.17 But if $M_2=1$, then $T_2$ will have a different value than the one you computed. 09.20.17 Question by Student 201793101 Good morning Professor, I'd like to confirm whether I understood a point on supersonic nozzles. If the exit pressure decreases (or the back pressure increases) past the shockless case, oblique shock waves start to appear. On further decreasing the exit pressure, the weak shock waves collapse into strong oblique shock waves, which in turn become a normal shock wave centered on the flow centerline. The limit is therefore defined by the minimum value needed to achieve a strong solution. Is the reasoning correct? Thank you. 12.09.17 I guess you refer here to the last problem of the last assignment. If so, then yes, your reasoning is correct. The only thing missing in your reasoning is that you need to first assess whether the first oblique shock is weak or strong by looking at the wave pattern. If there is a reflected shock, then for sure the first oblique shock is weak, because a strong oblique shock would lead to a subsonic flow, hence preventing a shock reflection. Question by Student 201783220 In midterm exam question 3, how can I assume isentropic flow between stations 1 and 3? I think the flow could produce at least a single shock wave in the converging section. 12.17.17 In 1D (as schematized in the midterm exam Q3), it's definitely possible to have an inlet with no shock. In 2D, you could design the inlet so that the flow slows down isentropically to Mach 1 through a Prandtl-Meyer compression fan. In practice this is hard to achieve because such a design would only work for one specific Mach number and lead to major losses in performance at slightly different Mach numbers. Thus, inlets generally have several shocks to allow for some variation in geometry. But this is not mentioned (or schematized) in the question statement, so you can assume the flow isentropic through the inlet here. Question by Prasanna Professor, I think the answer for Problem #1 of Assignment #1 should be 0.746 bar, 277.609 K, 83.06 kg/s, 586.67 km/hr when using the temperature and pressure values at 12000 feet from the 1976 U.S. Standard Atmosphere table and for $M_{\infty}=0.5$ and $A_i=0.6\ m^2$. 09.26.18 I corrected the answers. Check again. Question by Student 201893243 Good afternoon, Professor. I'm curious about the lack of explanations in the Assignment 1 I submitted. I guess the short comment on the shape of the nozzle, after obtaining the Mach number and pressures at the entry and exit in Question 2, was not enough explanation.
If you tell me which part of my solution procedure was lacking, I will supplement what I misunderstood. Sincerely, thank you, Sir. 09.27.18 It's best if you come visit me in my office for this. We'll look through your quiz together.
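For readers puzzling over the $V_{2,max}$ exchange above, the instructor's point can be spelled out with the adiabatic energy equation (a sketch, assuming a calorically perfect gas; the symbols are the standard ones, not taken from the assignment): $$c_p T_0 = c_p T_2 + \frac{V_2^2}{2}, \qquad V_2 = M_2\sqrt{\gamma R T_2} \quad\Rightarrow\quad T_2 = \frac{T_0}{1+\frac{\gamma-1}{2}M_2^2}.$$ Setting $M_2=1$ therefore fixes $T_2 = T_0/(1+\frac{\gamma-1}{2})$; evaluating $\sqrt{\gamma R T_2}$ with a $T_2$ obtained under any other assumption mixes two incompatible states and yields an inconsistent result.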
# Filling a volume (pool) at a given rate 1. May 17, 2006 ### whoknows123 To fill a child's inflatable wading pool, you use a garden hose with a diameter of 2 cm. Water flows from this hose with a speed of 1.5 m/s. How long will it take to fill the pool to a depth of 25 cm if it is circular and has a diameter of 2.1 m? How would I solve this problem? 2. May 17, 2006 ### Curious3141 Volume of cylinder = $$\pi r^2 h = \pi h \frac{d^2}{4}$$ Volume of water required to fill child's pool to that depth = ... Speed of water coming out of hose = 1.5 m/s Length of cylindrical water column over one second interval (height of cylinder of water) = ... Volume of cylindrical water column coming out of hose over one second = ... Number of these cylindrical water columns it would take to fill the pool to the required volume = ... Hence time taken = ... (answer) Work out all the "..." and you'll get it. 3. May 17, 2006 ### whoknows123 Volume of Cylinder = 86.59 Volume of water required? is it the same as the volume of the cylinder? speed of water coming out = 1.5 m/s Length of cylindrical water column over one second interval=?? how would i find this? 4. May 17, 2006 ### nrqed Curious gave all the steps but I will be a bit more explicit. There are two distinct parts. A) How much water do you need to fill the pool? This is the volume of water needed to fill the pool to the depth required. Can you calculate this? (This does not require anything related to the hose.) B) Now, you must calculate how much water comes out of the hose in one second (for this part, the size or depth of the pool is irrelevant). Think of the hose as a very long (let's say infinite) cylinder. Since the water flows at 1.5 m/s, you know that in one second, all the water within 1.5 meters of the muzzle of the hose will come out. Therefore, the volume of water expelled by the hose in 1 second is the volume of water contained in a cylinder 1.5 meters long and with a diameter of 0.02 m (it is safer to put everything in meters). Now, you know how much water (that is, how many cubic meters of water) is expelled per second and you know the total number of cubic meters needed to fill the pool. A simple ratio will give you how long it will take. 5. May 17, 2006 ### whoknows123 a) would it be the area * velocity, which is 129.885? 6. May 17, 2006 ### whoknows123 area = pi*height*(d^4/2) = pi*23cm*(2.1m^4/2)? 7. May 17, 2006 ### nrqed I am confused.. Are you answering part a or part b? Part a does not require the speed at all (or anything related to the hose). Also, ALWAYS include the units in your answer. And make sure that you are consistent in the units (use cm everywhere, or meters, which is usually safer). But the amount of water ejected by the hose per second comes out to be the area of the opening of the hose times the ejection speed, indeed. But I tried to explain where this comes from...but it is the correct formula indeed. 8. May 17, 2006 ### nrqed Always be consistent with the units (everything in cm or everything in meters). Also, nothing is raised to the fourth power! It would be Pi * .25 m * (2.1 m/2)^2 or Pi* 0.25 m * (2.1 m)^2/4 9. May 17, 2006 ### whoknows123 Area of the hose would be = .00031459 m^2, multiply this by 1.13 m/s and i get .000355 area of the pool is .8659 m^3 then I divide the two? 10. May 17, 2006 ### nrqed I thought you said it was 1.5 m/s? You mean *volume* of the pool. Once you write your units, it will be obvious what to do in order to get a result in seconds. That's one reason why it is important to write the units. 11.
May 17, 2006 ### whoknows123 yes. Thank you, and yes units are important!
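Putting the steps from this thread together, a worked computation (a sketch using the numbers from the problem statement):

```python
import math

d_hose = 0.02   # hose diameter [m]
v_water = 1.5   # speed of water in the hose [m/s]
d_pool = 2.1    # pool diameter [m]
depth = 0.25    # target depth [m]

# Volume of water leaving the hose per second: a cylinder of water
# 1.5 m long with the hose's cross-section.
Q = math.pi * (d_hose / 2) ** 2 * v_water   # ~4.71e-4 m^3/s

# Volume needed to fill the pool to the target depth.
V = math.pi * (d_pool / 2) ** 2 * depth     # ~0.866 m^3

t = V / Q
print(f"t = {t:.0f} s (about {t / 60:.0f} minutes)")  # ~1838 s, ~31 min
```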