# Problem with alltt package when defining new command
I have the following problem. In the code below, the second parameter of the new command 'chunk' is supposed to output text 'as-is' using the alltt package (I need alltt because I will output some other new commands). What instead happens is that all the text is displayed on one line (my guess is that the parameter causes this).
\documentclass[a4paper,12pt,openany]{book}
\usepackage[croatian]{babel}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{blindtext}
\usepackage[a4paper, inner=1.5cm, outer=3cm, top=2cm, bottom=3cm, bindingoffset=1cm]{geometry}
\usepackage{amsmath}
\usepackage{microtype}
\usepackage{alltt}
\newcommand{\chunk}[2]{
\label{#1}
$\langle\textit{#1}\ \rangle\equiv$
\begin{alltt}
#2
\end{alltt}
}
\begin{document}
\chunk{Example}{
some ex ample
\{
som e t e xt
te xt
\}
}
\end{document}
Now, the quick fix I came up with removes the use of alltt from the chunk command and instead uses alltt directly in the document, like this
\chunk{Example}
\begin{alltt}
some ex ample
\{
som e t e xt
te xt
\}
\end{alltt}
but this approach is ugly and leads to code repetition, so I would like to avoid it. Any help is appreciated.
-
The LaTeX style for a command taking large chunks of text, especially if (as here) you need to change the scanning conventions, is to use an environment, not a command. If you did
\newenvironment{chunk}[1]
{%
\label{#1}%
$\langle\textit{#1}\ \rangle\equiv$%
\begin{alltt}}
{\end{alltt}}
then line endings in the environment would be preserved. Also note that you need % at the end of lines in macro definitions or you will get spurious spaces in the output.
\begin{chunk}{Example}
some ex ample
\{
som e t e xt
te xt
\}
\end{chunk}
-
Thank you, just what I needed (will upvote when I earn 15 rep). Looks like I have some reading about environments to do. – Marin Jul 12 '12 at 20:33 |
# zbMATH — the first resource for mathematics
Additive fuzzy measures and integrals. I. (English) Zbl 0516.28006
##### MSC:
28A99 Classical measure theory
28A10 Real- or complex-valued set functions
28A05 Classes of sets (Borel fields, $\sigma$-rings, etc.), measurable sets, Suslin sets, analytic sets
05A17 Combinatorial aspects of partitions of integers
|
# Wave
This tag concerns posts about linear and nonlinear wave equations.
### The Nirenberg Trick and Wave Maps
The "Nirenberg trick" or "Nirenberg transformation" is the observation1 that the semilinear wave equation …
### Simulating Closed Cosmic Strings
One of my current research interests is in the geometry of constant-mean-curvature time-like submanifolds of Minkowski space. A special …
### Shooting particles with Python
Numpy simulation of how classical and quantum particles interact with potential barriers.
### Blow-up of QNLW with Small Initial Data à la Christodoulou
Lecture slides that Gustav Holzegel and I used for the January 13-14, 2014 OXPDE Workshop at Oxford University.
### A Cute Little Wave Estimate
Let us consider first the linear wave equation $\Box u = 0$ on (1+d)-dimensional Minkowski space. A well known property of the wave … |
Classify all groups of order 3825
I am trying to classify all groups of order $3825=3^2 \cdot 5^2 \cdot 17$. The Sylow theorems indicate that the possible numbers of Sylow $p$-subgroups for each prime are $n_{17}=1$, $n_{3}=1,25,85$ and $n_5=1,51$. Moreover, I know that the Sylow $3$- and $5$-subgroups $P_3$ and $P_5$ are abelian, as they have order $p^2$. I have classified all abelian groups using the structure theorem/fundamental theorem.
How do I find the non-abelian groups of this order? In particular, how do I realise $G$ as a semi-direct product with $P_{17}$ as one of the factors, as is done when classifying groups? As $P_3$ and $P_5$ are not known to be normal, I cannot form something like $P_3 P_5$ as a subgroup. Counting arguments are not strong enough and don't tell me anything about $n_5$ and $n_3$.
First prove that the Sylow $17$-subgroup is central (using the N/C theorem). This will imply $n_3=25$ and $n_5=1$, since $17 \mid |N_G(S_3)|$ and $17 \mid |N_G(S_5)|$, where $S_p$ denotes a Sylow $p$-subgroup. Since $S_5$ and $S_{17}$ are normal, we can form the subgroup $H$ generated by these two subgroups, with $|H|=5^2 \cdot 17$. Now $G$ is a semi-direct product of $H$ with $S_3$, so there exists a homomorphism $\phi$ from $S_3$ to $\operatorname{Aut}(H)$. By divisibility conditions, $\ker\phi = S_3$ or $\ker\phi = Z_3$. If $\ker\phi = S_3$, then $S_3$ centralizes $H$; since $S_3$ is abelian, $S_3$ is central, contradicting $n_3=25$. Therefore, all such $G$ are abelian. If $\ker\phi = Z_3$ and $S_5=C_5^2$, then $G$ is a semi-direct product of a unique subgroup $T=H\ker\phi$ of order $3\cdot 5^2 \cdot 17$ with $Z_3$.
This answer concerns only non-abelian groups $G$; the abelian case can be easily derived. From the Sylow theorems it follows that $n_{17} = 1$, so there exists a normal subgroup $N$ of order $17$ in $G$. Let's see which groups $H = G/N$ of order $225$ we can construct. For the abelian ones, none of their automorphism groups contains a factor $17$, so the only groups $G$ that can be made out of them are direct products with $N$. Which non-abelian groups of order $225$ can be made? Using the Sylow theorems again we find that $H$ has a normal subgroup $M$ of order $25$. We have either $M = C_5 \times C_5$ or $M = C_{25}$. In the latter case $\operatorname{Aut}(M)$ has order $20$, so only a direct product is possible; but in the former case we have $\operatorname{Aut}(M) = \operatorname{GL}(2,5)$, which contains elements of order $3$ that are all conjugate. This gives us two possibilities for $H$, namely $(C_5 \times C_5) \rtimes C_9$ and $((C_5 \times C_5) \rtimes C_3)\times C_3$, so there are up to isomorphism two non-abelian groups of order 3825.
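Putting the two answers together, and assuming the counting above is complete, the full list is: four abelian groups, $C_9 \times C_{25} \times C_{17}$, $C_9 \times C_5^2 \times C_{17}$, $C_3^2 \times C_{25} \times C_{17}$ and $C_3^2 \times C_5^2 \times C_{17}$, plus the two non-abelian groups $((C_5 \times C_5) \rtimes C_9) \times C_{17}$ and $((C_5 \times C_5) \rtimes C_3) \times C_3 \times C_{17}$, for six groups of order $3825$ in total. |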
# Cech cohomology of a quasi-coherent sheaf on an affine scheme and Leray acyclicity Theorem.
Let $$X$$ be an affine scheme and $$\mathcal{F}$$ a quasi-coherent sheaf on $$X$$. Let $$\mathcal{U}=\{U_i\}_{i \in I}$$ be an affine covering of $$X$$ (not necessarily made up of principal open subsets). Moreover, let $$\mathcal{V}=\{V_j\}_{j \in J}$$ be a covering of $$X$$ made up of principal open subsets. Of course we can assume that both $$J$$ and $$I$$ are finite. Consider the covering $$\mathcal{V}_i:=\{V_j \cap U_i\}_{j \in J}$$ of $$U_i$$. Let $$H^q(\mathcal{V}_i,\mathcal{F}|_{U_i})$$ be the $$q$$-th Cech cohomology group of $$\mathcal{F}|_{U_i}$$ with respect to the covering $$\mathcal{V}_i$$. Can we say that $$H^q(\mathcal{V}_i,\mathcal{F}|_{U_i})=0$$ for each $$q \geq 1$$?
If the covering $$\mathcal{V}_i$$ is made up of principal open subsets, then the answer is yes, but I think that $$\mathcal{V}_i$$ is not necessarily such. I am trying to understand Theorem 5.2.19 of Liu's Algebraic Geometry and Arithmetic Curves, which uses Theorem 5.2.12 (Leray's Theorem) of the same book. |
# Fast computation of abelian runs
TIBS - LITIS - Equipe Traitement de l'information en Biologie Santé
LITIS - Laboratoire d'Informatique, de Traitement de l'Information et des Systèmes
Abstract : Given a word $w$ and a Parikh vector $\mathcal{P}$, an abelian run of period $\mathcal{P}$ in $w$ is a maximal occurrence of a substring of $w$ having abelian period $\mathcal{P}$. Our main result is an online algorithm that, given a word $w$ of length $n$ over an alphabet of cardinality $\sigma$ and a Parikh vector $\mathcal{P}$, returns all the abelian runs of period $\mathcal{P}$ in $w$ in time $O(n)$ and space $O(\sigma+p)$, where $p$ is the norm of $\mathcal{P}$, i.e., the sum of its components. We also present an online algorithm that computes all the abelian runs with periods of norm $p$ in $w$ in time $O(np)$, for any given norm $p$. Finally, we give an $O(n^2)$-time offline randomized algorithm for computing all the abelian runs of $w$. Its deterministic counterpart runs in $O(n^2\log\sigma)$ time.
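To make the definitions concrete, here is a naive Python sketch (mine, not the paper's algorithm) of Parikh vectors and the plain abelian-power check; the abelian runs of the paper additionally allow partial head and tail blocks:

```python
from collections import Counter

def parikh(w):
    # Parikh vector of w: each letter mapped to its number of occurrences.
    return Counter(w)

def is_abelian_power(w, p):
    # True if w factors into consecutive blocks of length sum(p.values())
    # (the norm of p), each block having Parikh vector exactly p.
    norm = sum(p.values())
    if norm == 0 or len(w) % norm != 0:
        return False
    return all(parikh(w[i:i + norm]) == p for i in range(0, len(w), norm))

print(parikh("banana"))                                        # Counter({'a': 3, 'n': 2, 'b': 1})
print(is_abelian_power("abbabaab", Counter({"a": 2, "b": 2})))  # True: blocks "abba", "baab"
```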
Document type: Journal articles
https://hal.archives-ouvertes.fr/hal-01956124
Contributor: Thierry Lecroq
Submitted on : Friday, December 14, 2018 - 9:07:14 PM
Last modification on : Wednesday, March 2, 2022 - 10:10:10 AM
### Citation
Gabriele Fici, Tomasz Kociumaka, Thierry Lecroq, Arnaud Lefebvre, Elise Prieur-Gaston. Fast computation of abelian runs. Theoretical Computer Science, Elsevier, 2016, 656, pp.256-264. ⟨10.1016/j.tcs.2015.12.010⟩. ⟨hal-01956124⟩
|
# CSE 322 Spring 2021 Lab 8
### Task #1
Basic XLAT usage
//Code
.MODEL SMALL
.STACK 100h
.DATA
array1 db 10,11,12,13,14,15,16,17,18,19,20
.CODE
MAIN PROC
mov ax, @data
mov ds, ax
lea bx, array1
mov al, 5
xlat ;XLAT uses the value in AL as an index n and fetches the nth element of the table at [BX], storing it back in AL.
exit:
mov ah, 4ch
int 21h
MAIN ENDP
END MAIN
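In other words, XLAT is a one-instruction table lookup, AL <- [BX + AL]. A Python model of the same semantics (for intuition only, not part of the lab code):

```python
def xlat(table, al):
    # x86 XLAT semantics: AL <- memory[BX + AL], i.e. AL indexes into the table at BX.
    return table[al]

array1 = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
print(xlat(array1, 5))  # 15, matching the assembly listing above
```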
### Task #2
Encoding and decoding a message using XLAT
//Code
//Available on video
### Task #3 (Program listing 11.1, 11.2, 11.3)
//Code
//Available on video
### Task #4
String reverse print.
//Code
//Available on video
### Task #5 (Program listing 11.4)
Counting the number of vowels and the number of consonants using the scan string instruction
//Code
//To be done by students
### Homework
1. Change the code in task 2 such that the encoding and decoding works for both small letter and capital letter.
2. Change the code in task 2 such that a letter is replaced by the letter three positions ahead. For example, A becomes D and B becomes E
3. Chapter 9 Problem 9
4. Chapter 9 Problem 10
5. Chapter 9 Problem 11
6. Chapter 9 Problem 12
7. Chapter 10 Problem 7
8. Chapter 10 Problem 9
9. Chapter 10 Problem 10
10. Change the read_str and print_str procedures of problem 1 so that a counter for the length of the string is not needed. Simply add a \$ at the end of the string to identify the end of the string |
# When does a polynomial generate a radical ideal?
A polynomial in a polynomial ring in one variable over a field generates a radical ideal iff it has no multiple roots. Is there a sufficient condition for a polynomial in several variables to generate a radical ideal? Something like the fact that the ideal generated by a polynomial is prime if and only if the polynomial is irreducible.
-
Look up "square free" – Bill Cook Dec 18 '11 at 1:12
@Bill Cook: I know square free monomial ideals are radical and radical monomial ideals are square free. I was looking for conditions on polynomials that are not monomial. – Gene Simmons Dec 18 '11 at 1:36
@GeneSimmons, you should really look up square free! :D – Mariano Suárez-Alvarez Dec 18 '11 at 2:29
@MarianoSuárez-Alvarez: I tried various searches and looked up over 100 articles but could not find anything close to the answer to my question. Perhaps I don't really understand the hint. I found some criteria for zero dimensional ideals in terms of square free polynomials, but not much else. If anyone has a pointed reference I would prefer that to random google searches. – Gene Simmons Dec 18 '11 at 4:20
@Gene: Prove it! If $f$ is squarefree and $f|g^n$, then for every irreducible factor $p$ of $f$, $p|g^n$, hence $p|g$. Therefore (since distinct irreducible factors are relatively prime), the product of all distinct irreducible factors of $f$ divides $g$; but this product is (an associate of) $f$, because $f$ is squarefree. So if $f$ is squarefree, $g^n\in (f)\Rightarrow g\in (f)$, so $(f)$ is radical. Conversely, if $f$ is not squarefree, then the squarefree root of $f$ has a power that lies in $(f)$ but does not itself lie in $f$. – Arturo Magidin Dec 18 '11 at 5:09
Let $k$ be a field and consider the polynomial ring $A = k[x_1,...,x_n]$.
Claim: Given $f \in A - \{0\}$, (f) is radical if and only if $f$ factors into a product of irreducibles of multiplicity $1$.
Proof:
$\Leftarrow$: We know $A$ is a UFD. So, let $f = f_1...f_m$ be a factorization of $f$ into irreducible factors such that for all $i \neq j$, $(f_i) \neq (f_j)$. Then $(f) = (f_1...f_m) = (f_1) \cap ... \cap (f_m)$ (I am using unique factorization for the second equality). Thus, $(f)$ is an intersection of prime ideals of $A$ and hence radical.
$\Rightarrow$: Suppose $(f)$ is radical. Again, let $f = {f_1}^{e_1}...{f_m}^{e_m}$ be a factorization of $f$ into irreducibles where $i \neq j$ $\Rightarrow$ $(f_i) \neq (f_j)$.
Our goal is to show that each $e_i = 1$. Well, suppose not. Then there exists $e_i$ such that $e_i > 1$. Then $({f_1}^{e_1}...{f_i}^1...{f_m}^{e_m}) \subset (f) \subset ({f_1}^{e_1}...{f_i}^1...{f_m}^{e_m})$. The first inclusion is because ${f_1}^{e_1}...{f_i}^1...{f_m}^{e_m} \in Rad((f)) = (f)$, and the second inclusion follows from the fact that ${f_1}^{e_1}...{f_i}^1...{f_m}^{e_m}|f$.
But, this means that there is some $u \in A^*$ such that ${f_1}^{e_1}...{f_i}^1...{f_m}^{e_m} = u{f_1}^{e_1}...{f_i}^{e_i}...{f_m}^{e_m}$, which contradicts unique factorization.
-
Thanks. One question, why is the ideal generated by the product of the irreducible factors equal to the intersection of the ideals generated by the individual factors? – Gene Simmons Dec 18 '11 at 5:20
$\subset$ is elementary, and $\supset$ follows from unique factorization. – Rankeya Dec 18 '11 at 5:22
Thanks. One follow up question. This doesn't extend to non-principal ideals right? – Gene Simmons Dec 18 '11 at 5:23
What is the precise statement you are trying to make for non-principal ideals? – Rankeya Dec 18 '11 at 5:27
Are ideals generated by several square free polynomials radical? – Gene Simmons Dec 18 '11 at 5:33
Let $(p_1),\dots,(p_n)$ be distinct prime ideals of a unique factorization domain, and let $k_1,\dots,k_n$ be positive integers. Then the radical of $$(p_1^{k_1}\cdots p_n^{k_n})$$ is clearly $$(p_1\cdots p_n).$$
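A quick computational illustration of the criterion (a SymPy sketch; the helper name is mine): the radical of $(f)$ is generated by the squarefree part of $f$.

```python
import sympy as sp

x, y = sp.symbols("x y")

def radical_generator(f, *gens):
    # Generator of Rad((f)): the product of the distinct irreducible
    # factors of f, i.e. its squarefree part.
    _, factors = sp.factor_list(f, *gens)
    return sp.prod(base for base, _mult in factors)

f = (x**2 + y**2 - 1)**2 * (x - y)
print(radical_generator(f, x, y))  # (x - y)*(x**2 + y**2 - 1), up to ordering
```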
- |
Grid Cell - Maple Help
Maplets[Elements]
GridCell
specify an entry in a row of a grid layout
Calling Sequence
GridCell(opts)
Parameters
opts - equation(s) of the form option=value where option is one of halign, height, hscroll, hweight, valign, value, vscroll, vweight, or width; specify options for the GridCell element
Description
• The GridCell layout element specifies an entry in a row of a grid layout.
• The contents of the cell are specified by using the value option.
• The GridCell element features can be modified by using options. To simplify specifying options in the Maplets package, certain options and contents can be set without using an equation. The following table lists elements, symbols, and types (in the left column) and the corresponding option or content (in the right column) to which inputs of this type are, by default, assigned.
Elements, Symbols, or Types     Assumed Option or Content
always, as_needed, or never     hscroll and vscroll options
left or right                   halign option
string or symbol                value option
top or bottom                   valign option
• A GridCell element can contain BoxLayout, GridLayout, or window body elements to specify the value option.
• A GridCell element can only be contained in a GridRow element.
• The following table describes the control and use of the GridCell element options.
An x in the I column indicates that the option can be initialized, that is, specified in the calling sequence (element definition).
An x in the R column indicates that the option is required in the calling sequence.
An x in the G column indicates that the option can be read, that is, retrieved by using the Get tool.
An x in the S column indicates that the option can be written, that is, set by using the SetOption element or the Set tool.
Option    I    R    G    S
halign    x
height    x
hscroll   x
hweight   x
valign    x
vscroll   x
vweight   x
value     x
width     x
• The opts argument can contain one or more of the following equations that set Maplet application options.
halign = left, center, right, or full
Horizontal alignment of the cell contents. By default, the value is inherited from either the parent GridRow, or if unset, the parent GridLayout, or if unset then defaults to center. The value full, rather than aligning the cell contents, attempts to horizontally stretch the cell contents to fill the cell.
height = posint
The number of grid rows the cell spans. By default, the value is 1.
Note: The height parameter does not control the height of its content, only its position.
hscroll = never, as_needed, or always
This option determines when a horizontal scroll bar appears in the grid cell. By default, the value is never.
hweight = integer in the range 0..10
This option specifies the amount of stretch to apply to the column the GridCell appears in when the GridLayout is horizontally resized. A value of 0 (the default) corresponds to no stretching, while a value of 10 (the maximum) corresponds to full stretching. The stretch value for a column in a GridLayout is taken to be the maximum stretch value for all GridCell elements in that column.
valign = top, center, bottom, or full
Vertical alignment of the cell contents. By default, the value is inherited from either the parent GridRow, or if unset, the parent GridLayout, or if unset then defaults to center. The value full, rather than aligning the cell contents, attempts to vertically stretch the cell contents to fill the cell.
value = window body, BoxLayout, or GridLayout element, or reference to such an element (name or string)
The Maplet application element that appears in the grid cell.
vscroll = never, as_needed, or always
This option determines when a vertical scroll bar appears in the grid cell. By default, the value is never.
vweight = integer in the range 0..10
This option specifies the amount of stretch to apply to the row the GridCell appears in when the GridLayout is vertically resized. A value of 0 (the default) corresponds to no stretching, while a value of 10 (the maximum) corresponds to full stretching. The stretch value for a row in a GridLayout is taken to be the maximum stretch value for all GridCell elements in that row.
width = posint
The number of grid cells that the cell spans. By default, the value is 1.
Note: The width parameter does not control the width of its content, only its position.
Examples
Alignment example:
> with(Maplets[Elements]):
> maplet := Maplet(GridLayout(halign = left, inset = 3,
    GridRow("Spacing:", GridCell("-----------------")),
    GridRow("Right alignment:", GridCell(Button("Sample", onclick = Shutdown()), halign = right)),
    GridRow("Center alignment:", GridCell(Button("Sample", onclick = Shutdown()), halign = center)),
    GridRow("Left alignment:", GridCell(Button("Sample", onclick = Shutdown()), halign = left)),
    GridRow("Full alignment:", GridCell(Button("Sample", onclick = Shutdown()), halign = full)))):
> Maplets[Display](maplet)
Using height and width:
> maplet := Maplet(GridLayout(
    GridRow(GridCell(height = 3, TextBox(width = 5, height = 3)), GridCell(width = 3, TextBox(width = 15, height = 1))),
    GridRow(GridCell(height = 2, width = 2, TextBox(width = 10, height = 2)), GridCell(height = 2, TextBox(width = 5, height = 2))),
    GridRow(GridCell(width = 4, TextBox(width = 20, height = 1))))):
> Maplets[Display](maplet)
Similar layout, using the bottom row for vertical resizing and the center column for horizontal resizing, with multi-line buttons and 'full' alignment:
> maplet := Maplet(GridLayout(halign = full, valign = full,
    GridRow(GridCell(height = 3, Button("l\ne\nf\nt", onclick = Shutdown())), GridCell(Button("top", onclick = Shutdown())), GridCell(height = 3, Button("r\ng\nh\nt", onclick = Shutdown()))),
    GridRow(GridCell(height = 2, hweight = 10, Button("OK", onclick = Shutdown()))),
    GridRow(GridCell(width = 3, vweight = 10, Button("done", onclick = Shutdown()))))):
> Maplets[Display](maplet) |
Octeract Engine
Octeract Engine is a deterministic solver to find globally optimal solutions for mixed-integer nonlinear programs (MINLPs). Distinguishing highlights of Octeract are its powerful and flexible symbolic reformulation engine and its parallelism capabilities. For more detailed information, we refer to the Octeract Math Blog.
# Usage
The following statement can be used inside your GAMS program to specify that Octeract should be used:
Option SOLVER = OCTERACT;
The above statement should appear before the Solve statement. If Octeract was specified as the default solver during GAMS installation, the above statement is not necessary.
GAMS/Octeract by default uses CPLEX to solve LP/MIP problems, if licensed, and CLP/CBC otherwise. By setting options LP_SOLVER or MILP_SOLVER, other solvers can be selected:
• If a license for GAMS/CPLEX, GAMS/OsiCPLEX, GAMS/CPLEX-link is available, then CPLEX can be used.
• If a Gurobi license is installed on the machine, then Gurobi can be used.
• If a FICO Xpress license is installed on the machine, then Xpress can be used. A GAMS/XPRESS license is currently not sufficient.
GAMS/Octeract by default uses CPLEX to solve QP/QCQP problems, if licensed, and Ipopt otherwise. By setting options QP_SOLVER or QCQP_SOLVER, Ipopt can be selected.
## Specification of Octeract Options
GAMS/OCTERACT supports the GAMS parameters reslim, iterlim, nodlim, workspace, optcr, optca, and threads. The interpretation of threads differs from its usual meaning in that, even on systems with hyperthreading enabled, only the number of physical processor cores is taken into account. Further, GAMS/OCTERACT is limited to the use of at most 16 cores, and distributed parallelization is not available.
Options can be specified by a solver options file. A small example for an octeract.opt file is:
LP_SOLVER HIGHS
MILP_SOLVER HIGHS
STARTING_POINT_IPOPT VARIABLE_MID_POINT
It causes Octeract to use HiGHS to solve LP or MIP problems and to use a different starting point strategy for calls of Ipopt.
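To make GAMS read this options file, set the model's optfile attribute before the Solve statement. A minimal sketch (the model name m and the objective z are placeholders):
Model m / all /;
m.optfile = 1;
Option SOLVER = OCTERACT;
Solve m using MINLP minimizing z;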
# List of Octeract Options
In the following, we give a detailed list of all Octeract options.
## Octeract Options
Option Description Default
APPLY_CB This enables or disables constraint probing for domain reduction.
Range: boolean
0
BIG_COEFF_TOLERANCE Sets the maximum acceptable absolute value of a coefficient in the lower bounding problem.
If a coefficient is greater than this number, the solver will use other, less effective methods instead of solving the lower bounding linear problem (LP). Coefficients are naturally improved through branching. However, if the problem is not numerically well-behaved, increasing this tolerance will force the solver to solve the ill-posed lower bounding problems. This is small-ish by default because the global solution guarantee can be compromised otherwise, but in practice it’s perfectly fine to increase this most of the time. Your clue that there might be a problem relating to this is that an (MI)LP relaxation was generated and you get a gap that never improves.
Range: [0, ∞]
1e+09
BIG_GAP_TOLERANCE If the difference between a node’s upper bound and the parent’s lower bound is smaller than BIG_GAP_TOLERANCE, then solve the relaxation, otherwise skip.
Range: [0, ∞]
1e+60
BINARY_REFORMULATION_PROCEDURE If REFORMULATE_BINARIES is enabled, this option determines which method to use in the reformulation of binary variables.
All of these methods transform the binary variables to continuous, either by adding constraints, or simply by dropping integrality ( Continuous_Relaxation ). The constraint adding methods tend to suck, and obviously dropping integrality means you’re now solving a different problem, so you should only really fiddle with this if you’re reaaaally desperate or you really know what you’re doing.
BOUND_VIOLATION_TOLERANCE Bound violation tolerance is used to confirm whether solutions returned by third-party solvers (IPOPT, CPLEX, Gurobi, CBC, etc) are indeed feasible within that tolerance.
Relaxing this tolerance can help in problems where finding a feasible solution is difficult, or where the global solution is eliminated due to slight constraint violations.
Range: [1e-12, 1]
1e-06
BQCQP_REFORMULATION_METHOD Type of reformulation used for BQCQPs.
The default option is SSLINEARBQCQP, which linearises the BQCQP following Sherali and Smith, “An improved linearization strategy for zero-one quadratic programming problems”, 2007. The CONVEXIFY option tries to convexify the problem in a similar way as the convexification for BQPs. See also the Octeract docu.
Range: SSLINEARBQCQP, CONVEXIFY
SSLINEARBQCQP
BQP_REFORMULATION_METHOD This option forces the reformulation method applied to Binary Quadratic Problems.
Note that this applies to the preprocessing state machine, i.e., if your problem is reformulated to a BQP by the preprocessor, you can use this option to force how that BQP will be reformulated. See also the Octeract docu.
AUTOMATIC
BQP_SPARSITY_TOLERANCE The maximal sparsity for a BQP to be reformulated as a linear problem.
Range: [0, 1]
0.8
BRANCHING_STRATEGY The heuristic for selecting the variable to branch in the branch and bound method when solving a problem.
Range: AUTOMATIC, MOST_VIOLATED_TERM, HYBRID_INTEGER_LEAST_REDUCED_AXIS, MAX_SEPARATION_DISTANCE, STRONG_BRANCHING, MOST_FRACTIONAL_VARIABLE, MODIFIED_MOST_NONCONVEX_VARIABLE
AUTOMATIC
CALCULATE_LB_LARGE_COEFFS If this option is enabled the Engine will treat constraints with large coefficients as every other constraint.
This check happens when solving a relaxation to find the lower bound at every node, and by default the engine will modify the pathological constraints to a safer form. However, this safety-first approach enlarges the feasible region, which means that your LLB may get stuck or improve much more slowly than it otherwise could. If you observe this behaviour, try setting this to true.
Range: boolean
0
CONSTRAINT_VIOLATION_TOLERANCE This is the acceptable constraint violation when qualifying a solution as feasible.
Relaxing this tolerance can help in problems where finding a feasible solution is difficult, or where the global solution is eliminated due to slight constraint violations.
Range: [0, 1]
1e-06
CONVERGENCE_TOLERANCE This sets a tolerance to determine whether the algorithm has converged to global optimality.
Note that this is used as both absolute and a relative gap tolerance. See also the Octeract docu.
Range: [0, ∞]
min(GAMS optcr, GAMS optca)
CP_MAX_ITERATIONS This option sets the limit for the number of passes performed by Constraint Propagation (CP) per node.
More of this means potentially better domain reduction per node, at the expense of speed and numerical correctness. See also the Octeract docu.
Range: {0, ..., ∞}
5
CP_NUMBER_COMPARISON_TOLERANCE This is the tolerance for comparison of two values within the Constraint Propagation (CP) algorithm.
CP is numerically unsafe by nature, so we have a metric ton of fail-safes and tolerances built into the implementation to make sure it doesn’t eliminate the global solution. This tolerance specifies when CP can safely decide whether a number is larger than, smaller than, or equal to another very similar one. Decreasing this can improve the stability of the algorithm, and increasing it can improve the quality of domain reduction by allowing more numerically unsafe operations. See also the Octeract docu.
Range: [0, 1]
1e-06
CP_SCALING Enabling this option may help avoid numerical issues for extremely badly scaled problems, i.e., constraints containing coefficients of widely different scales, wide range of scales for variable bounds, etc.
For MINLPs this can also be the case when you have fractions in your formulation without having properly contained the denominator range. This option sucks for regular problems, but it might help you in cases where the global solution is getting incorrectly fathomed. See also the Octeract docu.
Range: boolean
0
CP_VOLUME_IMPROVEMENT_FACTOR This sets the required volume improvement factor for iterations of constraint propagation.
More of this gives you more aggressive (and expensive) CP. See also the Octeract docu.
Range: [0, 1]
0.999
CUT_POOL_MAX_SIZE This sets the maximum number of cuts to be kept in the cut pool.
Range: {0, ..., ∞}
10000
FBBT_MAX_ITERATIONS This option sets the maximum number of variable bisections when performing Feasibility Based Bounds Tightening (FBBT).
More of this means more domain reduction, but it can be really expensive, especially if you have dense nonlinear functions in your model. As with all domain reduction-related settings, doing more of it increases the odds of numerics going wrong, so use with caution. See also the Octeract docu.
Range: {0, ..., ∞}
5
FBBT_TIMEOUT Sets a working limit, in seconds, for Feasibility Based Bounds Tightening (FBBT).
This is typically applied per node of the branch-and-bound tree. See also the Octeract docu.
Range: [0, ∞]
0.3
FBBT_TOLERANCE This is the epsilon-termination condition for the width of a variable box when performing Feasibility Based Bounds Tightening (FBBT).
Range: [0, 1]
0.0001
FIRST_FEASIBLE_SOLUTION Setting this to true will force the engine to exit the moment a feasible solution is found, even if it’s a very bad one.
Range: boolean
0
FORCE_EXPANSION If USE_AUTOMATIC_EXPANSION is disabled, this setting gives you control over symbolic expansion.
Setting this to true will force the engine to symbolically expand all formulas in your problem, and setting this to false will force the engine to process the formulas exactly as you input them.
Range: boolean
0
HEUR_CONSTRAINT_PENALTY Controls the constraint penalty heuristic.
Range: OFF, NONLINEAR_CONSTRAINTS, ALL_CONSTRAINTS
OFF
HEUR_CONSTRAINT_PENALTY_COEFF This sets the penalty coefficient for the Constraint Penalty primal heuristic.
Higher values will make the solver prioritise feasibility over optimality, but will also increase the likelihood of numerical instability.
Range: [0, ∞]
1000
HEUR_CS Enables or disables the Constraint Satisfaction (CS) heuristic.
This can be quite expensive, so try turning this off if the solver is spending too much time trying to find a feasible solution.
Range: boolean
1
HEUR_FIXING Enables or disables a basic fixing heuristic.
Range: boolean
1
HEUR_GUIDED_DIVING Enables or disables the Guided Diving heuristic.
Can be quite expensive, so it’s off by default.
Range: boolean
0
HEUR_INEQUALITY Enables or disables the Inequality heuristic.
Range: boolean
1
HEUR_LB Enables or disables a simple Lower Bound (LB) heuristic.
Range: boolean
1
HEUR_MINLP_DIVING Enables or disables all Diving heuristics.
Range: boolean
1
HEUR_NL_FEASIBILITY_PUMP Enables or disables the Nonlinear (NL) Feasibility Pump heuristic.
Range: boolean
1
HEUR_PENALTY Enables or disables a simple penalty heuristic.
Range: boolean
1
HEUR_QPDIVING Enables or disables the QP Diving heuristic.
This can be very good for convex MI problems, but it’s pretty expensive so it’s off by default.
Range: boolean
0
HEUR_SAP Enables or disables the Shift and Propagate (SAP) heuristic.
Range: boolean
1
HEUR_SUPREME Primal heuristic that can be useful for some unbounded problems.
Range: boolean
0
HEUR_ZERO Enables or disables the Zero heuristic.
Range: boolean
1
INFINITY Default bound for unbounded variables.
Note that if unbounded variables are bounded this way, optimal solutions beyond these bounds may be cut off, and thus global optimality is no longer guaranteed. The returned model status and dual bound may NOT be valid in this case.
Range: [0, ∞]
1e+07
INTEGER_REFORMULATION_VAR_RANGE_LIMIT This option sets the max integer variable range up to which which integers will be reformulated to binaries.
Range: {0, ..., 100000}
5000
INTEGRALITY_VIOLATION_TOLERANCE Integrality violation tolerance is the acceptable violation before the solver determines that a variable is no longer an integer.
Relaxing this tolerance can make it much easier for the solver to find feasible solutions for discrete problems, especially problems with many integers or highly non-linear discrete functional forms.
Range: [0, 1]
0.001
IPOPT_INITIAL_VALUE When STARTING_POINT_IPOPT is set to CUSTOM_CONSTANT_VALUE, this option determines the value of the initialisation of the primal variables given to IPOPT.
Range: real
1
LLB_TOLERANCE Tolerance used to decide whether solutions from lower and upper problems are the "same".
Range: [0, ∞]
0.001
LOCAL_SEARCH If enabled, the engine will run in local optimisation mode.
This mode skips all expensive preprocessing, and uses specialised local search algorithms to return a good feasible solution as quickly as possible. Highly recommended for folks who dislike waiting.
Range: boolean
0
LP_SOLVER This sets the solver which will be used to solve Linear Programming (LP) problems.
Range: IPOPT, OSICLP, OSICBC, CPLEX, GUROBI, XPRESS, HIGHS
OSICLP
MAX_CB_DEPTH This sets the maximum depth in the branching tree at which constraint probing should be adding additional constraints.
If this is set to 0, depth is unlimited.
Range: {0, ..., ∞}
1
MAX_SOLVER_ITERATIONS This sets the maximum number of solver iterations in serial mode.
This option is ignored in parallel mode.
Range: {1, ..., ∞}
GAMS iterlim
MAX_SOLVER_MEMORY This option sets the memory limit allowed during the solving process in MB.
If the memory used exceeds this limit, the solving process is terminated. Note that when running MPI this is the memory consumed by the main process. Before you ask, yes, we could make this include the workers too, but (i) getting precise memory consumption from the OS is iffy at best, and (ii) these system calls are actually quite expensive.
Range: {0, ..., ∞}
GAMS workspace
MAX_SOLVER_TIME Sets the timeout for the solver in real time seconds.
Range: [1, ∞]
GAMS reslim
MILP_LB_MAX_NODES This sets the maximum number of nodes that the MILP solver will be allowed to explore when solving a lower bounding problem.
By default it’s infinite (-1), but setting this to a finite number can improve the performance of MILP relaxations per node, at the expense of the quality of the lower bound per said node.
Range: {-1, ..., 2000000000}
-1
MILP_LB_TIMEOUT This option sets the timeout (in real-time seconds) for the MIP solver that it is used to solve MILP lower bounding problems at every node of the branch-and-bound tree.
Giving the MIP solver more time means that the lower bound per node can be superior, at the expense of computing time. Increase the timeout if your MIP relaxation starts clustering when it gets close to the global solution. Decrease the timeout if a lot of time is spent solving MIP problems but you don’t see the bounds improving even after a lot of nodes have been explored.
Range: [1, ∞]
5
MILP_SOLVER Sets the solver which will be used to solve MILP problems.
Range: OSICBC, CPLEX, GUROBI, XPRESS, HIGHS
OSICBC
MIP_SOLVER Umbrella option to use the specified MIP solver to solve all types of sub-problems it supports.
If set, it will overwrite LP_SOLVER and MILP_SOLVER. If set to CPLEX, it will also override QP_SOLVER and QCQP_SOLVER.
Range: CPLEX, GUROBI, XPRESS, OSICBC, HIGHS
NUM_CORES Number of MPI processes to spawn.
If the problem is reformulated to a MIP, the MPI workers go to sleep and this setting determines the number of threads for the MIP solver.
Range: {-∞, ..., 16}
NUMBER_COMPARISON_TOLERANCE This tolerance is used to determine whether two floating point numbers are the same.
Its use in the engine is context sensitive, therefore changing this can have unforeseen consequences, for better or worse. Reducing this tolerance can accelerate convergence, as the solver becomes more aggressive in domain reduction, at the risk of eliminating otherwise acceptable solutions. Increasing this tolerance can slow down convergence, as the solver adopts a more conservative approach, but may help in cases where the global solution is being incorrectly fathomed.
Range: [0, 1]
1e-06
OBBT_MAX_DEPTH This sets the maximum depth in the branch-and-bound tree at which Optimality Based Bounds Tightening (OBBT) should be applied.
Range: {1, ..., ∞}
OBBT_MAX_ITERATIONS This sets the maximum number of passes for OBBT.
Range: {1, ..., ∞}
1
OUTPUT_FREQUENCY The solver will print output every OUTPUT_FREQUENCY iterations.
This only has meaning in serial mode, as there are no iterations in MPI mode.
Range: {1, ..., ∞}
1
OUTPUT_TIME_FREQUENCY The solver will print output every OUTPUT_TIME_FREQUENCY seconds.
Range: {1, ..., ∞}
1
PRESOLVE Enables or disables the presolver.
Range: boolean
1
PURGE_CUTS Disable this if you want the solver to save all cuts since the beginning of time in its cut pool.
This will override all other cut management options, including the limit for max cuts. It is not recommended to change this option.
Range: boolean
1
QCQP_SOLVER Sets the solver that will be used to solve QCQP sub-problems.
Setting the MIP_SOLVER option overrides this option.
Range: CPLEX, IPOPT
IPOPT
QP_SOLVER Sets the solver that will be used to solve QP sub-problems.
Setting the MIP_SOLVER option overrides this option.
Range: CPLEX, IPOPT
IPOPT
REDUCE_LINEAR_CONSTRAINTS If this option is enabled, the Engine will try to reduce the size of the linear constraints by eliminating a variable in the constraint if its coefficient is close to zero.
Range: boolean
0
REFORMULATE_BINARIES Enables or disables the reformulation of binary variables into continuous nonlinear constraints according to the BINARY_REFORMULATION_PROCEDURE option.
Range: boolean
0
REFORMULATE_INTEGERS Enables or disables the reformulation of integer variables as binary.
Range: boolean
0
REFORMULATE_INTEGERS_IN_MIQCQP Enables or disables the reformulation of integer variables as binary in MIQCQP problems.
Range: boolean
1
SOLVE_CONTINUOUS_RELAXATION Setting this option to true will relax all discrete variables to continuous.
This can be useful if you want to investigate the lower bound of large problems with a lot of integer or binary variables.
Range: boolean
0
STARTING_POINT_IPOPT Set the strategy that the engine will use to generate starting points for IPOPT.
Change this if you are having trouble getting IPOPT to converge, or if it’s being too slow. See also the Octeract docu.
Range: CONSTANT_VALUE_ONE, CUSTOM_CONSTANT_VALUE, VARIABLE_MID_POINT, VARIABLE_MAX_POINT, VARIABLE_MIN_POINT
CONSTANT_VALUE_ONE
STRENGTHEN_LINEAR_CONSTRAINTS If enabled, for each linear constraint that contains binary or integer variables, the engine will try to find a tighter constraint by changing the coefficients of said variables.
Range: boolean
0
STRONG_BRANCHING_DEPTH When BRANCHING_STRATEGY = STRONG_BRANCHING, this option sets the maximum tree depth up to which to apply strong branching.
The solver will then revert back to AUTOMATIC.
Range: {0, ..., ∞}
0
STRONG_BRANCHING_VARIABLE_COUNT When BRANCHING_STRATEGY = STRONG_BRANCHING , this option sets the (maximum) number of variables to “test” while strong branching.
Range: {0, ..., ∞}
100
UB_FREQUENCY This option sets how often the solver should solve local optimisation problems to find upper bounds.
Small values for this option can be helpful in problems where finding feasible upper bounds is challenging. For instance, UB_FREQUENCY = 1 will instruct the solver to calculate an upper bound on every node processed in the branch and bound algorithm, increasing the probability that a feasible solution is going to be located. This, of course, comes with a trade-off of increased solution time, i.e. the smaller the UB_FREQUENCY, the more time is spent every iteration solving the upper bounding problem. The value of this option highly affects solver performance, especially in problems where a lot of time is spent solving primal heuristics. Note that the value of this option is more of a hint for the solver of how much to focus on solving UB problems rather than a hard-coded value, as it will load balance automatically when/which primal heuristics are solved.
Range: {1, ..., ∞}
4
USE_AUTOMATIC_EXPANSION This option enables or disables the dual expansion algorithm for mathematical expressions.
If enabled, the engine will create an alternative expanded version of the problem where functions like (x-2)^2 will be replaced by x^2-4x+4. The engine will then use a heuristic to select the most promising model to solve between the original (unexpanded) and alternative (expanded) version.
Range: boolean
1
USE_CONVEXITY_CUTS Enables or disables convexity cuts.
Range: boolean
1
USE_CP Turns constraint propagation on or off.
Range: boolean
1
USE_FBBT Whether to use Feasibility Based Bound Tightening.
Range: boolean
1
USE_MILP_RELAXATION Whether to use mixed-integer linear relaxations.
Range: boolean
1
USE_NONLINEAR_RELAXATION Turns nonlinear relaxations on or off, regardless of problem type.
Range: auto, true, false
auto
USE_NONLINEAR_RELAXATION_FOR_CONVEX_MINLP Control the use of nonlinear relaxations specifically for convex MINLPs.
Note that “MINLP” here refers to all discrete problem classes up to and including DMINLP (so BQP, MIQCP, MBNLP, etc.). In other words, if your problem is nonlinear and has discrete variables, this option applies.
Range: auto, true, false
auto
USE_OBBT Whether to use Optimisation Based Bound Tightening.
Range: boolean
1
USE_PROBING Enables or disables probing techniques for domain reduction .
Range: boolean
1
USE_REDUCED_COST Enables or disables reduced cost domain reduction.
Range: boolean
1
USE_REFORMULATION_LINEARIZATION Enables or disables the use of RLT to create redundant constraints in order to tighten the formulation.
Range: boolean
1
USE_SIMPLIFICATION Enables or disables standard reformulation rules for nonlinear and discontinuous functions.
Range: boolean
1
USE_STRUCTURE_DETECTOR Whether linear variables in objective function that are defined by a linear equation should be replaced if possible.
Range: boolean
1 |
# Math Help - Find g for f(g(x)) = 5 + x when f(x) = x^2 + 3
1. ## Find g for f(g(x)) = 5 + x when f(x) = x^2 + 3
Find g for f(g(x)) = 5 + x when f(x) = x^2 + 3
Does anyone know how to find g?
I tried the following:
sqrt(x) results in x + 3 which is wrong...
sqrt(x) + 2 results in x + 4sqrt(x) + 7 which is wrong...
etc...
2. ## Re: Find g for f(g(x)) = 5 + x when f(x) = x^2 + 3
Nevermind, sqrt(x + 2) is the right answer. I finally found it!
3. ## Re: Find g for f(g(x)) = 5 + x when f(x) = x^2 + 3
Originally Posted by Lotte1990
Find g for f(g(x)) = 5 + x when f(x) = x^2 + 3
Does anyone know how to find g?
Can you solve ${y^2} + 3 = 5 + x$ for $y~?$
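(Worked through, for completeness: $y^2 + 3 = 5 + x$ gives $y^2 = x + 2$, so $y = \pm\sqrt{x+2}$. Taking $g(x) = \sqrt{x+2}$, defined for $x \ge -2$, gives $f(g(x)) = (x+2) + 3 = 5 + x$ as required.)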
4. ## Re: Find g for f(g(x)) = 5 + x when f(x) = x^2 + 3
Originally Posted by Lotte1990
Nevermind, sqrt(x + 2) is the right answer. I finally found it!
Looks like g can be the negative of this as well. |
Solved
# How do I find my DLL's PublicKeyToken if the project / assembly was not strongly named?
Posted on 2011-09-12
I'm having trouble finding the PublicKeyToken of my project, created in VS 2010 ASP.NET 3.5 (C#). I've run some tools to find the public key token, but now I find out that the project / assembly was not strongly named. I didn't know I was going to need to strongly name it. Is there a way to strongly name it, or is there a way to find the PublicKeyToken for it? It is causing my web.config to fail / error out.
Question by:mikesExpertExchange
Accepted Solution
käµfm³d 👽 earned 500 total points
ID: 36523506
If the assembly is not strongly named, then there is no public key token! You can create a new key file to sign the assembly. This won't be a key file in the sense of one obtained from the likes of Verisign et al., but it will allow you to strongly name the file. You'll find the option to create a key file on the "Signing" tab of your project's properties:
Once you check the box, you can click the dropdown and there will be an option for "<New...>". If you select this, a new dialog will open which allows you to name the key file. You can optionally specify a password for that file as well:
Once you click "OK", your assembly should be strongly named at that point. You can use the Strong Name utility "sn.exe" to inspect the value of the public key token. Open a VS Command Prompt and enter the following command:
sn.exe -T C:\path\to\your\compiled.dll
The PKT should print to the console window.
Author Closing Comment
ID: 36524332
thank you
Expert Comment
ID: 36524441
NP. Glad to help = )
|
# Cystic Fibrosis Foundation Pilot and Feasibility award
## Introduction
The Cystic Fibrosis Foundation Pilot and Feasibility award is provided to scientists who are working passionately to conduct painstaking research related to cystic fibrosis. The scientist must have the burning desire to make a difference in the world through a pioneering and assiduous endeavor to obliterate the menace of cystic fibrosis. The postdoctoral program gives financial help to scientists for a noble cause: the fight against the dreaded malady that is cystic fibrosis.
The research work of these skilled scientists will unveil the causes of the disease and produce remedies, providing relief to the patients who are suffering from this lethal disease, and in turn will usher in a future free from the shackles of cystic fibrosis. The laudable mission will be accomplished through the combined efforts of the scientists and the fellowship program.
## Eligibility for obtaining the Cystic Fibrosis Foundation Pilot and Feasibility award
Candidates must be enrolled in an MD or PhD program, at the preliminary or advanced level, to be eligible for the award. The program is open to candidates from all over the world, whether or not the candidate is an American applicant.
## Address of Cystic Fibrosis Foundation Pilot and Feasibility award
• Cystic Fibrosis Foundation, 6931 Arlington Road, Suite 200, Bethesda, Maryland 20814.
• Phone: 301 − 951 − 4422
• Fax: 301 − 951 − 6378
• Email: [email protected] |
Tag Info
What metrics would indicate a house bubble rather than genuine market values?
The first metric to look at is house-price-to-rent ratios. Rental prices capture the value of the housing (and housing-linked) services provided by a property, including things like how safe a ...
What is the importance of Epstein-Zin preferences?
This is only a quick answer, unfortunately. The key intuitive insight for Epstein-Zin is that they separate two distinct properties of preferences: risk aversion ("I'd prefer less uncertainty to more ...
Accepted
What is the importance of Epstein-Zin preferences?
I think CompEcon covered most of the points that I was going to mention. Just a few last thoughts: 1) Why are Epstein-Zin preferences important? The preferences are important because they allow you ...
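For reference, the standard Epstein-Zin recursion (in my notation, not taken from the answers above) is
$$V_t = \Big[(1-\beta)\,c_t^{1-\rho} + \beta\,\big(\mathbb{E}_t\big[V_{t+1}^{1-\gamma}\big]\big)^{\frac{1-\rho}{1-\gamma}}\Big]^{\frac{1}{1-\rho}},$$
where $\gamma$ governs risk aversion and $1/\rho$ the elasticity of intertemporal substitution; setting $\gamma = \rho$ recovers standard time-separable CRRA utility.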
Accepted
|
# On the Generalized d'Alembert's and Wilson's Functional Equations on a Compact Group
Let $G$ be a compact group and let $\sigma$ be a continuous involution of $G$. In this paper, we are concerned with the following functional equation $$\int_{G}f(xtyt^{-1})\,dt+\int_{G}f(xt\sigma(y)t^{-1})\,dt=2g(x)h(y), \quad x, y \in G,$$ where $f, g, h \colon G \to \mathbb{C}$, to be determined, are continuous complex-valued functions on $G$ such that $f$ is central. This equation generalizes d'Alembert's and Wilson's functional equations. We show that the solutions are expressed by means of characters of irreducible, continuous and unitary representations of the group $G$.
Keywords: Compact groups, Functional equations, Central functions, Lie groups, Invariant differential operators.
Categories: 39B32, 39B42, 22D10, 22D12, 22D15 |
Tutorial
#### Superposition
The H gate takes a pure qubit — $\ket{0}$ or $\ket{1}$ — and splits it, putting the qubit into a quantum state that contains both pentagons and triangles: a quantum state that is a superposition of two different qubelets.
Splitting two qubits considerably thickens the plot. Unlike classical computers, where an operation on one bit doesn't affect the others in the system, in quantum computing an action on one qubit influences the states of the others. To handle this characteristic of quantum bits, we must think of all possible states of the quantum bits at once, instead of the classical one-bit-at-a-time approach. To help us get comfortable with this shift in mindset, consider the following circuit in which two qubits are split as follows:
Each H gate splits the $\ket{0}$ qubit. But in quantum computing, you don't treat these split pentagons and triangles as individual units. Rather, each shape in the top qubit pairs up with each shape in the bottom qubit, forming the four combinations shown below:
These combinations, called qubelet combinations, form a mega-qubit.
If we feed the mega-qubit to other gates, then those gates operate on all the combinations in the mega-qubit simultaneously.
When more qubits are split, the mega-qubit automatically holds the combinations formed from the pentagons and triangles from all the split qubits. For example, when 3 qubits are split by the H gate, the resulting mega-qubit is shown below:
Each combination is formed by taking a shape from each of the three qubits in turn. This gives $2 \times 2 \times 2 = 2^3 = 8$ combinations in the mega-qubit on the right in the above figure.
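In conventional statevector terms (a NumPy sketch, outside the qubelets picture), splitting three $\ket{0}$ qubits with H gates produces all $2^3 = 8$ basis states with equal amplitude $1/\sqrt{8}$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = np.zeros(8)
state[0] = 1.0                                  # |000>
H3 = np.kron(np.kron(H, H), H)                  # H applied to each of the 3 qubits
state = H3 @ state

for i, amp in enumerate(state):
    print(f"|{i:03b}>: {amp:.4f}")              # all equal to 1/sqrt(8) ~ 0.3536
```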
In general, as the number of qubits you split becomes large, the number of combinations in the mega-qubit becomes astronomical, well beyond the means of classical computers. But this gigantic increase in complexity has no impact on a quantum computer, which is hard-wired to deal with all combinations simultaneously, regardless of how many there are.
This ability to deal simultaneously with all combinations, that is, all possible quantum states of the qubits, offers a spectacular way to routinely tackle industrial-scale applications that even today's supercomputers find impossible: in a quantum computer, Boolean operations are applied to all possible states at the same time, unlike a classical computer, in which only one combination of states is operated on at a time.
One of these combinations, the solution to your problem, is lurking in the mega-qubit. The goal in designing a quantum algorithm is to apply quantum gates to the qubits to tease this solution out from all the other combinations.
In the next section, you'll learn that the qubelets model explains entanglement, another quantum phenomenon that has no classical counterpart. |
Prime Rings with Generalized Derivations
Received: May 12, 2006   Revised: October 12, 2006
Key Words: prime ring; Lie ideal; generalized derivation.
HUANG Shu-liang: Department of Mathematics, Chuzhou University, Anhui 239012, China; School of Mathematics and Computer Science, Nanjing Normal University, Jiangsu 210097, China
FU Shi-tai: School of Mathematics and Computer Science, Nanjing Normal University, Jiangsu 210097, China
Hits: 3356
The concept of derivations and generalized inner derivations has been generalized to an additive function $\delta:R \longrightarrow R$ satisfying $\delta(xy)=\delta(x)y+xd(y)$ for all $x,y\in R$, where $d$ is a derivation on $R$. Such a function $\delta$ is called a generalized derivation. Suppose that $U$ is a Lie ideal of $R$ such that $u^{2}\in U$ for all $u\in U$. In this paper, we prove that $U\subseteq Z(R)$ when one of the following holds: (1) $\delta([u,v])=u\circ v$; (2) $\delta([u,v])+u\circ v=0$; (3) $\delta(u\circ v)=[u,v]$; (4) $\delta(u\circ v)+[u,v]=0$ for all $u,v\in U$. |
# Magnitude and direction angle
1. Jan 26, 2004
### freespirit
My problem is to determine the magnitude of the resultant force FR = F1 + F2 and its direction, measured counterclockwise from the positive x axis.
f1=250 lb @ 60 degrees from x
f2= 375 lb @ -45 degrees from x
Ok, I got the magnitude by doing this: the angle between the two forces is 60 + 45 = 105 degrees, so the included angle of the force triangle is
(360 - 2(105))/2 = 75 degrees
fr = sqrt(250^2 + 375^2 - 2(250)(375)cos(75))
fr = 393.188 ~ 393
then I got the angle by this:
375/sin x = 393.188/sin 75
x=67.1088
How do I get the resultant angle? What do I need to add to the 67 degrees?
2. Jan 26, 2004
### nautica
Break down the x and y components of each force as follows:
X = r cos(angle)
Y = r sin(angle)
Add the x and y components to get the x and y of the resultant. Use the Pythagorean theorem to get the resultant magnitude and use tan^(-1)(y/x) to get the resultant direction.
nautica
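A quick numeric check of this recipe (a Python sketch, not part of the original thread):

```python
import math

f1, a1 = 250, math.radians(60)    # F1: 250 lb at +60 degrees
f2, a2 = 375, math.radians(-45)   # F2: 375 lb at -45 degrees

x = f1 * math.cos(a1) + f2 * math.cos(a2)   # sum of x components
y = f1 * math.sin(a1) + f2 * math.sin(a2)   # sum of y components

mag = math.hypot(x, y)
ang = math.degrees(math.atan2(y, x)) % 360  # counterclockwise from +x axis

print(f"|FR| = {mag:.1f} lb at {ang:.1f} degrees")  # ~393.2 lb at ~352.9 degrees
```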
3. Jan 26, 2004
### himanshu121
Rough figure in the attachment, so
$$\tan\alpha = \frac{y\sin\theta}{x+y\cos\theta}$$
Last edited: Jan 26, 2004
4. Jan 26, 2004
### freespirit
Thank You
Thank you both for your help. |
# Quick Messages Plugin
quick_message_user_preference = true enables a new user level option under your user avatar menu > interface tab…
So if you enable the user_preference then each user has to set that user preference.
You can also use the quick message required badge option to restrict who has access to quick messages, for example limiting it only to staff, or perhaps only to licensed or certified users, to encourage users to complete the discobot new or advanced user tutorial.
@jameshahnii @Sevosik would be great if you could go into admin > Settings and search for quick and screenshot all of the values that show up (as they should all be related to quick messages).
3 Likes
Thanks @bletch. We’re not seeing the envelope at all. It just doesn’t show up. I’ve been up all night systematically removing every plugin I installed all day yesterday hoping to find a conflict, but so far I’m coming up with nothing. And I’m running out of plugins to test.
I’ve tried that button off and the other on, that on and the other off, both on, and all that good stuff. Nothing… I haven’t done any theme customizing. Standard light theme out of the box over here. I’m on a MacBook Pro Retina ~2014. Haven’t touched a line of CSS so far.
Think we’re going to have to wait for @angus to weigh in on this one.
Appreciate you taking a shot!
1 Like
@bletch Thanks, that warning should be fixed.
https://github.com/angusmcleod/discourse-quick-messages/commit/31244c6b445b7f4cbc8b0e2bddd86ddc01e9a596
@Sevosik @jameshahnii Sorry you’re having an issue guys.
As the installation of Quick Messages on my sandbox is working normally, we’ll (@ryanerwin is working with me on this plugin) need your help to get to the bottom of it. Please post
1. The current values of these site settings:
• quick message enabled
• quick message user preference
• quick message icon
2. A list of all the plugins installed on your instance.
3. Any messages in the web console that appear when you load your site (@Sevosik thanks, I see you’ve posted this already).
4. A link to your site (if possible).
You’ve tested some of this already (changing settings and removing plugins), but there may be some clues there that can help us. Thanks.
3 Likes
Sorry for the delay. Fell asleep working on this, woke up, and haven’t stopped since.
This is wild. In going through the exercise to take the screenshots and send over, I tried this setting again…
and it worked.
Seems there is a conflicting plugin out there and it’s one of 14. Going to start adding them back in one-by-one to see if I can uncover the culprit. Will keep you posted.
Thank you!
2 Likes
We have a winner! Or loser, depending on how you look at it. Turns out it was .
In addition to killing QMP, it makes your Plugins >> Plugins, Akismet, Chat Integrations, & Patreon pages blank. If anyone knows @Kasper, might want to let him know.
2 Likes
You should try the official discourse plugin Discourse Math Plugin
That might work.
2 Likes
Thanks man. I’m on it.
I’ve noticed something today: when a user receives a group message, the Quick Message button in the header shows a new-message indicator, but the message does not show on the list:
It’s somewhat confusing, could it be easily changed, @angus?
1 Like
I wish I could use this plugin just like you. Still not working, even after a couple of updates.
@MakaryGo I’ve been working with @angus on this…
I haven’t been using group messages so I haven’t tested that, but I’m pretty sure I know where that’s happening. I’ll make sure I can reproduce, and add it to the bug list for a fix.
@Sevosik Could you clarify what you mean by “not working”? I have run the plugin on a variety of systems… that’s not to say it’s flawless though.
Are you getting an error message?
Is the plugin showing up in your admin settings under plugins?
Have you enabled it in the admin settings?
Are you using any other plugins? Which ones?
How is your Discourse hosted?
We need some details to be able to help you.
Screenshots are often especially useful.
Thanks for reporting… I’ll look into this tonight and see if I can reproduce.
2 Likes
Hi guys, I hope you are well. Thanks @angus and @ryanerwin for this great plugin. I just added it and it works great.
I just noticed a little detail: when adding a picture I get an error, and the logs are below.
@Jeremie_Leroy could you set your language to english and post that again? (verbatim error message always the easiest to research)
If not, at least a translation would be good since my french is severely lacking…
Thank you sir
@davidkingham I’ve not had time to repro yet… Hopefully tomorrow…
1 Like
@ryanerwin sure, it says this :
The log is this one :
@davidkingham It looks like your error might be coming from the discourse category lockdown plugin. Try disabling that plugin temporarily to see if it fixes it.
@Jeremie_Leroy We’ve got your issue in hand. We’ll take a look at it soon.
https://github.com/angusmcleod/discourse-quick-messages/issues/47
3 Likes
I haven’t had a chance to test this because I don’t want to have the downtime, I’ll be setting up a sandbox soon to get around this. Another more fatal error has popped up recently; when I click on one of the messages it does not open the message box and this is in the console:
1 Like
@angus, apologies if this has been discussed before - but is there a way to speed up loading of the message notifications when you open the menu? The spinner shows up for a few seconds every time the menu is opened.
Thanks! |
# Vector¶
Methods:
class com.leapmotion.leap.Vector
The Vector struct represents a three-component mathematical vector or point such as a direction or position in three-dimensional space.
The Leap Motion software employs a right-handed Cartesian coordinate system. Values given are in units of real-world millimeters. The origin is centered at the center of the Leap Motion Controller. The x- and z-axes lie in the horizontal plane, with the x-axis running parallel to the long edge of the device. The y-axis is vertical, with positive values increasing upwards (in contrast to the downward orientation of most computer graphics coordinate systems). The z-axis has positive values increasing away from the computer screen.
Since
1.0
Public Functions
float angleTo(Vector other)
The angle between this vector and the specified vector in radians.
The angle is measured in the plane formed by the two vectors. The angle returned is always the smaller of the two conjugate angles. Thus A.angleTo(B) == B.angleTo(A) and is always a positive value less than or equal to pi radians (180 degrees).
If either vector has zero length, then this function returns zero.
float angleInRadians = Vector.xAxis().angleTo(Vector.yAxis());
// angleInRadians = PI/2 (90 degrees)
Return
The angle between this vector and the specified vector in radians.
Since
1.0
Parameters
Vector cross(Vector other)
The cross product of this vector and the specified vector.
The cross product is a vector orthogonal to both original vectors. It has a magnitude equal to the area of a parallelogram having the two vectors as sides. The direction of the returned vector is determined by the right-hand rule. Thus A.cross(B) == -B.cross(A).
Vector crossProduct = thisVector.cross(thatVector);
Return
The cross product of this vector and the specified vector.
Since
1.0
Parameters
float distanceTo(Vector other)
The distance between the point represented by this Vector object and a point represented by the specified Vector object.
Vector aPoint = new Vector(10f, 0f, 0f);
Vector origin = Vector.zero();
float distance = origin.distanceTo(aPoint); // distance = 10
Return
The distance from this point to the specified point.
Since
1.0
Parameters
Vector divide(float scalar)
Divide vector by a scalar.
Vector quotient = thisVector.divide(2.5f);
Since
1.0
float dot(Vector other)
The dot product of this vector with another vector.
The dot product is the magnitude of the projection of this vector onto the specified vector.
float dotProduct = thisVector.dot(thatVector);
Return
The dot product of this vector and the specified vector.
Since
1.0
Parameters
boolean equals(Vector other)
Compare Vector equality component-wise.
boolean vectorsAreEqual = thisVector.equals(thatVector);
Since
1.0
float get(long index)
Index vector components numerically.
Index 0 is x, index 1 is y, and index 2 is z.
float x = thisVector.get(0);
float y = thisVector.get(1);
float z = thisVector.get(2);
Return
The x, y, or z component of this Vector, if the specified index value is at least 0 and at most 2; otherwise, returns zero.
Since
1.0
float getX()
The horizontal component.
Since
1.0
float getY()
The vertical component.
Since
1.0
float getZ()
The depth component.
Since
1.0
boolean isValid()
Returns true if all of the vector’s components are finite.
If any component is NaN or infinite, then this returns false.
boolean vectorIsValid = thisVector.isValid();
Since
1.0
float magnitude()
The magnitude, or length, of this vector.
The magnitude is the L2 norm, or Euclidean distance between the origin and the point represented by the (x, y, z) components of this Vector object.
float length = thisVector.magnitude();
Return
The length of this vector.
Since
1.0
float magnitudeSquared()
The square of the magnitude, or length, of this vector.
float lengthSquared = thisVector.magnitudeSquared();
Return
The square of the length of this vector.
Since
1.0
Vector minus(Vector other)
Subtract vectors component-wise.
Vector difference = thisVector.minus(thatVector);
Since
1.0
Vector normalized()
A normalized copy of this vector.
A normalized vector has the same direction as the original vector, but with a length of one.
Vector normalizedVector = otherVector.normalized();
Return
A Vector object with a length of one, pointing in the same direction as this Vector object.
Since
1.0
Vector opposite()
A copy of this vector pointing in the opposite direction.
Vector negation = thisVector.opposite();
Return
A Vector object with all components negated.
Since
1.0
float pitch()
The pitch angle in radians.
Pitch is the angle between the negative z-axis and the projection of the vector onto the y-z plane. In other words, pitch represents rotation around the x-axis. If the vector points upward, the returned angle is between 0 and pi radians (180 degrees); if it points downward, the angle is between 0 and -pi radians.
float pitchInRadians = thisVector.pitch();
Return
The angle of this vector above or below the horizon (x-z plane).
Since
1.0
Vector plus(Vector other)
Add vectors component-wise.
Vector sum = thisVector.plus(thatVector);
Since
1.0
float roll()
The roll angle in radians.
Roll is the angle between the y-axis and the projection of the vector onto the x-y plane. In other words, roll represents rotation around the z-axis. If the vector points to the left of the y-axis, then the returned angle is between 0 and pi radians (180 degrees); if it points to the right, the angle is between 0 and -pi radians.
Use this function to get roll angle of the plane to which this vector is a normal. For example, if this vector represents the normal to the palm, then this function returns the tilt or roll of the palm plane compared to the horizontal (x-z) plane.
float rollInRadians = thatVector.roll();
Return
The angle of this vector to the right or left of the y-axis.
Since
1.0
void setX(float value)
The horizontal component.
Since
1.0
void setY(float value)
The vertical component.
Since
1.0
void setZ(float value)
The depth component.
Since
1.0
Vector times(float scalar)
Multiply vector by a scalar.
Vector product = thisVector.times(5.0f);
Since
1.0
String toString()
Returns a string containing this vector in a human readable format: (x, y, z).
Since
1.0
Vector()
Creates a new Vector with all components set to zero.
Since
1.0
Vector(float _x, float _y, float _z)
Creates a new Vector with the specified component values.
Vector newVector = new Vector(0.5f, 200.3f, 67f);
Since
1.0
Vector(Vector vector)
Copies the specified Vector.
Vector copiedVector = new Vector(otherVector);
Since
1.0
float yaw()
The yaw angle in radians.
Yaw is the angle between the negative z-axis and the projection of the vector onto the x-z plane. In other words, yaw represents rotation around the y-axis. If the vector points to the right of the negative z-axis, then the returned angle is between 0 and pi radians (180 degrees); if it points to the left, the angle is between 0 and -pi radians.
float yawInRadians = thisVector.yaw();
Return
The angle of this vector to the right or left of the negative z-axis.
Since
1.0
Public Static Functions
Vector backward()
The unit vector pointing backward along the positive z-axis: (0, 0, 1)
Vector backwardVector = Vector.backward();
Since
1.0
Vector down()
The unit vector pointing down along the negative y-axis: (0, -1, 0)
Vector downVector = Vector.down();
Since
1.0
Vector forward()
The unit vector pointing forward along the negative z-axis: (0, 0, -1)
Vector forwardVector = Vector.forward();
Since
1.0
Vector left()
The unit vector pointing left along the negative x-axis: (-1, 0, 0)
Vector leftVector = Vector.left();
Since
1.0
Vector right()
The unit vector pointing right along the positive x-axis: (1, 0, 0)
Vector rightVector = Vector.right();
Since
1.0
Vector up()
The unit vector pointing up along the positive y-axis: (0, 1, 0)
Vector upVector = Vector.up();
Since
1.0
Vector xAxis()
The x-axis unit vector: (1, 0, 0)
Vector xAxisVector = Vector.xAxis();
Since
1.0
Vector yAxis()
The y-axis unit vector: (0, 1, 0)
Vector yAxisVector = Vector.yAxis();
Since
1.0
Vector zAxis()
The z-axis unit vector: (0, 0, 1)
Vector zAxisVector = Vector.zAxis();
Since
1.0
Vector zero()
The zero vector: (0, 0, 0)
Vector zeroVector = Vector.zero();
Since
1.0 |
# [Tex/LaTex] Title of the theorem
Tags: formatting, theorems
In my paper the text inside a theorem is italic, while the heading "Theorem 1" is bold and not italic. I would like the title of the theorem to also be non-italic. The example is here:
Theorem 1. (The title) Let the set…
How can I do it? I tried to type
\begin{theorem}{title here}
but it doesn't work.
\begin{theorem}[The title] |
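A minimal compilable sketch for context (assuming the standard amsthm setup; with the default plain theorem style the optional argument is set upright, in parentheses, after the bold heading):

```latex
\documentclass{article}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}

\begin{document}
\begin{theorem}[The title]
Let the set\ldots
\end{theorem}
\end{document}
```

This prints "Theorem 1 (The title). Let the set..." with an italic body and a non-italic heading and title.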
# How to prove the tensor product of two copies of $\mathbb{H}$ is isomorphic to $M_4 (\mathbb{R})$?
How to prove the tensor product over $\mathbb{R}$ of two copies of the quaternions is isomorphic to the matrix algebra $M_4 (\mathbb{R})$ as algebras over $\mathbb{R}$? More precisely, the problem is to show the isomorphism $\mathbb{H} \otimes_\mathbb{R} \mathbb{H} \cong M_4 (\mathbb{R})$.
On the book "Spin Geometry" by Lawson and Michelsohn, page 27, there is an isomorphism defined by sending $q_1 \otimes q_2$ to the real endomorphism of $\mathbb{H}$ which is given by $x \mapsto q_1 x \bar{q_2}$, but I don't know how to deduce that this real algebra homomorphism is in fact an isomorphism.
-
Related to, but not a duplicate of math.stackexchange.com/questions/77178/… – rschwieb Dec 12 '13 at 14:13
It's a basic fact (here's a proof in the second proposition on page 157) that the tensor product of two central simple algebras is another central simple algebra. A proof should be available wherever central simple algebras are discussed.
By simplicity of the ring, the kernel of your (nonzero) ring homomorphism is automatically $\{0\}$, showing it is injective.
Finally, since the image and codomain are both 16-dimensional over $\Bbb R$, they are equal, showing the map must also be surjective.
-
Dear @DietrichBurde : Sure, but as you can see at the slight cost of difficulty, we get a simple solution to this problem and a useful piece of knowledge about tensor products. This seems better than just plodding through a verification for this particular mapping. Besides, one can immediately find this proof in any text on central simple algebras, so it is relatively accessible. Win-win! Regards. – rschwieb Dec 5 '13 at 20:16
More resources added to complete the post. I don't think it's necessary to reproduce a proof this common in print in this case. – rschwieb Dec 5 '13 at 20:17
OK, this is a nice answer ! – Dietrich Burde Dec 5 '13 at 20:21
Thank you very much! – Lao-tzu Dec 6 '13 at 0:34
The Brauer group of $\mathbb R$ is $Br(\mathbb R)\cong\mathbb Z/2\mathbb Z$; one way to see this is to note that there are exactly two (iso)classes of central simple f.d. real algebras, in view of Frobenius' theorem which classifies them, so there is no choice for the group structure. It follows that the tensor square of every central simple real algebra is isomorphic to a matrix algebra. In particular, $\mathbb H\otimes_{\mathbb R}\mathbb H$ is a real matrix algebra of dimension $16$, so must be isomorphic to $M_{4}(\mathbb R)$.
-
I first took this path, but then I couldn't see a quick way to decide between $M_2(\Bbb H)$ and $M_4(\Bbb R)$. Is there a quick way to choose between them? – rschwieb Dec 5 '13 at 21:34 |
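A hedged numerical sketch of the Lawson-Michelsohn map (Python with NumPy; all helper names are my own). It represents $q_1 \otimes q_2$ as the $4\times 4$ real matrix of $x \mapsto q_1 x \bar q_2$ and checks that the 16 basis tensors have linearly independent images, which gives surjectivity (and hence the isomorphism) directly:

```python
import itertools
import numpy as np

# Hamilton product on coefficient vectors [w, x, y, z]
def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

E = np.eye(4)  # the basis 1, i, j, k as coefficient vectors

def left_mult(q):   # matrix of x -> q x
    return np.column_stack([qmul(q, E[k]) for k in range(4)])

def right_mult(q):  # matrix of x -> x q
    return np.column_stack([qmul(E[k], q) for k in range(4)])

# images of the 16 basis tensors a (x) b under a(x)b -> (x -> a x conj(b))
images = [left_mult(E[i]) @ right_mult(conj(E[j]))
          for i, j in itertools.product(range(4), repeat=2)]
M = np.stack([im.flatten() for im in images])
print(np.linalg.matrix_rank(M))  # 16, so the map is onto M_4(R)
```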
# Unable to read citations from bibliography file
I am using a LaTeX file to write a report. The said LaTeX file requires me to have different chapters and plots in different folders. When I try to generate a PDF output from this LaTeX file, I encounter errors which say Citation 'citation name' on page 1 undefined. In short, the LaTeX compilation fails to read entries of my bibliography file (references.bib).
This is a skeleton of the said latex file (test.tex)
\include{macpap2}
\documentclass[12pt]{ociamthesis}
\usepackage{amsfonts,amssymb}
\usepackage{amsmath,graphics}
\usepackage{import}
\usepackage[pdftex]{graphicx}
\usepackage{booktabs}
\usepackage{wrapfig}
\usepackage{appendix}
\usepackage[nottoc]{tocbibind}
\usepackage[sort]{natbib}
\usepackage[biblabels]{authorindex}
\usepackage{minitoc}
\setcounter{minitocdepth}{3}
\title{Title Name}
\author{Firstname Lastname}
\college{College Name}
\renewcommand{\submittedtext}{\textit{Write something.}}
\degree{Degree name}
\degreedate{December 2018}
\renewcommand{\crest}{\beltcrest}
\begin{document}
\baselineskip=18pt plus1pt
\begin{frontpages}
\setcounter{secnumdepth}{1}
\setcounter{tocdepth}{2}
\maketitle
\end{frontpages}
\begin{romanpages}
\tableofcontents
\end{romanpages}
\input{chapter/chapter1}
%% BIBLIOGRAPHY %%
%
%\citeindexfalse % to stop indexing citations
%uncommnent next line to change bibliography name to references
%\renewcommand{\bibname}{References}
%\begin{flushleft} % This is to left-justify the bibliography
\bibliography{references.bib} % Full path to bibtex bibliography file (e.g. "references.bib")
% \adjustmtc %! To prevent tocbibind interfering with minitoc
%\bibliographystyle{agsm} % This one is modeled on the standard year/date referencing standard
%\end{flushleft}
\end{document}
This file has only one chapter (chapter1.tex). This is the skeleton of chapter1.tex.
\chapter{\label{ch:chapter1}Chapter1}
\section{Preliminaries}
I want to cite these: \citep[][]{Caldwell2002, Daly2003}
\begin{figure}
\includegraphics[width=0.41\textwidth]{Plots/dino2.png}
\includegraphics[width=0.41\textwidth]{Plots/dino2.png}
\caption{Two dinos.}
\label{fig:Dinos}
\end{figure}
My bibliography file (references.bib) is given underneath.
@article{Caldwell2002,
title = "A phantom menace? Cosmological consequences of a
dark energy component with super-negative equation of state",
journal = "Physics Letters B",
volume = "545",
number = "1",
pages = "23 - 29",
year = "2002",
issn = "0370-2693",
doi = "https://doi.org/10.1016/S0370-2693(02)02589-3",
url = "http://www.sciencedirect.com/science/article/pii/S0370269302025893",
author = "R.R Caldwell"
}
@article{Daly2003,
author={Ruth A. Daly and S. G. Djorgovski},
title={A Model-Independent Determination of the Expansion and Acceleration Rates of the Universe as a Function of Redshift and Constraints on Dark Energy},
journal={The Astrophysical Journal},
volume={597},
number={1},
pages={9},
url={http://stacks.iop.org/0004-637X/597/i=1/a=9},
year={2003},
abstract={abstract.}
}
I am using Texmaker 4.5. All files (*.tex, *.bib, *.cls, figures etc) used by this latex package can be found here.
What am I doing wrong? How can I make this latex package read and execute the contents of my bibliography file?
• does removing the .bib extension help? i.e.: \bibliography{references}. When you run Bibtex, what are the error messages that you get?
– Troy
Dec 21 '18 at 23:14
• @Troy : I tried your suggestion. Unfortunately, it didn't help. This is the message that I get when I run Bibtex on test.tex: Citation 'Caldwell2002' on page 1 undefined. In the subsequent line of the message, I see this: Citation 'Daly2003' on page 1 undefined. Dec 22 '18 at 0:09
• Even though it was not the main issue here, you should follow Troy's advice and write \bibliography{references} without file extension. This is the officially supported syntax that can be guaranteed to work. Some systems are more forgiving when you include the .bib extension, but this is not guaranteed. My MikTeX system (on Win 10) will not process files if the .bib extension is included in the \bibliography call. Dec 22 '18 at 7:07
• @moewe : I tried your suggestion. It works. Thanks. Dec 22 '18 at 16:47
To produce a bibliography with bibtex, the source file should specify (1) at least one bib file, and (2) the bibliography style to be used (e.g., \bibliographystyle{plainnat}). The \bibliographystyle{...} instruction is missing.
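A minimal sketch of the fix, assuming the agsm style (already present, commented out, in the posted preamble) is installed:

```latex
\bibliographystyle{agsm}   % or plainnat, etc.
\bibliography{references}  % note: no .bib extension
```

After adding this, run latex, then bibtex, then latex twice more so the citations resolve.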
# How do you solve the inequality 1/3x-4<-10?
Oct 2, 2016
$x < - 18$
#### Explanation:
$\frac{1}{3} x - 4 < - 10$
Add 4 to both sides: $\to \frac{x}{3} < - 6$
Multiply through by 3: $\to x < - 18$ |
## rebecca1233: Find the distance between the points (2, -3) and (5, -4)
1. hartnn
Distance between points (x1,y1) and (x2,y2) is $$\huge d=\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}$$
2. rebecca1233
3. hartnn
work needs to be done by you....
4. mayankdevnani
[drawing]
5. rebecca1233
i need help, and if you aren't going to help then bye; don't be rude.
6. TranceNova
Hi Rebecca - the aim of OpenStudy is to teach, which means people aren't going to give you answers; you need to interact to find the answer. hartnn provided a formula to find the answer - if you don't understand it, please ask for clarification.
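For reference, a hedged one-off check of hartnn's formula on the given points (Python; not part of the original thread):

```python
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

print(distance((2, -3), (5, -4)))  # sqrt(10) ~ 3.162
```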
QUESTION
# If $\left| z \right| = 2$, then the points representing the complex numbers $-1 + 5z$ will lie on: (A) a circle; (B) a straight line; (C) a parabola; (D) none of these
Hint- Here, we will proceed by letting any complex number $z' = x + iy = - 1 + 5z$ and then taking the modulus on both the sides and then solving using the formulas $\left| {az} \right| = a\left| z \right|$ and $\left| z \right| = \sqrt {{u^2} + {w^2}}$ for any complex number $z = u + iw$.
Given, Modulus of complex number z is $\left| z \right| = 2{\text{ }} \to {\text{(1)}}$
Let $z' = x + iy$ be any complex number which lies on the locus represented by the complex numbers $- 1 + 5z$
Given, $z' = - 1 + 5z \\ \Rightarrow z' + 1 = 5z \\$
By taking the modulus of the complex numbers on both the sides, we get
$\Rightarrow \left| {z' + 1} \right| = \left| {5z} \right|$
Using the formula $\left| {az} \right| = a\left| z \right|$, where a is any positive real number and z is any complex number, in the above equation, we get
$\Rightarrow \left| {z' + 1} \right| = 5\left| z \right|$
Using equation (1) in the above equation, we get
$\Rightarrow \left| {z' + 1} \right| = 5 \times 2 \\ \Rightarrow \left| {z' + 1} \right| = 10 \\$
By substituting $z' = x + iy$ in the above equation, we get
$\Rightarrow \left| {x + iy + 1} \right| = 10 \\ \Rightarrow \left| {\left( {x + 1} \right) + iy} \right| = 10{\text{ }} \to {\text{(2)}} \\$
For any complex number $z = u + iw$, the modulus of this complex number is given by
$\left| z \right| = \sqrt {{u^2} + {w^2}} {\text{ }} \to {\text{(3)}}$
Using the formula given by equation (3) in equation (2), we get
$\Rightarrow \sqrt {{{\left( {x + 1} \right)}^2} + {y^2}} = 10$
Squaring both sides of the above equation, we get
$\Rightarrow {\left( {\sqrt {{{\left( {x + 1} \right)}^2} + {y^2}} } \right)^2} = {\left( {10} \right)^2} \\ \Rightarrow {\left( {x + 1} \right)^2} + {y^2} = 100{\text{ }} \to {\text{(4)}} \\$
As we know, the general equation of a circle having centre $\left( {{x_1},{y_1}} \right)$ and radius r is given by
${\left( {x - {x_1}} \right)^2} + {\left( {y - {y_1}} \right)^2} = {r^2}{\text{ }} \to {\text{(5)}}$
Representing the equation (4) in the same form as given by equation (5), we have
$\Rightarrow {\left( {x - \left( { - 1} \right)} \right)^2} + {\left( {y - 0} \right)^2} = {\left( {10} \right)^2}{\text{ }} \to {\text{(6)}}$
The above equation (6) is the equation of a circle with centre at $\left( { - 1,0} \right)$ and radius 10.
Therefore, the points representing the complex numbers $- 1 + 5z$ will lie on a circle having centre at $\left( { - 1,0} \right)$ and radius 10.
Hence, option A is correct.
Note- Any complex number $z = u + iw$ has u as the real part and w as the imaginary part. Also, all real numbers are complex numbers, because any real number b can be written as $b = b + i\left( 0 \right)$, where the real part of the corresponding complex number is b itself and the imaginary part is zero.
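A hedged numerical spot-check of the result (Python with NumPy; not part of the original solution): sample a few points with $|z| = 2$ and confirm that $-1 + 5z$ always lies at distance 10 from $\left( { - 1,0} \right)$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 5)
z = 2 * np.exp(1j * theta)   # sample points with |z| = 2
zp = -1 + 5 * z              # the transformed points
print(np.abs(zp - (-1)))     # distance from (-1, 0): all 10.0
```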
My blood pressure has been getting to higher levels than usual (I normally have low-to-normal pressure), and I have been eating a lot of licorice. Are the two linked?
# [Xy-pic] Problem with skewing(?)
Richard Lewis xypic at rtf.org.uk
Thu Apr 8 01:32:43 CEST 2004
Ross Moore <ross at ics.mq.edu.au> writes:
> Hello Richard,
>
> On 24/03/2004, at 9:58 PM, Richard Lewis wrote:
>
>> Hi everyone,
>>
>> I am trying to define a command \SliceObj (that will take three
>> arguments: top, label and bottom) to typeset a vertical arrow,
>> framed with parentheses (...),
>> whose reference point should be the middle of the label.
>>
>> Something like
>>
>> / \
>> | {#1} |
>> | | |
>> | |{#2} |
>> | | |
>> | V |
>> | {#3} |
>> \ /
>>
>> with the reference in the middle of {#2}
>>
> Provided that I understand correctly what you are trying to do,
> I find 3 "errors" in your coding...
>
>> \newcommand{\SliceObj}[3]{%
>> %%% vertical arrow: #1=source, #2=target, #3=LABEL
>> %% 1. c is where we start. We draw an arrow starting 0.6cm above
>> c, and
>> %% ending 0.6cm under c.
>> \POS+<0cm,0.6cm>*+{#1}="TOP"+<0cm,-1.2cm>*+{#3}="BASE"
>> \POS\ar"TOP";"BASE"^-{#2}="LABEL"
>> %% 2. join all 3 bits together, reference is at "LABEL"
>> \POS"LABEL"."TOP"."BASE"
>> %% 3. add brackets (' and )' around the joined thing
>> *\frm{(}*\frm{)}="joined-thing",
>> %% 4. Add margin, using !C to ensure we get equal space all around
>> the
>> %% joined-thing
>> %% [I am using *+\frm{.} so we can see where the object at
>> %% "SOBJ-with-margin" extends to---when i get \SliceObj to work i
>> %% will want to use *+\frm{}, to get a margin (but no frame)]
>> "joined-thing"!C*+\frm{.}="SOBJ-with-margin",
>> %% 5. now set the reference point
>> "SOBJ-with-margin"*{\circ}, %we are here before skew
>> "joined-thing"*{\oplus}, %want reference to be here
>> %% try and skew reference of SOBJ-with-margin to joined-thing
>> "SOBJ-with-margin"!"joined-thing"="here"*{\otimes},
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^----------- see below
>> "here" %here is where our reference ended
>> }
>
>
> Firstly:
>
> %% try and skew reference of SOBJ-with-margin to joined-thing
> "joined-thing"."SOBJ-with-margin"="here"*{\otimes},
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Doesn't this do what you want ?
aha! that does indeed get the \otimes into the place i wanted it,
thanks very much.
Ah...in the reference manual, it says (table on page 8) that you can do
<POS>!<COORD>
and <COORD> can be "<ID>", according to the table on page 8. Should
it say
<POS>!<VECTOR>
[When i try
\begin{xy}
0;<2cm,0pc>:<0pc,2cm>::0,
0*{0},
(1,1)*+{\oplus}="ref",
(0,2)*+{\Omega}="omega",
"omega"!"ref"*{Z}
\end{xy}
It puts the Z at (1,3), which is "omega" shifted by the vector (1,1),
rather than at (1,1) which the manual seems to suggest
>
> The reference point of a merged object is at the coords
> of the reference-point of the 1st <POS> in the merge.
>
> The skew (!) operator is not the appropriate thing for
> merging different <POS>s. It is for shifting the base-point
> within a given <POS>.
>
>
> Now the \otimes and \oplus should coincide for each of
> your SOBJs. --- call these the (x+)-points.
yes indeed, thanks very much
>
>>
>> %\sobj gives a stand alone' \SliceObj
>> \newcommand{\sobj}[3]{\xygraph{[]!{\SliceObj{#1}{#2}{#3}}}}
>>
>
> Secondly:
>
> While this \SliceObj is a valid command for adding to
> an Xy-pic diagram, it does not create just a single <object>.
>
> To make a single <object>, with your current approach,
> you need a slight modification, using \xybox :
>
> \newcommand{\SliceObj}[3]{\drop\xybox{%
> ....
> ....
> }% end of \xybox
> }% end of \SliceObj
>
AHA! (this one i can't blame on the manual!)
>
>> \section{More complicated Diagrams}
>> More complicated diagram: We \emph{should} get a corner' and then $x$
>> and $y$ on top of each other in the bottom-left position\ldots
>
> This doesn't happen because the <POS> before your [d] hop
> is further to the right than you want it to be, since
> \SliceObj has placed several <object>s, at different places.
>
> Using \xybox, so that there is just one compound <object>
> placed, you now get the 'x' and 'y' coinciding.
yep
>
>
> However, the place where they coincide may not be where
> you want them to be...
>
> ... that is, the arrows between your SOBJs do not
> emanate from the (x+)-points, and the 'xy' does
> not align with the (x-)-points.
>
>
> Thirdly:
>
> the \xybox does not allow control over *where*,
> inside the <object> that it builds,
> the reference point is to be located.
>
> Accordingly, I've just devised a variant that builds
> the same kind of compound <object>, but also sets
> its reference-point to be at the <coord> of the
> last <POS> within the box; i.e., the <coord> for
> the <object> that has been built is at the current
> <POS> when the Xy-pic parsing has been completed.
> The LRUD extents are the size of the complete box;
> i.e., *not* the extents of the final <POS>.
>
> Here is coding that should go in your document's
> preamble -- eventually it should be added to xy.tex
>
<snip definition of \xyobjbox>
Thanks very much, i have been experimenting with \xyobjbox and it is
very useful for several things
> Now, if your definition starts:
>
> \newcommand{\SliceObj}[3]{\drop\xyobjbox{%
> ^^^^^^^^^^^^^^
>
> then you should get the correct (?) alignments.
>
yep, thanks very much |
## $\chi^2$ Value Question - COSMOMC
Use of Healpix, camb, CLASS, cosmomc, compilers, etc.
Lawrence Kazantzidis
Posts: 2
Joined: August 28 2019
Affiliation: University of Ioannina
### $\chi^2$ Value Question - COSMOMC
Hi everyone,
I am currently working on a project using CosmoMC and I have successfully performed some runs including the CMB (TT+lowTEB+lensing), BAO/RSD, and DES likelihoods, obtaining the corresponding $\chi^2$ values. I would like to ask a few questions regarding the obtained $\chi^2$ value.
1) Usually $\chi^2$ is normalized by dividing it by the degrees of freedom (dof). What is the dof of the CMB likelihoods (TT+lowTEB+lensing), so that I can get a qualitative hint of the quality of my fit?
2) I am not quite sure why there is an error bar on the total $\chi^2$ value. Do I have to also include the value of the prior?
Antony Lewis
Posts: 1596
Joined: September 23 2004
Affiliation: University of Sussex
Contact:
### Re: $\chi^2$ Value Question - COSMOMC
I think the normalization for lowTEB is fairly arbitrary. It's clearer with the 2018 likelihoods.
The chi2 values you quote are presumably the mean and standard deviation of the chi2 for each point in the sampled parameter space. You can do a minimization run if you actually want a best-fit value. |
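For question 1, a hedged sketch of the usual bookkeeping (plain Python; the function names and sample numbers are mine, and the dof itself has to come from the likelihood documentation): the reduced $\chi^2$ is $\chi^2/\mathrm{dof}$, and since a $\chi^2$ variable with dof degrees of freedom has mean dof and variance $2\,\mathrm{dof}$, a rough z-score indicates how unusual the fit is.

```python
import math

def reduced_chi2(chi2, dof):
    return chi2 / dof

def z_score(chi2, dof):
    # chi-square distribution: mean = dof, variance = 2 * dof
    return (chi2 - dof) / math.sqrt(2 * dof)

print(reduced_chi2(1100.0, 1000), z_score(1100.0, 1000))  # 1.1, ~2.24
```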
# Related rates and a lamp
1. Sep 7, 2005
### whkoh
A lamp is located on the ground 10 m from a building. A man 1.8 m tall walks from the light toward the building at a rate of 1.5 m s⁻¹. What is the rate at which the man's shadow on the wall is shortening when he is 3.2 m from the building ? Give your answer correct to two decimal places. [0.58 m/s]
----
I tried using similar triangles first to find the height of the lamp but nothing came out of that.
let h be the height of the lamp
$${h \over{x+10}} = {1.8 \over {x+3.2}}$$
$$h = { {1.8x+18} \over {x + 3.2}}$$
How is the velocity of the man related to the height of the shadow? Or, what else should I find?
Also, shouldn't the shadow of the man on the wall become taller as he approaches the building?
Last edited by a moderator: Apr 21, 2017 at 7:32 PM
2. Sep 7, 2005
### HallsofIvy
Staff Emeritus
What is x? The distance the man is from the wall or the distance the man is from the lamp?
First rule for a problem like this is to draw a picture and label it carefully. I took x as the distance the man is from the lamp, because that gave simpler distances. Drawing a line from the lamp to the top of the man's head to the wall (i.e. the light ray that gives the top of the shadow) I see two similar right triangles. The smaller has the man as opposite side, so height 1.8 m and base x. The larger has the wall as opposite side. Taking h as the height of the shadow, the height of this right triangle is h and the base is 10 m. Since they are similar triangles, $$\frac{h}{10}= \frac{1.8}{x}$$.
You appear to have tried the same thing but have "10+ x" and "3.2+ x". I see no reason to add the distance from the lamp to the wall, especially since the man is standing between the lamp and the wall. If your x is the distance from the man to the wall (rather than the lamp) then the formula would be $\frac{h}{10}= \frac{1.8}{10-x}$.
I finally got around to looking at your picture. You have the lamp on the opposite side of the wall from the man!? How is the light getting through the wall? Where is it casting the shadow?
Also, the "3.2 m" is (related to) the specific value of x at which you want to find the rate of change. It has nothing to do with the general formula connecting h and x.
Once you have a general "static" formula for two quantities, you can find a formula relating their "rates of change" by differentiating with respect to t (using the chain rule since there is no t in the formula itself).
With my formula $\frac{h}{10}= \frac{1.8}{x}$, the derivative of $\frac{h}{10}$ with respect to t is $\frac{1}{10}\frac{dh}{dt}$ and the derivative of $\frac{1.8}{x}= 1.8x^{-1}$ is $-1.8x^{-2}\frac{dx}{dt}= \frac{-1.8}{x^2}\frac{dx}{dt}$.
That is, from $\frac{h}{10}= \frac{1.8}{x}$ we get $\frac{1}{10}\frac{dh}{dt}=\frac{-1.8}{x^2}\frac{dx}{dt}$. Now, set x = 10 - 3.2 = 6.8 (since my x is the distance from the lamp to the man, not from the wall to the man), set $\frac{dx}{dt}= 1.5 m/s$ and solve for $\frac{dh}{dt}$.
And, yes, the man's shadow gets smaller as he walks toward the wall. When he is right at the wall, his shadow is exactly his height. When he is very close to the lamp, he is blocking a lot of the light and his shadow is huge.
Last edited: Sep 7, 2005
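A hedged symbolic check of the computation above (Python with SymPy; the variable names are mine):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)          # distance from the lamp to the man
h = 18 / x                       # from h/10 = 1.8/x
dh = sp.diff(h, t)               # = -18*x'(t)/x(t)**2
rate = dh.subs(sp.Derivative(x, t), sp.Rational(3, 2)).subs(x, sp.Rational(34, 5))
print(rate, float(rate))         # -675/1156, about -0.58 m/s
```

The shadow shrinks at about 0.58 m/s, matching the answer quoted in the problem statement.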
# Toolbar
## General
The Gutenberg Toolbar is visible at the top of the Gutenberg component when selected.
Not all toolbar options are available for every component. For each option, the linked components are shown in a label.
## Text alignment
Using the alignment drop-down from the toolbar, you are able to align the whole paragraph text to the left, make it center-aligned or orient it to the right.
### Options:
• Align text left
• Align text center
• Align text right
### Examples:
Left aligned text (default).
Center aligned text.
Right aligned text.
## Bold
Used quite frequently, bold formatting has its own button on the toolbar. The shortcut is CTRL + b (or CMD + b).
### Example:
This is a paragraph with bold text.
## Italic
Used quite frequently, italic formatting has its own button on the toolbar. The shortcut is CTRL + i (or CMD + i).
### Example:
This is a paragraph with italic text.
## Link
Use the chain link icon to insert a hyperlink into your highlighted text, or use the CTRL + k (or CMD + k) keyboard shortcut.
### Example:
This is a paragraph with a link.
## Highlight
Using the “Highlight” option allows you to change the color for selected text and its background.
Check the color contrast when creating a highlight!
### Example:
This is a paragraph with highlighted text.
## Inline code
Use the inline code feature to format code snippets within your text differently. Not only that, but inline code formatting also ensures the code is displayed rather than executed.
### Example:
This is a paragraph with `inline` code.
## Inline image
The inline image feature allows you to add an image to your paragraph. It has one option: enter the desired pixel width for your image.
### Example:
This is a paragraph with an inline image.
## Keyboard input
Using the “Keyboard Input” option allows you to add the `<kbd>` tag to selected text.
### Example:
This is a paragraph with a keyboard tag.
## Strikethrough
The "Strikethrough" option crosses out your highlighted text.
### Example:
This is a paragraph with strikethrough text.
## Subscript
The “Subscript” option allows you to add subscript to your highlighted text.
### Example:
This is a paragraph with subscript text. |
# Rural Areas with Good Internet Speeds (for Streaming, Gaming, Remote Work)
Ever wonder how remote you can go without sacrificing internet connectivity?
A couple months back, I mashed together some publicly available federal data to answer just that. By cross-referencing the FCC’s broadband deployment data against county level statistics for population density, I generated a ranking of counties with symmetric gigabit fiber sorted by remoteness.
The winner?
It’s a tie actually. Carter County, Montana and Kenedy County, Texas. The dataset calculates a population density of 0.3 people per square mile for both.
Or so I thought, until I realized I could cheat and look up more precise numbers on Wikipedia to break the tie. Drum roll, please…
## The results
Kenedy County Texas is #1! With a mere 416 people populating almost 2,000 square miles, they somehow manage to have gigabit fiber available through VTX Communications.
Range Telephone Cooperative in Carter County Montana is #2, with ~1,100 people on ~3,300 square miles.
The final ranking ended up with more than 2,600 entries, unfortunately, so it's kind of overwhelming, but I've reproduced it in the table below. You can also download the data set by clicking here.
[Table: full ranking of counties, embedded in the original post]
Whenever you’re done gorging yourself on the heady rush that is sorted data, I’ll talk specifics about a couple of especially promising rural areas with good internet…
## EPB in Tennessee
Chattanooga has been branding itself as “the Gig city,” ever since its regional electrical company, EPB, ran gigabit fiber to its customer base in 2010. This created the US’s largest fiber grid, spanning ~600 square miles. Meaning, don’t let the word “city” here turn you off. Sure, it covers Chattanooga, but their network extends out into the country and even across state lines into neighboring Georgia. (This is the map of their entire coverage area.)
Now, it's pretty damn cool that they serve all of this territory with symmetric gigabit, but it's cooler still that they do it at a cost to subscribers of less than $70/month. The craziest thing, though, is what EPB did to their grid in 2015: they upgraded Chattanooga from "the Gig city" to "the 10 Gig city," along with the rest of their network. That's right. EPB will sell you a symmetric 10 Gb/s internet pipe and they'll run it way out into the country. It's $300/mo but, hey, they're the only residential provider offering it in the United States at all. The rest of the country's best is a tenth the speed.
Insane. EPB and the rest of you crazy bastards in Tennessee, I salute you. Can’t wait to hear about what y’all come up with next.
## Missouri’s Co-Mo.net
I initially stumbled onto EPB while looking for raw land without building codes that was also served gigabit fiber. The bit of EPB’s grid that ventures into Sequatchie County, TN, is just such a region. Unfortunately, it’s the bourgeois expensive area known as “Signal Mountain.” Folk are asking for 10 grand/acre, which is hefty for Appalachia. Too rich for me. I had to expand my search.
The result: Morgan County, MO, and their ISP, Co-Mo.net. Co-Mo actually covers more than just Morgan county but what’s cool about Morgan county is that it’s blissfully free of building codes, as long as your property isn’t on the lake. Plus, if you own more than 5 acres, you can choose to waive even septic permitting and compost your own waste, like via a humanure system.
It’s more rural than EPB and it’s cheap, too. You can sometimes find parcels for \$1500/acre. (Most of the cheapest listings are in the neighborhood called Ivy Bend and I’ve heard from locals that this area is sort of sketchy. I’m not sure whether they mean “weird and poor neighbors” sketchy or “theft and violent crime” sketchy. It’s no real problem either way, as I pretty frequently see cheap acreage for sale in the “nicer”, Eastern part of the county too.)
I’ve written before, too, about the particular promise of the Ozarks for homesteading, like in my guide on selecting the best place to buy homesteading land.
## How to Find More Rural Areas With Good Internet
If none of the above are what you’re looking for, the next thing you’ll want to do is scroll through this list of rural internet cooperatives and make a list of the companies operating in your preferred region. Then, Google each cooperative from your list and make notes on things like:
• Extent of service area
• Status of current project. Is the grid running and complete? Partially online and still expanding? Not up yet and under construction? Only just announced?
• Price, speed.
• If you’re planning on developing raw land, be sure you determine what is required to hook a new parcel to the co-op’s grid. Sometimes it’s impossible or tens of thousands of dollars. Don’t get burned!
A similar tack you can try, if you're unable to find anything in your area that way, is to start your initial list by Googling "internet cooperative" together with the name of your state, and then digging more into whatever that search finds.
Finally, be sure to check out my Cheap Houses, Great Internet page. It may have what you’re looking for, too.
## You've read this far—want more?
Subscribe and I'll e-mail you updates along with the ideas that I don't share anywhere else. |
Eigenvalues and similar matrices
If $$A$$ and $$B$$ are two $$n\times n$$ matrices with the same eigenvalues, such that each eigenvalue has the same algebraic and geometric multiplicity for both, are $$A$$ and $$B$$ similar?
If $$A$$ is diagonalizable then the claim is true. But is it still true when the sum of the geometric multiplicities is not $$n$$?
• Could someone help to clarify which answer is correct plz. Answers contradict each other – Cloud JR Jun 19 at 12:59
• Do you mean "there exists an eigenvalue having same algebraic and geometric multiplicity", or "for each eigenvalue, the algebraic and geometric multiplicity are the same"? Your wording makes it sound like you are asking the former, but this is so trivially false that one might assume that you are asking the latter. – Acccumulation Jun 19 at 15:11
• @javadba , this is not my homework problem, i know similar matrices have same eigenvalues , with same multiplicity for each eigenvalue, and i think about the converse, but I can't prove it, so i posted it here – Cloud JR Jun 20 at 22:23
• @Acccumulation, i am asking the latter, and i will edit it to make it precise. I'm sorry for late reply, btw – Cloud JR Jun 20 at 22:24
• @CloudJR apologies for the confusion due to my initially incorrect answer, I hope that your question has been answered. – pre-kidney Jun 21 at 3:31
No, it's famously false. The usual counterexample is $$A=\begin{pmatrix}0&1&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\end{pmatrix}, B=\begin{pmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}$$ for both of which $$0$$ has algebraic multiplicity $$4$$ and geometric multiplicity $$2$$.
The result that holds is that two matrices $$A, B\in \overline{\Bbb F}^{n\times n}$$, where $$\overline{\Bbb F}$$ is an algebraically closed field, are similar if and only if, for all $$\lambda \in \overline{\Bbb F}$$ and for all $$m\in\Bbb N$$, $$\dim\ker (A-\lambda id)^m=\dim\ker(B-\lambda id)^m$$. This condition trivializes to yours as soon as $$n\le 3$$.
If $$A,B\in \Bbb F^{n\times n}$$ and $$\Bbb F$$ isn't algebraically closed, then the same result holds, in the sense that they are similar if and only if they are similar as matrices in $$\overline{\Bbb F}^{n\times n}$$ or, equivalently, if and only if $$\dim\ker p(A)^m=\dim\ker p(B)^m$$ for all $$m\in\Bbb N$$ and for all irreducible polynomials $$p\in\Bbb F[x]$$.
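A hedged numeric check of this criterion on the counterexample above (Python with NumPy; the helper name is mine): the kernel dimensions of powers distinguish the two matrices at $$m=2$$.

```python
import numpy as np

def dim_ker(M):
    # dimension of the kernel, via rank-nullity
    return M.shape[1] - np.linalg.matrix_rank(M)

A = np.zeros((4, 4)); A[0, 1] = 1; A[2, 3] = 1  # two 2x2 Jordan blocks at 0
B = np.zeros((4, 4)); B[0, 1] = 1; B[1, 2] = 1  # a 3x3 and a 1x1 block at 0

for m in (1, 2, 3):
    print(m, dim_ker(np.linalg.matrix_power(A, m)),
             dim_ker(np.linalg.matrix_power(B, m)))
# m=1: 2 2, m=2: 4 3, m=3: 4 4 -> not similar despite equal multiplicities
```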
• I think a similar matrix has same eigenvalues, eigenvectors , algebraic and geometric multiplicities but the converse is not true. Your example is for the converse part. Am I right ? – nmasanta Jun 19 at 6:29
• Your suggestion is wrong. – Saucy O'Path Jun 19 at 6:31
• Look at the link "math.stackexchange.com/questions/8339/…" , specially in the accepted answer. – nmasanta Jun 19 at 6:33
• @nmasanta $\small{\begin{bmatrix}1&0\\0&0\end{bmatrix}}$ and $\frac12\small{\begin{bmatrix}1&1\\1&1\end{bmatrix}}$ are similar but do not have the same eigenvectors. – amd Jun 19 at 7:03
• @SaucyO'Path "Your suggestion is wrong". While this is generally a good answer that comment is not helpful : pls specify why. – javadba Jun 19 at 14:45
[Begin Edit]
My initial answer was incorrect, but I believe it is interesting to explain how and why it is incorrect and provide some comments elaborating upon the correct answers posted here (I have left my initial incorrect answer unedited below).
The Jordan normal form classifies all matrices up to similarity transformations. It shows that matrices have a two step decomposition. The first step consists of the eigenvalues themselves, and the second step consists of the Jordan blocks corresponding to a given eigenvalue.
For the question under consideration here (how much information is revealed by knowing the algebraic and geometric multiplicities), distinct eigenvalues may be treated separately from one another and thus the first step in the decomposition is not essential to the question. Thus, one may focus on a single eigenvalue, and furthermore shift the eigenvalue to $$0$$. This leads to considering nilpotent matrices. A matrix is nilpotent if its characteristic polynomial is $$x^n$$, which in particular implies that it is an $$n\times n$$ matrix.
Any such matrix has eigenvalue $$0$$ with algebraic multiplicity $$n$$. Moreover, the eigenspace of $$0$$ coincides with the kernel of the matrix, from which one can see that the eigenspace is equal to the direct sum of the kernels of the Jordan blocks. In particular, the geometric multiplicity (=dimension of eigenspace) is equal to the number of Jordan blocks, since each has a kernel of dimension $$1$$.
Thus we see there are two cases where knowing the algebraic and geometric multiplicity is sufficient to reconstruct the matrix up to similarity: either when the algebraic and geometric multiplicities coincide (equivalent to diagonalizability), or when the geometric multiplicity is $$1$$.
[End Edit]
Yes, the knowledge of all the algebraic and geometric multiplicities of all eigenvalues of a matrix is sufficient to determine the matrix up to similarity transformations. This follows (and is equivalent to the existence of) the Jordan normal form.
• See the other two answers. – amd Jun 19 at 6:57
• @MiguelBoto knowing that the geometric multiplicity of each eigenvalue is $1$ implies that each eigenvalue consists of a single Jordan block with size given by the algebraic multiplicity. This is sufficient to determine the Jordan normal form, and hence recover the matrix up to similarity. – pre-kidney Jun 20 at 4:40
• @MiguelBoto no, it is not. For example, consider the matrix $a_{i,i+1}=1$ and everywhere else $0$ (i.e., the $1$'s are right above the diagonal). This is not diagonalizable since the algebraic multiplicity of $0$ is $n$ but the geometric multiplicity is $1$. – pre-kidney Jun 20 at 5:18
counter example: $$\begin{bmatrix} -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{bmatrix}$$
$$\begin{bmatrix} -1 & 1 & 0 & 0\\ 0 & -1 & 1 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1\\ \end{bmatrix}$$
these two matrices have the same eigen values and same geometric multiplicity and are not similar. The geometric multiplicity of the eigen value only tells you the number of blocks in the Jordan Normal form, the size of the largest block for each eigenvalue is the first exponent $$k$$ such that $$\dim[N(A-\lambda I)^k]=m$$ where $$m$$ is the algebraic multiplicity of the corresponding eigenvalue $$\lambda$$ |
# Math Help - Inverse matrix
1. ## Inverse matrix
Find the inverse of the matrix
A = |-1 0 0 0|
|0 -1 3 0|
|0 0 2 0|
|-3 3 -3 2|
Thanks!
2. also
if you take the matrix A and use a sequence of row operations to reduce it to I
then those same operations applied to I will give you A inverse
since you're basically doing (P1)(P2)...(Pn)A = I
which implies (P1)(P2)...(Pn) = A^-1
an easy way to do this is to take [A|I] and reduce the left part to I
what remains on the right will be A inverse
example:
1 1
0 1
you take
1 1 1 0
0 1 0 1
then your row operations to reduce the left part into I is simply R1-R2
==>
1 0 1 -1
0 1 0 1
so the inverse is
1 -1
0 1
now you can do it for the larger matrix
3. "Online Matrix Calculator"
Thank you for that! Now, I'm on an old computer, so the instructions doesn't show up. But if I did it right, the inverse of the matrix should be:
-1 0 0 0
0 -1 3/2 0
0 0 1/2 0
-3/2 3/2 -3/2 1/2
Is this right?
4. Originally Posted by gralla55
"Online Matrix Calculator"
Thank you for that! Now, I'm on an old computer, so the instructions doesn't show up. But if I did it right, the inverse of the matrix should be:
-1 0 0 0
0 -1 3/2 0
0 0 1/2 0
-3/2 3/2 -3/2 1/2
Is this right?
Yes, it's correct. I used Matlab and ended up with the same answer as you.
Code:
A =
-1 0 0 0
0 -1 3 0
0 0 2 0
-3 3 -3 2
EDU>> inv(A)
ans =
-1.0000 0 0 0
0 -1.0000 1.5000 0
0 0 0.5000 0
-1.5000 1.5000 -1.5000 0.5000 |
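A hedged exact-arithmetic check (Python with SymPy; not from the thread), which avoids Matlab's decimals and shows the fractions directly:

```python
import sympy as sp

A = sp.Matrix([[-1, 0, 0, 0],
               [ 0, -1, 3, 0],
               [ 0, 0, 2, 0],
               [-3, 3, -3, 2]])
sp.pprint(A.inv())  # reproduces the hand-computed inverse with 3/2, 1/2 entries
```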
# Profile of Professor Wang Mingqiang, Institute of Information Security, School of Mathematics, Shandong University
## Wang Mingqiang (王明强)
Born 1970; professor and supervisor of master's students.
• ### Academic Experience
• 2011-present: Professor, School of Mathematics, Shandong University
• 2007-2011: Associate Professor, School of Mathematics, Shandong University
• 2004-2007: Associate Professor, Qufu Normal University
• 2004-2006: Postdoctoral researcher, School of Computer Science and Technology, Shandong University
• ### Education
• July 2004: Ph.D., Shandong University
• July 1998: M.S., Shandong University
• July 1995: B.S., Qufu Normal University
• ### Projects
• 2013-2017: Applications of related mathematical problems in cryptanalysis and cryptographic design, National 973 Program (Ministry of Science and Technology), subproject leader
• 2013-2016: Computational problems on elliptic curves related to cryptographic algorithms, NSFC General Program, principal investigator
• 2009-2012: Computational problems in elliptic-curve cryptographic algorithms, Ministry of Education New Teacher Fund for Doctoral Programs, principal investigator
• 2008-2011: Fast implementation of bilinear pairings on elliptic curves and related problems, Shandong Provincial Natural Science Foundation, principal investigator
• 2007-2012: Number-theoretic and algebraic secure computation, National 973 Program (Ministry of Science and Technology), subproject leader
• 2007-2009: The integer factorization problem and the discrete logarithm problem in public-key cryptography, Shandong Provincial Postdoctoral Special Fund, principal investigator
• ### Publications
1. (with Haiyang Xue, Tao Zhan) Fault Attacks on Hyperelliptic Curve Discrete Logarithm Problem over Finite Fields, China Communications, 2012, 9(11): 150-161.
2. (with Tao Zhan) Analysis of the fault attack ECDLP over prime field, Journal of Applied Mathematics, Vol 2011, doi:10.1155/2011/580749, 1-11.
3. (with Xiaobo Feng) Attack on the cryptosystem based on DLP, CIS 2011, 896-899.
4. (with Xiaoyun Wang, Tao Zhan and Yuliang Zheng) Skew-Frobenius Map on Twisted Edwards Curves, ICIC Express Letter, 5(6), 2011, 2089-2094.
5. (with Cai Jie) Constructing pairing friendly curves with small p, ICICIS 2011, 195-198.
6. (with Haifeng Zhang) A new method of constructing a lattice basis and its applications to cryptanalyse short exponent RSA, Mathematical problems in engineering, 2010, 1-11.
7. (with Shang He) Some optimal pairing, CIS 2010, 11-14 Dec. 2010 On page(s): 390 – 3934.
8. (with Leibo Li) A Note on Twin Diffie-Hellman Problem, CIS 2009, 451-454.
9. (with Qin Jing, Zhao Huawei) Non-interactive oblivious transfer protocols, Proceedings – 2009 International Forum on Information Technology and Applications, IFITA 2009, v 2, 2009, 120-124.
10. (with Xiaoyun Wang, Guangwu Xu, Lidong Han) Fast Scalar Multiplication on a Family of Supersingular Curves over $\mathbb{F}_{2^m}$, The 4th International Conference on Information Security and Cryptology, 2008.
11. (with Qin Jing) A note on a provable secure encryption scheme, ProvSec 2008, J. Shanghai Jiaotong Univ. (Sci.)13(2), (2008), 655-658.
12. On the Sum of a Prime, the Square of a Prime and the $k$-th Power of a Prime, Indian J. Pure and Appl. Math, 39(3), (2008), 251-271.
13. (with Xu Lingling) New id-based signatures without trusted PKG, 2008 Workshop on Knowledge Discovery and Data Mining, 2008, (2008), 589 – 593.
14. (with Wei Puwen, Wang Wei) A Note on Shacham and Waters Ring Signatures, International Conference on Computational intelligence and Security 2007, (2007), 652-656.
15. (with Chunyan Song, Jianliang Xu, Shenhua Li) A Role-Based Secure Workflow Model, The Sixth International Conference on Grid and Cooperative Computing(GCC 2007), (2007), 764-774.
16. (with Meng Xianmeng) On additive problem with prime numbers of Special type, Demonstratio Mathematica. vol(XL), (2007), 271-287.
17. Hua’s theorem in thin subsets, (Chinese) J. Math. (Wuhan) 27 (2007), no. 1, 65-72.
18. (with Wang Xiaoyun, Zheng Shihu) A New Fuzzy Digital Signature Scheme, Computer Engineering. 32(2006), no. 23, 40-42.
19. (with Meng Xian Meng ) The exceptional set in the two prime squares and a prime problem, Acta Math. Sin. (Engl. Ser.) 22 (2006), no. 5, 1329-1342.
20. (with Meng Xianmeng) A hybrid of theorems of Goldbach and Piatetski-Shapiro, Chinese Ann. Math. Ser. B 27 (2006), no. 3, 341-352.
21. (with Ji Jia Hui; Li Da Xing) New proxy multi-signature, multi-proxy signature and multi-proxy multi-signature schemes from bilinear pairings, (Chinese) Chinese J. Comput. 27 (2004), no. 10, 1429-1435.
22. On the sum of a prime and two prime squares, (Chinese) Acta Math. Sinica (Chin. Ser.) 47 (2004), no. 5, 845-858.
23. The sum of a prime and the square of a prime, (Chinese) Acta Math. Sinica (Chin. Ser.) 47 (2004), no. 4, 695-702.
24. (with Liu Tao) On the sum of a prime and a $k$-power of a prime, Adv. Math. (China) 33 (2004), no. 3, 363-368.
25. (with Yu Jiguo) On the sum of a prime and a $k$-power of a prime, Pure Appl. Math. (Xi'an) 19 (2003), no. 1, 62-67.
26. Sums of a prime, and a square of prime, and a $k$-power of prime, Northeast. Math. J. 18 (2002), no. 4, 283-286.
27. (with Shi Ke Fu) A problem related to the sum of each digit's $k$th-power of a natural number, (Chinese) Qufu Shifan Daxue Xuebao Ziran Kexue Ban 27 (2001), no. 2, 40-41.
# Can observation change entropy?
I don't know whether this even makes any sense, but if 'observation' can be considered as 'receiving and reading information', can an act of observation (of a system) change (increase or decrease) its entropy?
• Take a closed box filled with oxygen. Open it up to observe it. What happened to the entropy of the box? Feb 9, 2016 at 19:08
• do you want a fundamental example where entropy is not changed? a little tricky, but it does fit the case
– user46925
Feb 9, 2016 at 19:42
• @igael Yes, of course. Feb 10, 2016 at 3:14
• passive observations of the black body radiation are still observations. Let's observe a galaxy at 7 billion ly and assume the worst case, where the entropy produced by the observation may change the entropy of a nearby galaxy, meaning that its effect is not negligible. In an expanding universe, the device can't send a signal today to the 7-billion-ly galaxy and so can never influence its entropy. In a static universe, the same holds at any distance: there is a delay between a passive observation and a possible change.
– user46925
Feb 10, 2016 at 4:07
• Don't get hung up on the information aspect. It's not essential. The essential property of observations is that they are irreversible, which answers your question in a single word. Yes, observations do change entropy. Feb 10, 2016 at 4:41
There are two notions of "state" in statistical mechanics. A "microstate" contains 100% of the information about a physical system at a given point in time. A "macrostate" is a probability distribution over microstates. If a macrostate assigns every microstate $$i$$ a probability $$p_i$$, the entropy is $$S = -k_B \sum_i p_i \log(p_i)$$. If a macrostate assigns an equal probability $$1/\Omega$$ to each of $$\Omega$$ microstates, we can see the entropy is $$S = k_B\log \Omega$$, which is the more traditional expression.
On a fundamental level there is only one true microstate your system is in at a given moment. Entropy comes from uncertainty over which microstate is the true one.
The point is that different observers might describe reality with different macrostates. For example, say your room is very messy and disorganized. This isn't a problem for you, because you spend a lot of time in there and know where everything is. Therefore, the macrostate you use to describe your room contains very few microstates and has a small entropy. However, according to your mother, who has not studied your room very carefully, the entropy of your room is very large.
So yes, the entropy of a system will decrease upon measurement, assuming your measurement is good and your macrostate becomes better resolved. However, global entropy is, with overwhelming probability, always going to increase anyway, even if the entropy of a subsystem decreases.
Addendum: Introductory discussions of entropy are often abstract and confusing. It can be helpful to look at concrete examples. Many statistical mechanics textbooks literally "count" the number of microstates of a box of gas with $$N$$ particles in a volume $$V$$ with total energy $$U$$. This is given by the Sackur-Tetrode equation. (Perhaps there is a good derivation on the internet somewhere. I like the one in Dan Schroeder's textbook too.) Here the microstates are labelled by all the positions and momenta of all $$N$$ particles, and the number of microstates is given by an integral over the volume of microstates with integration measure
$$\frac{1}{h^{3N}}d^3 x_1 \ldots d^3 x_N d^3 p_1 \ldots d^3 p_N$$ Hopefully that should give you a concrete way to think about entropy.
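To see actual numbers come out of this counting, here is a small sketch of my own (rounded constants and helium-like inputs; treat the values as illustrative). With $$U = \tfrac{3}{2}Nk_BT$$, the Sackur-Tetrode expression lands close to the measured molar entropy of helium.

```python
import math

h = 6.626e-34    # Planck constant, J s
kB = 1.381e-23   # Boltzmann constant, J/K

def sackur_tetrode(N, V, U, m):
    """Entropy of an ideal monatomic gas: the log of the counted phase-space volume."""
    return N * kB * (math.log((V / N) * (4 * math.pi * m * U / (3 * N * h**2))**1.5) + 2.5)

N = 6.022e23              # one mole of atoms
V = 0.0248                # m^3, ideal-gas molar volume near 300 K and 1 atm
m = 6.64e-27              # kg, mass of one helium atom
U = 1.5 * N * kB * 300.0  # J, using U = (3/2) N kB T
print(sackur_tetrode(N, V, U, m))  # ~126 J/K per mole
```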
• Excellent. In general, popularised explanations are simplified to the point of being confused or even self-contradictory and the only way to keep to sense is to keep in contact with the first principles at all times. Whenever an eminent physicist writes about the arrow of time, and starts treating entropy as if it were a thing in itself, you know that it is time to bin the book and discard the physicist. Jun 14, 2019 at 7:01
There are a lot of definitions of entropy. I was just trying to get an answer to the same question. I studied physics and the concept of entropy was horrible as the definition kept changing. The old historical Entropy is not a physical thing; it started as some number that seemed to explain why you could not reverse most processes.
Now, in the modern world, you can define the maximum or equilibrium entropy by the amount of information you need to describe a process. This maximum doesn't change. But as I write this message in English I'm not using the full information capacity; because of that, some people will state that the actual entropy is lower. If we add temperature, the bits in this message will switch around and we will end up with a message you can't describe with less than the maximum information.
If you already have information about the process, the remaining information you need to know is lower. The big question is whether this is a physical property.
In a quantum system where Alice and Bob each have a brain capacity of one bit, Bob knows a number that is 0 or 1, and Alice knows nothing, both have a maximum entropy of 1. Bob's used entropy in this case is 1, Alice's used entropy is 0. If Alice asks Bob about his number and remembers it, she also has a used entropy of 1. Now, with all this additional information, we know the entropy of [Alice + Bob] is 1, as both contain the same information. The maximum entropy of Alice + Bob is 2.
Nobody has ever defined entropy in a way that gives us a clear definition of the amount of entropy the system Alice + Bob has in such a coupled state, and while it might play a role in their physical behavior, we don't have a way to measure how much they know about each other.
That makes the term "Entropy" a somewhat confusing term.
By the way: if you believe in Realism in its strict form, the answer is NO; your observation can't change a physical property. Or, as I concluded when I learned about entropy: NO, according to most descriptions entropy can't be a physical property.
If you abandon realism the actual used entropy could be a physical property. Any physical law which depends on your knowledge would be the end of Realism.
• "That makes the term "Entropy" a completely useless and confusing term." Tell that to Clausius! [Even $I$ thought that the concept of entropy vastly increased the predictive power of thermodynamics!] May 11, 2019 at 22:49
• Yes, it did increase it, but it wasn't clear what it was, and with that the definition has changed about 20 times in the past 100 years. I don't mean that it is useless to think about it. It is useless because you don't know what someone means when he says it. The maximum entropy is useful and can be defined. But the "entropy" below maximum has a subjective meaning. But I changed it a little :-)
– Daan
May 11, 2019 at 22:59
I'm going to go out on a limb here and "answer" instead of comment, because I struggled with the concept of entropy for so long until I came to this simple conceptual definition: entropy is the measure of how far away a system is from equilibrium.

From a Physics 101 point of view, observation cannot change this. If a box of gas is at equilibrium, no amount of observation is going to change it to a state less in equilibrium. Your observation might give you perfect information about the microstate the box of gas is in, but that does not make it more likely to transition to a microstate not in equilibrium.

Here it is helpful to consider equilibrium in two ways. One is that in equilibrium you cannot extract work from the system: a box of gas in equilibrium cannot be made to move a piston or anything else. Two is to consider that in equilibrium the system is in a macrostate that has many more microstates similar to it (in equilibrium) than any macrostate not in equilibrium. Though the movements of the gas molecules are random, the sheer number of possible equilibrium microstates makes it highly probable that the system will evolve to another equilibrium microstate rather than into a non-equilibrium microstate, of which there are far, far fewer.

With information theory there are other considerations, but these are beyond the Physics 101 level. So if you're just starting out and are confused by entropy (like I was) and just want a simple way to think about what it describes, then the above is hopefully helpful.
In my mind, to unravel this question, we need to consider the simplest scenario possible. So, let us disregard quantum systems and small systems and consider only statistical mechanics of big classical systems.
Entropy is defined as the log of the number of microstates at a given energy $$E$$ (up to the Boltzmann constant). Namely, for a classical Hamiltonian, we have to find all the configurations that correspond to the same desired energy value $$E$$. This implies that the only information you have about the system is its energy and the Hamiltonian governing its equilibrium properties. Now, suppose that you have $$\Omega$$ such allowed microstates. Introducing a measurement, or gaining information about the system, forces you to impose more constraints on your microstates. This can only reduce the number of allowed microstates or leave it unchanged.
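As a toy illustration of that last point (a sketch of my own, not from the original answer): for a system of coins, learning the value of one coin is an extra constraint that halves the number of allowed microstates.

```python
from itertools import product

N = 10
microstates = list(product([0, 1], repeat=N))  # all 2**N configurations
print(len(microstates))                        # 1024

# "Measure" coin 0 and learn it shows 1: one more constraint on the microstates
constrained = [s for s in microstates if s[0] == 1]
print(len(constrained))                        # 512, so the entropy drops by kB*log(2)
```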
I hope it helps.
In unobserved condition, the system has maximum entropy but when an observer is introduced, the entropy will suddenly drop to minimum because of the definite perception of the observer. Then it will gradually increase on subjection to public. :)
• What do you mean by "subjection to public"??? Jun 21, 2017 at 11:20 |
# Draw animating grid with terrain deformation and variable colors
I'm trying to draw a grid around the cursor that represents buildable area.
This cursor grid aligns to the global grid. All building placements must be on the grid, and so this visual cue must aid in understanding that.
I also will have deforming terrain.
here's what I got so far, this is a grid drawing method that uses GL.Lines :
(By the way, I'm using the Cartesian coordinate system, so that's why I iterated over x and y instead of x and z. Don't let that throw you off; y is not the 3D height in the following.)
private void drawSize50Grid(int entryX, int entryY)
{
if (!mat)
{
Debug.LogError("Please Assign a material on the inspector");
return;
}
int amountOfGridToDraw = 50;
mat.SetPass(0); // SetPass must be called before GL.Begin for the material to apply
GL.Begin(GL.LINES);
float currentX = entryX;
float currentY = entryY;
for (int x = 0; x < amountOfGridToDraw; x++)
{
for (int y = 0; y < amountOfGridToDraw; y++)
{
//divisionFactor is a float which makes the grid the size I want (smaller than the Unity visual cue you get when you have nothing in a scene)
float nextX = divisionFactor/100f + currentX;
float nextY = divisionFactor/100f + currentY;
// fixed height of 0.0001f, this is temporary, I'll pass height when I get there.
var point = new Vector3(currentX, 0.0001f, currentY);
var drawPoint2 = new Vector3(nextX, 0.0001f, currentY);
var drawPoint3 = new Vector3(currentX, 0.0001f, nextY);
GL.Color(Color.red);
GL.Vertex(point);
GL.Vertex(drawPoint2);
if (currentY != amountOfGridToDraw)
{
GL.Vertex(point);
GL.Vertex(drawPoint3);
currentY += divisionFactor/100f;
}
}
currentY = 0f;
currentX += divisionFactor/100f;
}
GL.End();
}
here's my point conversion methods :
public Vector3 GetNearestPointOn3DGrid(Vector3 position)
{
position -= transform.position;
Vector3 result = new Vector3(
// width
magicFormula(position.x),
// height: passed through unsnapped (assumed; Vector3 needs three components and this argument was missing in the snippet)
position.y,
// length
magicFormula(position.z));
result += transform.position;
return result;
}
private float magicFormula (float entryPoint)
{
float decimalPart = entryPoint % 1;
float integerPart = entryPoint - decimalPart;
// snap the fractional part to the nearest multiple of 1/divisionFactor
return Mathf.RoundToInt(decimalPart / (1 / divisionFactor)) * (1 / divisionFactor) + integerPart;
}
If I call my grid in this way :
void OnRenderObject()
{
drawSize50Grid(100, 100); // the method takes ints, so no float literals here
}
Then, when I run the game, my grid shows up starting at point (100, 0.0001, 100) (left-handed coordinate system).
But OnRenderObject() is called only once.
This lifecycle will not allow me to have a grid that follows the cursor.
Also, it seems that this is a GL.Lines hard limit: they cannot be redrawn at runtime.
I realize drawing and redrawing tons of segments as the mouse zooms around is bound to be taxing on the CPU & GPU, but I've tempered my expectations:
// I guess I would call drawGridAroundCursor() inside Update()?
// right now doing this does nothing because GL.Lines refuse to draw after OnRenderObject()
public void drawGridAroundCursor()
{
if (camera.GetCurrentDistanceToGround() < 60f && isShiftCurrentlyHeldDown)
{
}
}
• I'm only doing a 50x50 grid (that's 4950 lines) (actually I'm willing to go down further than that; 15x15 might cut it)
• I'm only rendering the grid if the camera is close to the ground (where it shows up)
• and only showing it when holding down shift.
But if there's even more things I can do to optimize I'm open to ideas, so long as it doesn't involve sacrificing correct placement of the lines.
here's what my above code WOULD look like if it rendered at real time : (camera viewing straight down at the terrain, also not at all to scale, the cursor is GIGANTIC in this mock-up)
And in the end here's what I'm going to do (draw from -25, -25 relative to the grid-adjusted cursorRayHit and choose a more and more transparent color the further the points are from the center of the grid):
So obviously when the cursor moves, the faded-out borders follow it around, but the grid points will stay in place (and go transparent when the cursor is too far away).
This effect was inspired by this :
And after I get to that step I want to add height variation to match the terrain and then add color input per cell of the grid depending on whether or not the terrain is constructible or not.
So the final solution would look something like this :
• Can you show us an image of what you want this to look like? We might be able to achieve that appearance, with correct snapping, without an object per line. – DMGregory May 14 at 16:09
• Can you record a gif of the Windows 10 effect you want to imitate, and embed it in your question? – DMGregory May 14 at 16:14
• @DMGregory I added a gif now. does that help? – tatsu May 14 at 17:32
When this question was just about drawing a plain grid of lines, I said...
I'd be tempted to draw this with a shader on a single quad, using the world coordinates of the vertices to snap the lines to your desired intervals, no matter if the parent quad is moving in sub-grid increments.
Here's a quick shader that does this, with parameters to control the exact density of the grid lines, their offset relative to the world origin, their thickness and colour. You can control the size of the fade-out just by scaling the quad that you draw it on.
Shader "Unlit/WorldGrid"
{
Properties
{
_ScaleShift ("Lines per Unit & Shift", Vector) = (10, 10, 0, 0)
_Color ("Colour", Color) = (1, 0, 0, 1)
_Thickness ("Thickness", float) = 0.5
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue"="Transparent" "IgnoreProjector"="True" }
LOD 100
Lighting Off
Blend SrcAlpha OneMinusSrcAlpha
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float2 worldPos : TEXCOORD1;
float4 vertex : SV_POSITION;
};
float4 _ScaleShift;
fixed4 _Color;
float _Thickness;
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
o.uv = 2.0f * (v.uv - 0.5f);
o.worldPos = worldPos.xz * _ScaleShift.xy + _ScaleShift.zw;
return o;
}
fixed4 frag (v2f i) : SV_Target
{
float2 distanceFromLine = abs(frac(i.worldPos + 0.5f) - 0.5f);
float2 speedPerPixel = fwidth(i.worldPos);
float2 pixelsFromLine = abs(distanceFromLine / speedPerPixel);
float opacity = 1.0f - saturate(min(pixelsFromLine.x, pixelsFromLine.y) - 0.5f * (_Thickness - 0.5f));
float radiusSquared = dot(i.uv, i.uv); // squared distance from the quad centre (uv spans -1..1); this line appears to have been dropped, reconstructed from o.uv above
float falloff = max(1.0f - radiusSquared, 0.0f);
fixed4 col = _Color;
col.a *= opacity * falloff * falloff;
return col;
}
ENDCG
}
}
}
But now that I have your full requirements:
• Deforms to fit terrain
• Colour-coded cells based on buildable/non-buildable sites
...well, it's doable:
But we need something more complicated:
So let's unpack that, starting with a bit of prep work:
• In your camera's settings, set "Depth Texture" to "On" - that will force the camera to render the scene's depth, which we'll need for the terrain-hugging decal effect.
• Create a cube that we'll use for drawing the effect. Make it as wide in each direction as the amount of grid you want to show, and tall enough to span changes in terrain height.
• Create a new Unlit shader graph, and a material to put it on.
• Set your Unlit Master node to use Additive blending if you want glowy lines like in the gif above.
• Set up shader parameters for...
• Grid Density (Vector 2) - how many grid cells per world unit (here I used 5, 5)
• Grid Offset (Vector 2) - how much to shift the grid relative to Unity's grid (I used -0.5, -0.5)
• Thickness (Vector 1) - how thick to draw the lines (0 for hairline, 1 for a bit bolder)
• Buildable Mask (Texture 2D) - a texture as large as your building grid, containing 0/black for obstructed cells, and 1/white for open cells. You can update this in your scripts with SetPixel/SetPixels or blit quads into it as a RenderTexture to keep it up to date as your buildings change.
• Buildable Colour / Blocked Colour (Colour) - the colours to draw the lines.
Now let's attack the graph in pieces:
Finding our place in the world
We start by taking a ray from the camera, through our screen pixel, and extending it as far as the stored depth in the camera's depth texture. That tells us the world-space coordinates of the terrain surface behind this part of the cube.
Then we take the xz components of this world position, and scale and shift them by our grid density/offset to convert into our building coordinate space.
The top section does the same math as the fragment shader from the text version above, taking the building-space point, working out how far this pixel is from the nearest grid edge, and turning that into a brightness/opacity value for our line according to our thickness parameter.
The bottom section takes the worldspace point (before conversion to building space) and transforms it by the inverse of our model matrix to get it into the cube's local coordinates. Then we fade it out radially so you don't see the edges of the cube.
We multiply these two effects to get the resulting brightness/opacity of the line.
A place to stand, a place to grow...
The top section takes our building-space position and rounds it to an integer, so all pixels in the square sample the same building mask tile. (Technically point filtering should do this too, but I have sometimes found discrepancies between the hardware filter's rounding and what I get in the shader, so I like to err on the side of caution.)
We then divide this by the size of our building mask texture to get a texture coordinate inside the building mask, and sample from that to find out whether we should draw this tile as green/buildable, or red/blocked.
If we just left it there though, we'd get these ugly/speckly/half-and-half lines where a buildable cell borders a blocked cell. To fix that, the group on the bottom figures out which edge we're closest to, then steps across that edge to sample the neighbouring cell on that side. We'll draw this edge red if either this cell or our closest neighbour is non-buildable.
Note that the lines headed to that Vector2 node on the bottom cross-over. This means we're doing step(pixelsFromEdge.xy, pixelsFromEdge.yx) - so we get (1, 0) if we're closest to an edge between cells with an x separation, and (0, 1) if we're closest to an edge between cells with a y (z) separation.
By default, this will want to draw over all opaque content in your scene, but you can use Custom Render Queue setting to wedge it between your terrain and your buildings. I found setting the grid to render in queue 2501 (AlphaTest+51), and my buildings to render in 2502+ did the trick. If you're using a built-in shader, you may need to create your own material that references it, then put the Inspector in Debug mode to see the Custom Render Queue field:
• omg! you're nuts! thank you so much! I was in the process of trying to at least do the color part but I was having trouble figuring out the right format of the object or array to pass. big problems. You're an absolute hero! OMG!! – tatsu May 17 at 20:31 |
# Finding limit with approximation
I'm trying to find the limit of the following, where $$m$$ is a constant
$$\lim_{n \rightarrow \infty}\frac{(n-1)!}{2}\bigg(\frac{m}{n}\bigg)^{n}.$$
$$(n-1)! \approx \sqrt{2\pi (n-1)}\bigg(\frac{n-1}{e}\bigg)^{(n-1)}$$
So I obtain
$$\frac{\sqrt{2\pi}}{2}\lim_{n \rightarrow \infty} (n-1)^{n-\frac12}\,e^{1-n}\bigg(\frac{m}{n}\bigg)^{n}$$
to which I can't see any simplification. Please suggest some hints. Thanks
• seems like if $-1<m<1$ the sequence converges Mar 13, 2019 at 15:37
$$\lim_{n\to \infty}\dfrac{n!}{2n}\left(\dfrac{m}{n}\right)^n=\lim_{n\to \infty}\dfrac{\sqrt{2\pi n}}{2n}\left(\dfrac{n}{e}\right)^n\left(\dfrac{m}{n}\right)^n=\lim_{n\to \infty}\dfrac{\sqrt{2\pi n}}{2n}\left(\dfrac{m}{e}\right)^n$$
• I obtain $\sqrt{2\pi}\lim_{n \rightarrow \infty} \frac{1}{n^{0.5}}\bigg(\frac{m}{e}\bigg)^{n}$, which will equal 0 for $|m|<1$? This is not the result I was expecting. Mar 13, 2019 at 15:44
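One way to finish from here: writing $$\frac{\sqrt{2\pi n}}{2n} = \sqrt{\frac{\pi}{2n}},$$ the prefactor tends to $0$ while $(m/e)^n$ is geometric, so $$\lim_{n\to\infty}\sqrt{\frac{\pi}{2n}}\left(\frac{m}{e}\right)^n = \begin{cases} 0, & |m|\le e,\\ +\infty, & m>e, \end{cases}$$ and for $m<-e$ the terms oscillate with unbounded magnitude, so no limit exists. In particular the threshold is $e$, not $1$.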
# How to prove $\lnot(p\to q)\vdash p \land\lnot q$
This is the first time I post anything on the forum. I started with Tomassi's Logic and unfortunately I have been unable to solve some of its problems. One I get immediately stuck with is this one:
$$\lnot(p\to q)\vdash p \land\lnot q$$
I have to solve it by natural deduction and the only rules I know are: assumptions, modus ponendo ponens, modus tollendo tollens, double negation, reductio ad absurdum, conditional proof, v-introduction, v-elimination, &-introduction, and &-elimination. Tomassi's proof consists of 12 steps.
Moreover, I don't see how to proceed because of the negations on the outside of the parentheses.
Thanks for the help!
• Are you able to prove $p\to q ⊢ \neg p \lor q$? – Javi Aug 2 '18 at 21:18
• Already asked and answered in PhilSE – Mauro ALLEGRANZA Aug 3 '18 at 6:03
I used http://proofs.openlogicproject.org/ to construct a more or less Fitch-style natural deduction proof. Overview of the idea that this proof is just formalizing:
Suppose we have $\lnot (P \to Q)$. Then we first prove $P$. Suppose, to the contrary, that we have $\lnot P$; then assuming $P$, we have a contradiction, which also allows us to conclude $Q$ via ex falso quodlibet. Thus, if $P$ were false, then $P \to Q$ would be true, giving a contradiction with the assumption $\lnot (P \to Q)$. Therefore, $P$ must be true. Likewise, we also need to prove $\lnot Q$. So, suppose $Q$ were true; then we would automatically have $P \to Q$, again contradicting the assumption $\lnot (P \to Q)$. Finally, since we have proved $P$ and we have also proved $\lnot Q$, we conclude $P \wedge \lnot Q$.
You wish to prove the negation of a conditional entails a conjunction. Your strategy should therefore be to prove the conditional is entailed by the negation of each of the conjuncts.
That requires two negation introductions with their subproofs containing a conditional introduction, then double negation elimination where needed, and conjunction introduction.
$\def\fitch#1#2{~~\begin{array}{|l} #1\\\hline #2\end{array}} \fitch{\neg(p\to q)}{\fitch{q}{\fitch{p}{\vdots\\q}\\p\to q\\\bot}\\\neg q\\\fitch{\neg p}{\fitch{p}{\vdots\\q}\\p\to q\\\bot}\\\neg\neg p\\p\\ p\wedge\neg q}$
I'll leave to you, how to introduce that conditional under each assumption.
You could also try showing that the conditional is entailed by negating the conjunction. Show that if $\neg(p\wedge\neg q)$ and $p$ then $q$.
$$\fitch{\neg (p\to q)}{\fitch{\neg(p\wedge \neg q)}{\fitch{p}{\vdots\\q}\\p\to q\\\bot}\\\neg\neg(p\wedge\neg q)\\p\wedge\neg q}$$
• Many thanks for your answer. However I cannot get those conditionals yet. Could you please clarify me more on this? – Diego Ruiz Haro Aug 2 '18 at 22:31
• Upvoted because I believe your answer with the outline and hints should be seen before mine with the full solution. – Daniel Schepler Aug 2 '18 at 22:45
• @DiegoRuizHaro You cannot see why $q$ would hold under the assumption of $q$ and $p$? Or under the assumption of $\neg p$ and $p$? – Graham Kemp Aug 2 '18 at 23:09
• Ah... Do you have access to Ex Falso Quodlibet ? – Graham Kemp Aug 3 '18 at 0:52
• @GrahamKemp I have no access to Ex Falso Quodlibet. I only know: · assumptions, · modus ponendo ponens, · modus tollendo tollens, · double negation, · reductio ad absurdum, · conditional proof, · v-introduction, v-elimination, · &-introduction, and &-elimination – Diego Ruiz Haro Aug 3 '18 at 5:46 |
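Without Ex Falso Quodlibet, the gap in those subproofs (getting $q$ from $p$ and $\neg p$) can be closed with RAA and DN alone. Here is a sketch in Lemmon-style lines (my own reconstruction, not Tomassi's printed 12-step proof):

$$\begin{array}{lll} 1 & (1)\ p & \text{A}\\ 2 & (2)\ \neg p & \text{A}\\ 3 & (3)\ \neg q & \text{A (for RAA)}\\ 1,2 & (4)\ p \wedge \neg p & 1,2\ \wedge\text{I}\\ 1,2 & (5)\ \neg\neg q & 3,4\ \text{RAA}\\ 1,2 & (6)\ q & 5\ \text{DN} \end{array}$$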
## Trace Between Generated CUDA Code and MATLAB Source Code
This example shows how to trace (highlight sections) between MATLAB® source code and the generated CUDA® code. Tracing between source code and generated code helps you to:
• Understand how the code generator maps your algorithm to GPU kernels.
• Debug issues in the generated code.
• Evaluate the quality of the generated code.
You can trace by using one of these methods:
• Configure GPU Coder™ to generate code that includes the MATLAB source code as comments. In the comments, a traceability tag immediately precedes each line of source code. The traceability tag provides details about the location of the source code. If you have Embedded Coder®, in the code generation report, the traceability tags link to the corresponding MATLAB source code.
• With Embedded Coder, produce a code generation report that includes interactive traceability. Interactive tracing in the report helps you to visualize the mapping between the MATLAB source code and the generated C/C++ code. See Interactively Trace Between MATLAB Code and Generated C/C++ Code (Embedded Coder).
### Generate Traceability Tags
#### Create the MATLAB Source Code
To illustrate traceability tags, this example uses an implementation of the Mandelbrot set by using standard MATLAB commands running on the CPU. This implementation is based on the code provided in the Experiments with MATLAB e-book by Cleve Moler.
The Mandelbrot set is the region in the complex plane consisting of the values z0 for which the trajectories defined by this equation remain bounded at k→∞.
$z_{k+1} = z_k^2 + z_0, \qquad k = 0, 1, \dots$
Create a MATLAB function called `mandelbrot_count.m` with the following lines of code. This code is a vectorized MATLAB implementation of the Mandelbrot set. For every point `(xGrid,yGrid)` in the grid, it calculates the iteration index `count` at which the trajectory defined by the equation reaches a distance of `2` from the origin. It then returns the natural logarithm of `count`, which is used to generate the color-coded plot of the Mandelbrot set.
```
function count = mandelbrot_count(maxIterations,xGrid,yGrid)
% Add kernelfun pragma to trigger kernel creation
coder.gpu.kernelfun;

% mandelbrot computation
z0 = xGrid + 1i*yGrid;
count = ones(size(z0));
z = z0;
for n = 0:maxIterations
    z = z.*z + z0;
    inside = abs(z)<=2;
    count = count + inside;
end
count = log(count);
```
#### Create Test Vectors
Create test vectors for the entry-point function by using the following lines of code. The script generates a 1000 x 1000 grid of real parts (x) and imaginary parts (y) between the limits specified by `xlim` and `ylim`. You can use these inputs to validate the `mandelbrot_count` entry-point function and plot the resulting Mandelbrot set.
```
maxIterations = 500;
gridSize = 1000;
xlim = [-0.748766713922161,-0.748766707771757];
ylim = [0.123640844894862,0.123640851045266];
x = linspace(xlim(1),xlim(2),gridSize);
y = linspace(ylim(1),ylim(2),gridSize);
[xGrid,yGrid] = meshgrid(x,y);
```
#### Generate Traceability Tags
To produce traceability tags in the generated code, enable generation of MATLAB source code as comments.
• In the GPU Coder app, set MATLAB source code as comments to `Yes`.
• In a code generation configuration object, create a `coder.gpuConfig` object and set the `MATLABSourceComments` property to `true`.
```
cfg = coder.gpuConfig('dll','ecoder',true);
cfg.GenerateReport = true;
cfg.MATLABSourceComments = true;
cfg.GpuConfig.CompilerFlags = '--fmad=false';
codegen -config cfg -args {maxIterations,xGrid,yGrid} mandelbrot_count
```
Note
The `--fmad=false` flag, when passed to `nvcc`, instructs the compiler to disable Floating-Point Multiply-Add (FMAD) optimization. This option is set to prevent numerical mismatch in the generated code because of architectural differences between the CPU and the GPU. For more information, see Numerical Differences Between CPU and GPU.
#### Access the Report
To open the code generation report, click View report.
The code generation report is named `report.mldatx`. It is located in the `html` subfolder of the code generation output folder. If you have MATLAB R2018a or later, you can open the `report.mldatx` file by double-clicking it.
In the MATLAB Source pane, select `mandelbrot_count.m`. You see the MATLAB source code in the code pane.
The green GPU marker next to `mandelbrot_count` function indicates that the generated code has both CPU and GPU sections. The green vertical bar indicates the lines of code that are mapped to the GPU. To see information about the type of a variable or expression and the name of the corresponding GPU Kernel Function, pause over the variable or expression. When you select highlighted code by clicking it, the code becomes blue and you can see the information even when you move your pointer away from the selection. The code remains selected until you press `Esc` or select different code.
To view the CUDA code generated for the `mandelbrot_count.m` entry-point function, select `mandelbrot_count.cu` from the Generated Code pane.
### Format of Traceability Tags
In the generated code, traceability tags appear immediately before the MATLAB source code in the comment. The format of the tag is:
`<filename>:<line number>`.
For example, this comment indicates that the code ```z0 = xGrid + 1i*yGrid;``` appears at line `5` in the source file `mandelbrot_count.m`.
```/* 'mandelbrot_count:5' z0 = xGrid + 1i*yGrid; ```
### Traceability Tag Limitations
• You cannot include MATLAB source code as comments for:
• MathWorks® toolbox functions
• P-code
• The appearance or location of comments can vary:
• Even if the implementation code is eliminated, for example, due to constant folding, comments can still appear in the generated code.
• If a complete function or code block is eliminated, comments can be eliminated from the generated code.
• For certain optimizations, the comments can be separated from the generated code.
• Even if you do not choose to include source code comments in the generated code, the generated code includes legally required comments from the MATLAB source code.
• Functions with multiple outputs do not get highlighted.
• Calls to `coder` functions such as `coder.nullcopy` will not be highlighted.
• Code that gets mapped to library calls such as cuDNN, cuBLAS and cuFFT will not be highlighted. As a result, functions that are completely mapped to GPU may be tagged incorrectly. |
## Algebra 2 (1st Edition)
$135.6^{\circ}$
Let us consider $\theta =\sin^{-1} (0.7)$ In order to get the answer in degrees, we need to put the calculator in degrees mode. Then, our result will be: $\theta \approx 44.4^{\circ}$ But, the angle $\theta$ does not belong to the interval, so we will compute the reference angle that lies in the given interval. Then we have: $\theta = 180^{\circ}- 44.4^{\circ}=135.6^{\circ}$ |
## Geometry: Common Core (15th Edition)
$\text{F. }\frac{1}{6}$
This question is asking us to find the probability of drawing a red marble. The probability of drawing red is equal to the number of red marbles in the bag divided by the total number of marbles in the bag. This gives us: $$P(\text{red})=\frac{2}{2+4+6}=\frac{2}{12}=\frac{1}{6}$$This gives us choice $\text{F. }\frac{1}{6}$ |
# Physics Class 12 NCERT Solutions: Chapter 11 Dual Nature of Radiation and Matter Part 15 (For CBSE, ICSE, IAS, NET, NRA 2022)
Q: 32. (A) Obtain the de Broglie wavelength of a neutron of kinetic energy . As you have seen in Exercise 31 (Reference: Chapter 11 Dual Nature of Radiation and Matter Part 14), an electron beam of this energy is suitable for crystal diffraction experiments. Would a neutron beam of the same energy be equally suitable? Explain.
(B) Obtain the de Broglie wavelength associated with thermal neutrons at room temperature . Hence explain why a fast neutron beam needs to be thermalised with the environment before it can be used for neutron diffraction experiments.
(A) De Broglie wavelength; the neutron is not suitable for the diffraction experiment.
Kinetic energy of the neutron,
Mass of a neutron,
The kinetic energy of the neutron is given by the relation
$$E = \frac{1}{2}mv^2 = \frac{p^2}{2m},$$
where $v$ is the velocity of the neutron and $p = mv$ is the momentum of the neutron.
The de Broglie wavelength of the neutron is given as
$$\lambda = \frac{h}{p} = \frac{h}{\sqrt{2mE}}.$$
It is clear that wavelength is inversely proportional to the square root of mass.
Hence, wavelength decreases with increase in mass and vice versa.
It is given in the previous problem that the inter-atomic spacing of a crystal is about a hundred times greater than this wavelength. Hence, a neutron beam of this energy is not suitable for diffraction experiments.
(B) De Broglie wavelength
Room temperature,
The average kinetic energy of the neutron is given as
$$E = \frac{3}{2}kT,$$
where $k$ is the Boltzmann constant and $T$ is the absolute temperature.
The wavelength of the neutron is given as
$$\lambda = \frac{h}{\sqrt{2mE}} = \frac{h}{\sqrt{3mkT}}.$$
This wavelength is comparable to the inter-atomic spacing of a crystal. Hence, the high-energy neutron beam should first be thermalised before using it for diffraction.
Q: 33. An electron microscope uses electrons accelerated by a voltage of . Determine the de Broglie wavelength associated with the electrons. If other factors (such as numerical aperture, etc.) are taken to be roughly the same, how does the resolving power of an electron microscope compare with that of an optical microscope which uses yellow light?
Electrons are accelerated by a voltage, $V$
Charge on an electron, $e$
Mass of an electron, $m_e$
Wavelength of yellow light, $\lambda_y$
The kinetic energy of the electron is given as
$$E = eV.$$
The de Broglie wavelength is given by the relation
$$\lambda = \frac{h}{\sqrt{2m_e E}} = \frac{h}{\sqrt{2m_e eV}}.$$
This wavelength is several orders of magnitude smaller than the wavelength of yellow light.
The resolving power of a microscope is inversely proportional to the wavelength of light used. Thus, the resolving power of an electron microscope is greater than that of an optical microscope by the same factor.
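As a rough numerical cross-check of these relations (a sketch of my own: the accelerating voltage below is an assumed, illustrative value, since the number is missing above, and the constants are rounded):

```python
import math

h = 6.626e-34   # Planck constant, J s
me = 9.11e-31   # electron mass, kg
e = 1.602e-19   # elementary charge, C

def de_broglie(mass_kg, kinetic_energy_J):
    """lambda = h / sqrt(2 m E) for a non-relativistic particle."""
    return h / math.sqrt(2 * mass_kg * kinetic_energy_J)

V = 50e3                 # assumed accelerating voltage in volts, for illustration only
lam = de_broglie(me, e * V)
print(lam)               # ~5.5e-12 m
print(5.9e-7 / lam)      # ~1e5, the ratio to a typical yellow-light wavelength
```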
Question: Consider two processes A and B on a system as shown in the figure. Let $\Delta W_1$ and $\Delta W_2$ be the work done by the system in processes A and B respectively. Then:
A) $\Delta W_1 > \Delta W_2$
B) $\Delta W_1 < \Delta W_2$
C) $\Delta W_1 = \Delta W_2$
D) Nothing can be said about the relation between $\Delta W_1$ and $\Delta W_2$
Solution: $\Delta W = P\,\Delta V$. Since $\Delta V$ is the same for both processes, $\Delta W \propto P$, and therefore $\Delta W_2 > \Delta W_1$.
# equality between partitions using generating series
Use a generating series to prove that the number of unordered compositions or partitions of $$n$$ in which only the odd parts can be repeated is the number of partitions of $$n$$ where no part can be repeated more than $$3$$ times.
The number of compositions of $$n$$ is $$2^{n-1}$$, which can be found by listing $$n$$ $$1$$'s and placing a $$+$$ or $$,$$ between them; there is a bijection between the set of such arrangements and the set of compositions of $$n$$. For the case where $$n=3,$$ the partitions where only odd parts can be repeated are $$(1,1,1),(2,1),(3)$$, which are also all the partitions where no part can be repeated more than $$3$$ times. For $$n=5,$$ the partitions where only odd parts can be repeated are $$(1,1,1,1,1),(2,1,1,1),(2,3),(3,1,1),(4,1),(5)$$ and the partitions where no part can be repeated more than $$3$$ times are $$(2,1,1,1),(2,3),(3,1,1),(4,1),(5),(2,2,1).$$ I can't seem to find a pattern for this other than the obvious fact that the intersection of $$A_n := \{\text{set of partitions of n where only the odd parts can be repeated}\}$$ and $$B_n := \{\text{set of partitions of n where no part can be repeated more than 3 times}\}$$ is $$C_n := \{\text{set of partitions where only the odd parts can be repeated, but no more than 3 times}\}$$. Also I'm not sure if a recurrence relation will be useful to determine the number here.
The generating function for partitions into even parts only occurring once and odd parts as many as you like is $$\begin{eqnarray*} \prod_{i=1}^{\infty} \frac{1+x^{2i}}{1-x^{2i-1}}. \end{eqnarray*}$$ The generating function for partitions where parts occur three times at most is $$\begin{eqnarray*} \prod_{i=1}^{\infty} (1+x^i+x^{2i}+x^{3i}) =\prod_{i=1}^{\infty} (1+x^i)(1+x^{2i}). \end{eqnarray*}$$ To see the equality of these, note that $$\begin{eqnarray*} \frac{1}{1-x^{2i-1}} = \prod_{j=0}^{\infty} (1+x^{(2i-1)2^{j}}) \end{eqnarray*}$$ and every number can be uniquely expressed as $$(2i-1)2^{j}$$.
Hint to show the last equality ... $$\begin{eqnarray*} \frac{1}{1-y} = 1+y+y^2+y^3+ \cdots=(1+y)(1+y^2)(1+y^4)\cdots \end{eqnarray*}$$ or use induction. |
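One can also check the claimed equality numerically by truncating both products at a finite degree (a quick sketch, not part of the proof):

```python
def poly_mul(a, b, nmax):
    """Multiply two coefficient lists, truncating at degree nmax."""
    out = [0] * (nmax + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > nmax:
                    break
                out[i + j] += ai * bj
    return out

NMAX = 30
A = [1] + [0] * NMAX  # odd parts unrestricted, even parts at most once
for i in range(1, NMAX + 1):
    if i % 2 == 1:
        factor = [1 if k % i == 0 else 0 for k in range(NMAX + 1)]   # 1/(1-x^i)
    else:
        factor = [1 if k in (0, i) else 0 for k in range(NMAX + 1)]  # 1+x^i
    A = poly_mul(A, factor, NMAX)

B = [1] + [0] * NMAX  # no part repeated more than 3 times
for i in range(1, NMAX + 1):
    factor = [1 if k in (0, i, 2 * i, 3 * i) else 0 for k in range(NMAX + 1)]
    B = poly_mul(B, factor, NMAX)

print(A == B)  # True: the coefficients agree for all n up to NMAX
```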
# Why should we use pseudopotentials in numerical simulations (such as DFT)?
I've noticed that a lot of computational techniques in physics deal with pseudopotentials in their calculation procedure and especially density functional theory (DFT) projects (open sources such as Quantum Espresso, Abinit, Octopus, etc.) require them with care.
The main algorithm of first-principles calculations consists of diagonalizing the (Hermitian, hence diagonalizable) Hamiltonian matrix. Since the length scale of core orbitals in atoms with large atomic number, Z, is very small, we need large momenta $|\vec{k}|$ to describe the behavior of those orbitals. Thus we end up with a giant plane-wave (or sine-wave) basis set, and hence a giant Hamiltonian matrix. Consider transition metals: they have both conduction electrons and very deep core electrons, spanning a wide range of $|\vec{k}|$. It is computationally unfavorable to diagonalize a Hamiltonian with millions of rows and columns.
I think pseudopotentials can remedy this tragedy by expressing the influence of the core electrons implicitly. Then we need not use the giant basis set, and physicists become happy.
However, if computational cost is the real issue, why should we use pseudopotentials for low-Z atoms? I don't know what is going on in closed-source projects (i.e. VASP?), but as far as Abinit and Quantum ESPRESSO are concerned, DFT calculations always need pseudopotentials regardless of the kind of atom. Can't we run moderate-cost computations for light atoms even without pseudopotentials?
Furthermore, the quality of the computational results depends hugely on the kind of pseudopotentials we use (and on the exchange-correlation functionals...). A lot of scientists have worked to develop more feasible exchange-correlation functionals, from mere LDA and GGA to hybrid functionals. Surely they have also developed more plausible pseudopotentials, but the worry about a poorly performing pseudopotential could be completely removed if we just used the giant basis set I mentioned above. This makes computational costs skyrocket, but why don't we use so-called supercomputers? Any deficiency of the pseudopotentials would be erased, and hopefully many unsolved theoretical problems (such as the special properties of transition metal oxides) could be resolved. Is the size of the required basis set too large to be dealt with even by world-level supercomputers?
# Recently Installed Protext, but MiKTeX209-jpeg.dll is missing
This is what I have done:
2) Installed MiKTeX on C:\Program Files\MiKTeX 2.9
2.1) Executed miktex-update_admin in C:\Program Files\MiKTeX 2.9\miktex\bin\x64\internal
3) Downloaded the latest version of TeXstudio and installed it on C:\Program Files\TeXstudio
And when I want to run it, it shows that message.
What I have tried:
Installing the same version that is contained in ProTeXt, installing it in Program Files (x86), and googling the problem without finding anything that helped me (or at least that I understood as help). I have already checked MiKTeX209-jpeg.dll is missing but I don't see how that thread could be of help to me (maybe I am not understanding something). I have also installed and re-installed many times.
• 1) Don't mix `Program Files` and `Program Files(x86)` installations if you don't want to have a mess. 2) This file is part of `miktex-jpeg-bin-2.9` for the 32 bits version, or `miktex-jpeg-bin-x64-2.9` for the 64 bits version. Try uninstall and reinstall it with MiKTeX Package Manager. – Bernard Feb 2 '17 at 21:07 |
# Homework Help: More integral issues
1. Nov 27, 2005
### kenny87
Here's the question:
If
$$\int_a^b f(x)\,dx = a + 2b, \qquad \text{then} \qquad \int_a^b (f(x) + 5)\,dx = \ ?$$
I'm thinking myself into circles... I want to say I need to take the derivative of a+2b to then find out what equals f(x) and then just take the integral of that +5... but it's just not working out.
2. Nov 27, 2005
### Tide
If I am reading correctly what you wrote then you are merely adding 5(b-a) to the original integral.
3. Nov 28, 2005
### HallsofIvy
$$\int_a^b (f(x)+ 5)dx= \int_a^b f(x)dx+ 5\int_a^b dx$$
4. Nov 28, 2005
### kenny87
So would it be just a+2b+5?
5. Nov 28, 2005
### Physics Monkey
Nope, what does $$\int^b_a dx$$ equal?
6. Nov 28, 2005
### kenny87
Ok, so then I just do 5(b-a)?
7. Nov 28, 2005
### TD
You have to add that, yes
8. Nov 28, 2005
### kenny87
Yeah, that's what I meant to say.
So in this problem:
If f(x)=g(x)+7 from 3 to 5, then the integral from 3 to 5 of [f(x)+g(x)]dx is?
Can I just use the same method and get
$$2\int_3^5 g(x)\,dx + 7$$
9. Nov 28, 2005
### TD
Almost, don't forget that the 7 was in the integrand!
$$\int\limits_3^5 {f\left( x \right) + g\left( x \right)dx} = \int\limits_3^5 {g\left( x \right) + 7 + g\left( x \right)dx} = 2\int\limits_3^5 {g\left( x \right)dx} + 7\int\limits_3^5 {dx}$$
10. Nov 28, 2005
### kenny87
how do i figure dx in this case? do i use g(x) or f(x)?
11. Nov 28, 2005
### TD
You either use f(x) and substitute g(x) by f(x)-7 or you use g(x), and substitute f(x) by g(x)+7. |
# Chapter 27. Dispatching Commands
Dispatching commands (commands with Dispatch in the name) provoke work in a compute pipeline. Dispatching commands are recorded into a command buffer and when executed by a queue, will produce work which executes according to the currently bound compute pipeline. A compute pipeline must be bound to a command buffer before any dispatch commands are recorded in that command buffer.
To record a dispatch, call:
void vkCmdDispatch(
VkCommandBuffer commandBuffer,
uint32_t x,
uint32_t y,
uint32_t z);
• commandBuffer is the command buffer into which the command will be recorded.
• x is the number of local workgroups to dispatch in the X dimension.
• y is the number of local workgroups to dispatch in the Y dimension.
• z is the number of local workgroups to dispatch in the Z dimension.
When the command is executed, a global workgroup consisting of x × y × z local workgroups is assembled.
To record an indirect command dispatch, call:
void vkCmdDispatchIndirect(
VkCommandBuffer commandBuffer,
VkBuffer buffer,
VkDeviceSize offset);
• commandBuffer is the command buffer into which the command will be recorded.
• buffer is the buffer containing dispatch parameters.
• offset is the byte offset into buffer where parameters begin.
vkCmdDispatchIndirect behaves similarly to vkCmdDispatch except that the parameters are read by the device from a buffer during execution. The parameters of the dispatch are encoded in a VkDispatchIndirectCommand structure taken from buffer starting at offset.
The VkDispatchIndirectCommand structure is defined as:
typedef struct VkDispatchIndirectCommand {
uint32_t x;
uint32_t y;
uint32_t z;
} VkDispatchIndirectCommand;
• x is the number of local workgroups to dispatch in the X dimension.
• y is the number of local workgroups to dispatch in the Y dimension.
• z is the number of local workgroups to dispatch in the Z dimension.
The members of VkDispatchIndirectCommand structure have the same meaning as the similarly named parameters of vkCmdDispatch. |
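An informative aside, not part of the specification: the structure is three consecutive 32-bit unsigned integers, so the twelve bytes an application writes into buffer at offset can be sketched on a little-endian host like this (illustration only, using tight C packing):

```python
import struct

def dispatch_indirect_bytes(x, y, z):
    """Pack a VkDispatchIndirectCommand: three consecutive uint32 values."""
    return struct.pack("<III", x, y, z)

payload = dispatch_indirect_bytes(64, 64, 1)
print(len(payload))   # 12, i.e. sizeof(VkDispatchIndirectCommand)
print(payload.hex())  # 400000004000000001000000
```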
# How does radiation shielding using absorbing materials work?
I understand that, for example, a thick enough sheet of lead can absorb gamma radiation, but I want to understand what actually happens at the molecular/atomic/subatomic level. Also, can the same logic be applied to cosmic particles? I have tried Googling for an answer, but to no avail. Can someone enlighten me?
It looks to me that you are asking for a review of any kind of possible interaction between matter and various kind of projectiles. Whole books have been written on this! The very short summary is that the energy is dispersed through (multiple) scattering, being transferred by the recoils to the material structure as phonons and finally dispersed as heat. Damages to molecules, nuclei or even nucleons can of course take place according to the energy and the kind of the projectiles. – DarioP Jul 21 '14 at 10:43
There are several different things labeled "radiation". Gamma rays are electromagnetic radiation, similar to visible light but at a higher frequency. X-rays are also electromagnetic radiation. For electromagnetic radiation, elements with heavy nuclei are good shielding. See this Wikipedia article on protection against electromagnetic radiation.
Also called radiation are high speed protons and atomic nuclei.
And the high speed nuclei can vary a lot in velocity. High speed nuclei from outside the solar system are called Cosmic Rays. These tend to be a lot faster than the high speed ions coming from the solar wind or the Van Allen Belts.
Galactic Cosmic Rays are often moving at close to light speed. When such a high-speed nucleus or proton strikes a massive nucleus (such as a lead nucleus), it is like a cue ball breaking a rack on a pool table. You have particles going every which way, forming secondary cosmic rays. To avoid this shower of secondary particles, atoms with small nuclei are desirable. So hydrogen-rich compounds may be better shielding against GCRs. Water is often suggested as a shield against GCRs.
That makes sense about GCRs but what about electromagnetic radiation? – Icarus Jul 20 '14 at 21:21
@Icarus good point. I've edited the first paragraph because of this comment. – HopDavid Jul 21 '14 at 1:33
Radiation can be several things, but since you specifically mentioned lead shielding, let's look at X-rays - a lot of what you learn applies to other radiation as well.
To stop radiation it needs to interact with "something" that makes it give up its energy and momentum. This is how you get the radiation to stop going in the direction it was going.
Now X-rays typically interact with matter (atoms) in one of three ways:
At low energies you can have the photoelectric effect: the energy of the radiation is completely absorbed by the electrons of the atom - so the photon "disappears" and the electron gets all the energy (minus whatever energy was needed to get it detached from the atom - the binding energy). Electrons don't travel very far in matter, so the energy is usually absorbed once a photoelectric interaction occurs. The probability of this interaction depends on the energy of the photon and the $Z$ (atomic number) of the atom - higher $Z$ means much higher probability (I have seen $Z^4$ relationships but I'm not sure how well those hold, and over what range.)
As the energy of the photon increases above the K-edge of the atom, you get Compton scatter dominating: this is an elastic collision between the photon and the electrons in the material and it results in a transfer of momentum and energy from the photon to the electron. The famous Compton equation shows the relationship between incident and final energy of the photon as:
$$E'=\frac{E}{1+\frac{E}{m_0c^2}(1-\cos\theta)}$$
Where $m_0$ is the rest mass of the electron and $\theta$ is the angle between the incident photon (with energy $E$) and the scattered photon (with final energy $E'$).
The more electrons there are in your material, the more effective the stopping power in this range (above 80 keV or so). This is why lead, depleted uranium, bismuth, tungsten, and other such materials are good for shielding.
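As a quick numerical illustration of that relation (a sketch of my own; the electron rest energy is rounded to 511 keV):

```python
import math

M0C2 = 511.0  # electron rest energy, keV

def compton_scattered_energy(E_keV, theta_rad):
    """E' = E / (1 + (E / m0 c^2) * (1 - cos(theta)))."""
    return E_keV / (1.0 + (E_keV / M0C2) * (1.0 - math.cos(theta_rad)))

E = 100.0  # keV incident photon
for theta_deg in (0, 90, 180):
    print(theta_deg, compton_scattered_energy(E, math.radians(theta_deg)))
# 0 deg: 100 keV (no deflection); 180 deg backscatter: ~71.9 keV
```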
At very high energies, you can get pair production: the photon (with more than 1.022 MeV energy) creates an electron/positron pair "out of thin air", giving up 1.022 MeV of energy (which is turned into mass of the particles created).
So to recap: X-rays shielding works by interaction of electrons with photons. Higher density materials improve the probability of Compton scatter; higher atomic number increase photoelectric interaction cross section. Typically, one talks about the half value thickness: the thickness of material that stops half the radiation. Because shielding is a probabilistic process, there is no such thing as "perfect shielding".
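To put numbers on "no perfect shielding" (a sketch of my own; the lead values are approximate and quoted from memory): transmission through a slab of thickness $x$ is $I/I_0=e^{-\lambda x}$, so the half value thickness is $\ln 2/\lambda$.

```python
import math

def transmission(lam_per_cm, x_cm):
    """Fraction of photons passing a slab: exp(-lambda * x)."""
    return math.exp(-lam_per_cm * x_cm)

def half_value_thickness(lam_per_cm):
    """Thickness that stops half the beam: ln(2) / lambda."""
    return math.log(2) / lam_per_cm

lam_lead = 5.55 * 11.35  # approx sigma (cm^2/g) times density (g/cm^3), lead at 100 keV
print(half_value_thickness(lam_lead))  # ~0.011 cm, about a tenth of a millimetre
print(transmission(lam_lead, 0.1))     # 1 mm of lead passes only ~0.2% of the beam
```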
One more point about density of the shielding material:
In some situations, you care about stopping the radiation in the shortest possible distance. This happens for example in a radiation pinhole camera (used in SPECT systems), where you want to have a small opening to let radiation through, but need to stop all radiation outside of that. Such an aperture has to be made of the densest high-Z material you can find. People usually choose gold for this application (http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=949378) - the figure of merit here is the product of density and specific scatter cross section, the linear attenuation coefficient with units of $\mathrm{cm}^{-1}$. The larger this number, the more efficient the material at stopping radiation in a short distance. A couple of examples (all values at 100 keV, attenuation data from http://physics.nist.gov/cgi-bin/Xcom/xcom2-t):
symbol Z density sigma lambda
(g/cm^3) (cm^2/g) (/cm)
Ir 77 22.5 4.86 109
Pt 78 21.5 4.99 107
Au 79 19.3 5.16 100
As you can see, for this particular example the shortest stopping length is obtained for iridium - because although it has a lower Z than gold, it has higher density.
When you are interested in "bulk radiation protection", for example in nuclear reactors, then the question is simply "how do I get a lot of shielding for not a lot of money". Now the size of the shield does not matter very much, and you end up with water - a very cheap and abundant material which is capable of stopping radiation (not just gamma rays, but neutrons as well). This is the material of choice for shielding (spent) reactor fuel. You may have seen the pictures of the blue-glowing fuel rods under water:
source of this image: http://spectrum.ieee.org/image/37182
The glow is Cerenkov radiation - resulting from the fact that particles are traveling "faster than the speed of light". In this case, that's faster than the speed of light in water - which is of course less than the speed of light in vacuum because of the refractive index of water.
My point is - as long as you put "lots of electrons in the way" you will eventually stop gamma rays: if you need to stop them in a short distance, you need dense high-Z material, but that is not always necessary.
In radiation protection, it is recognized that the combination of shielding, distance, and time to exposure all play a role in keeping radiation dose to people as low as possible: ALARA "As Low As Reasonably Achievable".
Finally - a wikipedia link
Water is neither dense (relative to lead, for example) nor does it contain high atomic number atom. Then why do people consider water as a shielding agent? – Icarus Jul 20 '14 at 21:30
Water provides you with lots and lots of electrons for very little money. That makes it a good shielding option for large scale shielding (where size does not matter) – Floris Jul 20 '14 at 21:38
Based on what I understand from your post, if a material is not dense enough, the probability of a photon hitting one of the electrons becomes much lower. Correct? – Icarus Jul 20 '14 at 21:41
@Icarus The figure of merit is roughly the number of electrons per unit area in traversing the whole of the shielding. Dense materials allow you to have a high figure of merit in a short distance (which is very often the thing you want), but in other applications the best engineering deal may be found from maximizing electrons per unit area per dollar. That is often when water gets used. For instance, the on-site cooling ponds for spend fuel at reactor facilities are not compact, but they are cheap (and they provide passice cooling as well). – dmckee Jul 21 '14 at 1:15
Water, and light elements in general, are good for neutrons. – DarioP Jul 21 '14 at 10:52
The two existing answers have noted that EM radiation (X-rays, gamma) is effectively stopped by electrons. There are at least 4 other common types of radiation:
1. Alpha particles (2 protons, 2 neutrons - essentially He4 2+)
2. Beta particles (single electron)
3. Neutrons
4. Ions (other than alpha particles)
The first three are commonly generated by nuclear decay reactions; the 4th category is relevant as it's part of the cosmic particles mentioned. Shielding for them differs. Alphas are quite easy to stop, as are ions. Pretty much any thin layer of matter will stop them; a centimeter of air will already have a measurable effect.
Betas are electrons and are therefore easily stopped by many materials. Because they're lighter than alphas, yet have similar energies, they travel faster. As a result, they penetrate further than alphas.
Neutrons are the odd one out, as they're electrically neutral. No electron cloud is going to stop them; they're stopped by nuclei. But it doesn't take a particularly heavy nucleus for that. Light elements have fewer electron orbitals and can therefore pack more nuclei into a given volume, which compensates for the fact that each nucleus is smaller.
There is a second advantage to light nuclei in stopping neutrons. The target-frame, kinetic-energy loss of the incident particle is largest when the projectile and target have the same mass. Neutrons hitting $^1\mathrm{H}$ nuclei dump energy faster than when they hit any other nucleus. – dmckee Jul 22 '14 at 23:32 |
Andrei Alexandrescu <andrei at ...> writes:
>
>
> I suggest we go with lazy ranges throughout. No memory allocation unless the user asks for it by calling std.array.array(range). For example, splitter() vs. split() was a huge accelerator in virtually all my text processing programs.
>
> Andrei
>
Fine by me as long as it gets in.
I'm a bit curious, though. path2list currently returns an array of slices into the original path. I have a hard time imagining memory allocation this way would be much higher than as a range (unless the original path is also a lazy range).
Next week finals will be over. I'll rewrite the bugger then.
The array of slices itself requires allocation.
Andrei
On 04/26/2010 05:15 PM, Ellery Newcomer wrote:
> Andrei Alexandrescu<andrei at ...> writes:
>
>>
>>
>> I suggest we go with lazy ranges throughout. No memory allocation unless
>> the user asks for it by calling std.array.array(range). For example,
>> splitter() vs. split() was a huge accelerator in virtually all my text
>> processing programs.
>>
>> Andrei
>>
>
> Fine by me as long as it gets in.
>
> I'm a bit curious, though. path2list currently returns an array of slices into the original path. I have a hard time imagining memory allocation this way would be much higher than as a range (unless the original path is also a lazy range).
>
> Next week finals will be over. I'll rewrite the bugger then.
>
>
On 04/26/2010 04:08 PM, Lars Tandle Kyllingstad wrote:
> Are you envisioning a system that auto-detects whether something is a Windows or a POSIX path and converts it to some OS-agnostic internal representation? E.g. something like
>
> // Auto-detect Windows path
> auto path = Path("c:\\foo\\bar.baz");
I wasn't thinking of that. More like some common primitives that could deal with Windows paths and Unix paths. The separator is already abstracted, and the remaining difference is the existence of a drive letter in Windows (that I notice some Unix shells are starting to replace with a protocol such as smb: or sftp:).
> The only other option I can see is to have std.path automatically work with Windows paths on Windows and POSIX paths on POSIX -- which is exactly what I'm suggesting.
>
>
> Anyway, I'm not married to this idea, I just think it's a good one. ;)
>
> I still think something needs to be done to std.path, though (and I'm still volunteering to do it). Did any of my other suggestions seem worthwhile, or are people happy with the module the way it is? Are there other suggestions?
I love the way you set up things here:
http://kyllingen.net/code/ltk/doc/path.html
It's just that I so much believe it would sit better elsewhere. It's a great design hitting on an unfit problem. For example, let's assume you convince me there really is a need for Unix path manipulation on Windows (which I'm not convinced of, but let's say I am). Then I see I can use Windows paths on Unix. Yay! That's what I call a cool design. But wait. When would you need that? Well, most never. That's why I feel there's some lack of fitness in there.
How about this: we focus on an API that allows you to use the alternate separator on Windows ("/") for virtually all Posix primitives. At least in theory, a Windows path without a drive is a Posix path.
Andrei
On 04/27/2010 12:43 AM, Andrei Alexandrescu wrote:
> How about this: we focus on an API that allows you to use the alternate separator on Windows ("/") for virtually all Posix primitives. At least in theory, a Windows path without a drive is a Posix path.
Alright, I won't push it any further. ;) What you're suggesting will cover many use cases, and like I said, it's not *that* important to me.
Below is a listing the API I have in mind. I think it's worth breaking backwards compatibility for a more unified and coherent naming scheme, and I also think it's better to do it now than later. Walter seemed to oppose the idea, what do others think?
// System-dependent constants
string dirSeparator
string altDirSeparator
string pathSeparator
string currentDir
string parentDir
// File extensions
string extension(path)
string setExtension(path, ext)
string setDefaultExtension(path, ext)
string appendExtension(path, ext)
string removeExtension(path, ext=null)
// Extracting drive/directory/filename
string drive(path)
string directory(path)
string filename(path)
string basename(path, suffix)
// Relative/absolute/canonical paths
bool isAbsolute(path)
bool isRelative(path)
bool isCanonical(path)
string toAbsolute(path)
string toRelative(path)
string toCanonical(path)
// Joining/splitting paths
string join(pathComponents...)
SomeRange splitter(path) // cf. Ellery Newcomer's suggestion
// Filename matching
bool wildcardMatch(path, pattern)
bool filenameMatch(path1, path2)
bool filenameCharMatch(char1, char2)
-Lars
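(As an illustration only: here is one possible reading of the extension primitives above, sketched in Python rather than D, so it can only suggest semantics. Whether setExtension appends an extension when none is present, and whether extensions come back without the leading dot, are assumptions on my part, not something the listing pins down.)

import os.path

def extension(path):
    # everything after the last '.' of the last component; '' if there is none
    ext = os.path.splitext(path)[1]
    return ext[1:] if ext else ""

def set_extension(path, ext):
    # replace the existing extension, or append one if absent (assumed semantics)
    return os.path.splitext(path)[0] + "." + ext

def set_default_extension(path, ext):
    # only add an extension if the path has none (assumed semantics)
    return path if extension(path) else path + "." + ext

assert set_extension("report.txt", "pdf") == "report.pdf"
assert set_default_extension("report.txt", "pdf") == "report.txt"
assert set_default_extension("report", "pdf") == "report.pdf"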
On 04/27/2010 03:40 AM, Lars Tandle Kyllingstad wrote:
> On 04/27/2010 12:43 AM, Andrei Alexandrescu wrote:
> Below is a listing of the API I have in mind. I think it's worth breaking
> backwards compatibility for a more unified and coherent naming scheme,
> and I also think it's better to do it now than later. Walter seemed to
> oppose the idea, what do others think?
>
>
> // System-dependent constants
> string dirSeparator
> string altDirSeparator
> string pathSeparator
> string currentDir
> string parentDir
I'd change "currentDir" with e.g. "currentDirSymbol" or something. Otherwise people may actually thing currentDir is pwd. Same about parentDir.
> // File extensions
> string extension(path)
> string setExtension(path, ext)
> string setDefaultExtension(path, ext)
> string appendExtension(path, ext)
> string removeExtension(path, ext=null)
I'm not fond of adding support for extensions. On Unix there's no explicit extension. Extension comes from CP/M with 8 characters for name and 3 characters for extension, which is now long defunct.
> // Extracting drive/directory/filename
> string drive(path)
> string directory(path)
> string filename(path)
> string basename(path, suffix)
"dirname" will be instantly recognized by any Unix user.
> // Relative/absolute/canonical paths
> bool isAbsolute(path)
> bool isRelative(path)
> bool isCanonical(path)
> string toAbsolute(path)
> string toRelative(path)
> string toCanonical(path)
>
> // Joining/splitting paths
> string join(pathComponents...)
> SomeRange splitter(path) // cf. Ellery Newcomer's suggestion
>
> // Filename matching
> bool wildcardMatch(path, pattern)
> bool filenameMatch(path1, path2)
> bool filenameCharMatch(char1, char2)
What does the last do?
Andrei
Sorry, sent this to Andrei's private address again...
On 04/27/2010 09:51 PM, Andrei Alexandrescu wrote:
> On 04/27/2010 03:40 AM, Lars Tandle Kyllingstad wrote:
>> On 04/27/2010 12:43 AM, Andrei Alexandrescu wrote:
>> Below is a listing of the API I have in mind. I think it's worth breaking
>> backwards compatibility for a more unified and coherent naming scheme,
>> and I also think it's better to do it now than later. Walter seemed to
>> oppose the idea, what do others think?
>>
>>
>> // System-dependent constants
>> string dirSeparator
>> string altDirSeparator
>> string pathSeparator
>> string currentDir
>> string parentDir
>
> I'd change "currentDir" to e.g. "currentDirSymbol" or something. Otherwise people may actually think currentDir is pwd. Same about parentDir.
Good point.
>> // File extensions
>> string extension(path)
>> string setExtension(path, ext)
>> string setDefaultExtension(path, ext)
>> string appendExtension(path, ext)
>> string removeExtension(path, ext=null)
>
> I'm not fond of adding support for extensions. On Unix there's no explicit extension. Extension comes from CP/M with 8 characters for name and 3 characters for extension, which is now long defunct.
I completely agree with you.
It's not a case of *adding* support, however. getExt() and defaultExt() are already in the current std.path, so what you're suggesting is *removing* support.
I don't mind, but I'm sure it won't sit well with everyone. Like it or not -- and I sure don't -- extensions are still the primary way of conveying file type information.
>> // Extracting drive/directory/filename
>> string drive(path)
>> string directory(path)
>> string filename(path)
>> string basename(path, suffix)
>
> "dirname" will be instantly recognized by any Unix user.
Good idea. I just realised it's kinda pointless to have both filename() and basename() as well.
>> // Relative/absolute/canonical paths
>> bool isAbsolute(path)
>> bool isRelative(path)
>> bool isCanonical(path)
>> string toAbsolute(path)
>> string toRelative(path)
>> string toCanonical(path)
>>
>> // Joining/splitting paths
>> string join(pathComponents...)
>> SomeRange splitter(path) // cf. Ellery Newcomer's suggestion
>>
>> // Filename matching
>> bool wildcardMatch(path, pattern)
>> bool filenameMatch(path1, path2)
>> bool filenameCharMatch(char1, char2)
>
> What does the last do?
The same as the current std.path.fncharmatch(): On POSIX fncharmatch('a','A') is false but on Windows it's true.
I'm not convinced any of these last three are generally useful, but again, I am wary of *removing* functionality.
-Lars
Lars Tandle Kyllingstad <lars at ...> writes:
>
> I'm not convinced any of these last three are generally useful, but again, I am wary of *removing* functionality.
>
> -Lars
>
past bikeshedding, filenameMatch would be especially useful if it could accept ranges from splitter.
# Hamilton Lin-Manuel Miranda
## Box office and business
The musical's engagement at the Off-Broadway Public Theater was sold-out.[2]
When the musical opened on Broadway, it had a multimillion-dollar advance in ticket sales, reportedly taking in $30 million before its official opening.[62] Hamilton was the second-highest-grossing show on Broadway for the Labor Day week ending September 6, 2015 (behind only The Lion King).[63] As of September 2015, the show had been sold out for most of its Broadway engagement.[3]

Hamilton, like other Broadway musicals, offers a lottery before every show. Twenty-one front-row seats and occasional standing room are given out in the lottery, and chosen winners are able to purchase two tickets at $10 each. Unlike other Broadway shows, Hamilton's lottery process drew in crowds large enough to create a congestion problem for West 46th Street. Even though many people were not able to win the lottery, Hamilton creator Lin-Manuel Miranda prepared mini-performances right before the lotteries were drawn. They were dubbed the '#Ham4Ham' shows, because a winner gave a Hamilton (a $10 bill) in exchange for seeing the show. People were thus able to experience a part of the show even when they did not win the lottery.[64] The lottery was eventually moved online to avoid the increasing crowds and dangerous traffic conditions.[65] On its first day, more than 50,000 people entered, which resulted in the website crashing.[66]

Trevor Boffone, in his essay on HowlRound, wrote: "Ham4Ham follows a long tradition of Latina/o (or the ancestors of present-day Latina/os) theatremaking that dates back to when the events in Hamilton were happening. (...) The philosophy behind this is simple. If the people won't come to the theatre, then take the theatre to the people. While El Teatro Campesino's 'taking it to the streets' originated from a place of social protest, Ham4Ham does so to create accessibility, tap into social media, and ultimately generate a free, self-functioning marketing campaign. In this way, Ham4Ham falls into a lineage of accessibility as a Latina/o theatremaking aesthetic."[67] Following Miranda's departure from the show on July 9, 2016, Rory O'Malley, then playing King George III, took over as the host of Ham4Ham.[68] The Ham4Ham show officially ended on August 31, 2016, after over a year of performances, though the lottery still continues daily.[69]
## Cryptology ePrint Archive: Report 2015/883
Revisiting Sum of CBC-MACs and Extending NI2-MAC to Achieve Beyond-Birthday Security
Avijit Dutta and Goutam Paul
Abstract: In CT-RSA 2010, Kan Yasuda has shown that the sum of two independent Encrypted CBC (ECBC) MACs is a secure PRF with security beyond the birthday bound. It was mentioned in the abstract of that paper that "no proof of security above the birthday bound $(2^{n/2})$ has been known for the sum of CBC MACs" (where $n$ is the tag size in bits). Kan Yasuda's paper did not consider the sum of actual CBC outputs, and hence the PRF-security of the same has been left open. In this paper, we solve this problem by proving the beyond-birthday security of the sum of two CBC MACs. As a tool to prove this result, we develop a generalization of the result of S. Lucks from EUROCRYPT 2000 that the sum of two secure PRPs is a secure PRF. We extend this to the case when the domain and the range of the permutations may have some restrictions. Finally, we also lift the birthday bound of the NI2 MAC construction (the bound was proven in CRYPTO 2014 by Gazi et al.) to beyond birthday by a small change in the existing construction.
Category / Keywords: Beyond Birthday, CBC, ECBC, MAC, NI, NI2, Sum of PRP
Date: received 11 Sep 2015, last revised 13 Sep 2015, withdrawn 14 Sep 2015
Contact author: goutam paul at isical ac in
Available format(s): (-- withdrawn --)
Note: A few typos corrected.
Short URL: ia.cr/2015/883
[ Cryptology ePrint archive ]
Derive formulas for $$\sin(\alpha+\beta)$$ and $$\cos(\alpha+\beta)$$ by using matrices of linear maps.
• #### Hint
Determine the matrix of the composed mapping.
• #### Resolution
When $$f$$ and $$g$$ are the rotations by angles $$\alpha$$ and $$\beta$$, their matrices are:
$$[f]_{KK}= \begin{pmatrix} \cos \alpha & -\sin\alpha \\ \sin \alpha & \cos\alpha \\ \end{pmatrix}$$ and $$[g]_{KK}= \begin{pmatrix} \cos \beta & -\sin\beta \\ \sin \beta & \cos\beta \\ \end{pmatrix}$$
The composition of $$f$$ and $$g$$ is the rotation by angle $$\alpha+\beta$$. Its matrix is:
$$[g\circ f]_{KK}= \begin{pmatrix} \cos (\alpha+\beta) & -\sin(\alpha+\beta) \\ \sin (\alpha+\beta) & \cos(\alpha+\beta) \\ \end{pmatrix}$$
Simultaneously, the following must hold:
$$[g\circ f]_{KK}=[g]_{KK}[f]_{KK}= \begin{pmatrix} \cos \beta & -\sin\beta \\ \sin \beta & \cos\beta \\ \end{pmatrix} \begin{pmatrix} \cos \alpha & -\sin\alpha \\ \sin \alpha & \cos\alpha \\ \end{pmatrix} =\\= \begin{pmatrix} \cos\beta\cos\alpha -\sin\beta\sin\alpha & -\cos\beta\sin\alpha -\sin\beta\cos\alpha \\ \sin\beta\cos\alpha +\cos\beta\sin\alpha & -\sin\beta\sin\alpha +\cos\beta\cos\alpha \\ \end{pmatrix}$$
As both matrices must be equal, we get the desired formulas:
$$\sin (\alpha+\beta) = \sin\beta\cos\alpha +\cos\beta\sin\alpha$$,
$$\cos(\alpha+\beta) = -\sin\beta\sin\alpha +\cos\beta\cos\alpha$$.
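As a quick numerical sanity check of the derived identities (an illustrative Python sketch, not part of the original exercise):

import math

alpha, beta = 0.7, 1.9
# sin(a+b) = sin(b)cos(a) + cos(b)sin(a)
assert abs(math.sin(alpha + beta)
           - (math.sin(beta)*math.cos(alpha) + math.cos(beta)*math.sin(alpha))) < 1e-12
# cos(a+b) = -sin(b)sin(a) + cos(b)cos(a)
assert abs(math.cos(alpha + beta)
           - (-math.sin(beta)*math.sin(alpha) + math.cos(beta)*math.cos(alpha))) < 1e-12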
# Solve each inequality for x. (a) $\ln x < 0$ (b) $e^x > 5$
## (a) $\ln x<0 \Rightarrow x<e^{0} \Rightarrow x<1$. Since the domain of $f(x)=\ln x$ is $x>0,$ the solution of the original inequality is $0<x<1$. (b) $e^{x}>5 \Rightarrow \ln e^{x}>\ln 5 \Rightarrow x>\ln 5$.
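A quick numerical spot-check of both solution sets (an illustrative Python sketch, not part of the original solution):

import math

# (a) ln x < 0 holds exactly on 0 < x < 1
assert math.log(0.5) < 0 and math.log(1.5) > 0
# (b) e^x > 5 holds exactly when x > ln 5
assert math.exp(math.log(5) + 0.01) > 5 > math.exp(math.log(5) - 0.01)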
### Video Transcript
I'm going to solve both of these inequalities algebraically and then confirm them graphically. So first of all, with natural log of x is less than zero, I'm going to use a process called exponentiating, which means that each side of the inequality will be an exponent on the same base, and I'm going to use base e. So we have e raised to the left side of the inequality is less than e raised to the right side of the inequality. And we should know, based on inverse functions, that e to the natural log of x simplifies to just be x, and e to the zero is one, so we have x is less than one. There's one other thing we need to consider, though. Remember the domain of a logarithmic function: x has to be greater than zero. So we have x is less than one, and we have x is greater than zero. When you combine those together, you get zero is less than x is less than one. Now, graphically, if you think about what the natural log of x function looks like, it has its x-intercept at one. Where is it less than zero? Down here, where the y-values are less than zero, and that is between x equals zero and x equals one. Okay, now let's do something similar to the other inequality, and I'm going to take the natural log of both sides. So we have natural log of e to the x is greater than natural log of five. Because of inverse functions, natural log of e to the x simplifies to just be x, so x is greater than natural log of five.
The database contains 4,953 groups, including all transitive subgroups of $S_n$, up to conjugation, for $n\leq 23$.
## Browse Galois groups
Browse by degree $n$ : 2 3 4 5 6 7 8 9 10 11 12 13 14 15
# Archives by date
You are browsing the site archives by date.
## Weekly Papers on Quantum Foundations (39)
Magnetic monopoles and nonassociative deformations of quantum theory. (arXiv:1709.10080v1 [hep-th]) hep-th updates on arXiv.org on 2017-9-29 1:58am GMT Authors: Richard J. Szabo We examine certain nonassociative deformations of quantum mechanics and gravity in three dimensions related to the dynamics of electrons in uniform distributions of magnetic charge. We describe a quantitative framework for nonassociative quantum mechanics in this setting, which exhibits...
## Weekly Papers on Philosophy of Mind (39)
What is really missing from AI? New Scientist - Home on 2017-9-28 5:55pm A festival's search for humanity in AI points to the absence of love – and something even more fundamental. AI could put a stop to electricity theft and meter misreadings New Scientist - Home on 2017-9-23 2:00pm An algorithm...
## On the Status of the Measurement Problem: Recalling the Relativistic Transactional Interpretation
ABSTRACT. In view of a resurgence of concern about the measurement problem, it is pointed out that the Relativistic Transactional Interpretation (RTI) remedies issues previously considered as drawbacks or refutations of the original TI. Specifically, once one takes into account relativistic processes that are not representable at the non-relativistic level (such as particle creation and annihilation, and virtual propagation), absorption...
## Weekly Papers on Quantum Foundations (38)
How to make a quantum black hole with ultra-cold gases. (arXiv:1709.07189v1 [cond-mat.quant-gas]) hep-th updates on arXiv.org on 2017-9-22 12:53am GMT Authors: Ippei Danshita, Masanori Hanada, Masaki Tezuka The realization of quantum field theories on an optical lattice is an important subject toward the quantum simulation. We argue that such efforts would lead to the experimental realizations of quantum black holes. The basic idea...
## Weekly Papers on Philosophy of Mind (38)
The Consciousness of Embodied Cognition, Affordances, and the Brain Latest Results for Topoi on 2017-9-21 5:00am Abstract Tony Chemero advances the radical thesis that cognition and consciousness are actually the same thing. I question this conclusion. Even if we are the brain–body environmental synergies that Chemero and others claim, we will not be able to conclude that consciousness...
## A Reformulation of von Neumann’s Postulates on Quantum Measurement by Using Two Theorems in Clifford Algebra
Elio Conte According to a procedure previously introduced from Y. Ilamed and N. Salingaros, we start giving proof of two existing Clifford algebras, the Si that has isomorphism with that one of Pauli matrices and the Ni,±1 where Ni stands for the dihedral Clifford algebra. The salient feature is that we show that the Ni,±1 may be obtained from the Si algebra when we...
## Weekly Papers on Quantum Foundations (37)
Time in the theory of relativity: on natural clocks, proper time, the clock hypothesis, and all that Philsci-Archive: No conditions. Results ordered -Date Deposited. on 2017-9-15 9:50pm GMT Bacelar Valente, Mario (2013) Time in the theory of relativity: on natural clocks, proper time, the clock hypothesis, and all that. [Preprint] The relativity of simultaneity and presentism Philsci-Archive: No conditions. Results...
## Weekly Papers on Quantum Foundations (36)
From Euclidean Geometry to Knots and Nets Philsci-Archive: No conditions. Results ordered -Date Deposited. on 2017-9-09 2:25am GMT Larvor, Brendan (2017) From Euclidean Geometry to Knots and Nets. [Preprint] Classical analogue of the Unruh effect. (arXiv:1709.02200v1 [gr-qc]) gr-qc updates on arXiv.org on 2017-9-09 1:51am GMT Authors: Ulf Leonhardt, Itay Griniasty, Sander Wildeman, Emmanuel Fort, Mathias Fink In the Unruh effect an observer with constant acceleration...
## Weekly Papers on Quantum Foundations (35)
S-matrix interpretation in categorical quantum mechanics. (arXiv:1708.09383v1 [quant-ph]) quant-ph updates on arXiv.org on 2017-9-02 2:02am GMT Authors: Xiao-Kan Guo We study the $S$-matrix interpretation of quantum theory in light of Categotical Quantum Mechanics. The $S$-matrix interpretation of quantum theory is shown to be a functorial semantics relating the algebras of quantum theory to the effective $S$-matrix formalism. Consequently, issues such as...
# Holographic Algorithms - Equivalence of Bases
I was going through Les Valiant's seminal paper and I had a tough time with Proposition 4.3 on page 10 of the paper.
I cannot see why it is the case that if there is a generator with certain values of $valG$ with a basis $\{(a_1,b_1) \ldots (a_r,b_r)\}$, then there exists some generator with the same $valG$ values for any basis $\{(xa_1,yb_1) \ldots (xa_r,yb_r)\}$ (1st kind) or $\{(xb_1,ya_1) \ldots (xb_r,ya_r) \}$ (2nd kind), for any $x,y \in F$.
Valiant points out the reason in the preceding paragraph: namely, the 1st kind of transformation can be achieved by appending to every input or output node an edge of weight $1$. The 2nd kind of transformation, Valiant says, can be achieved by appending to input or output nodes chains of length $2$ weighted by $x$ and $y$ respectively.
I have not really been able to understand these statements. Maybe they are already clear, but I still cannot see why the above constructions let one realize, with the new basis (of either of the above kinds), any $valG$ values that are realizable with the old basis.
Please help illuminate them to me. On a different note, are there some tensor-free surveys of holographic algorithms available online? Most of them use tensors which, sadly, scare me :-(
Best -Akash
Tensors (in this sense) are just arrays of numbers, so I don't think you'll find tensor-free surveys unless they're completely free of calculations.
The "$T^{\otimes k}$" operation formalizes both the operations of changing basis and attaching gadgets to each output node (in fact I like to think of a change of basis as a sort of gadget operation). Let $\Gamma$ be a generator matchgate with standard signature $M_{i_1i_2\cdots i_k}=u(\Gamma)$. This an array of $2^k$ numbers. The signature in a new basis is given by
$(T^{\otimes k}M)_{i_1i_2\cdots i_k}:=\sum_{i_1',\cdots,i_k'} T_{i_1i_1'} \cdots T_{i_ki_k'} M_{i_1'i_2'\cdots i_k'}$
where $T$ is some two-by-two matrix descring the new basis.
Let $\Gamma'$ be the matchgate formed by adding $k$ new vertices to $\Gamma$, taking these to be the new outputs, and connecting them to the old outputs by an edge of weight $x$. Then the new signature is $C^{\otimes k}M$ where $C_{ij}$ is the matrix $\begin{pmatrix}0&x\\1&0\end{pmatrix}$. If you then perform the change of basis $TC^{-1}$ you get the signature $T^{\otimes k}M$.
• Sorry for the late reply, I was occupied today. I am afraid, due to my limited understanding of tensors, I still cannot understand you. I used to think that the signature of a generator matchgate in the new basis, $S$ was derived from the signature $u(\Gamma)$ in the old basis by the solution $S = S_0$ to $T^{\otimes k} \times S = u(\Gamma)$. I thought Valiant mentioned in his $valG(\Gamma, x)$ example that he just intended to express perfMatch vector as the sum of coefficients wrt to the new basis. I cannot be sure though, for my obvious lack of background with tensors. – Akash Kumar Feb 17 '11 at 4:06
• [contd] Also, I am not able to follow your example with $C^{\otimes k}M$. Could you please elaborate a little more? Thanks -- Akash – Akash Kumar Feb 17 '11 at 4:11
• I'm happy to try to elaborate, but I might just be adding confusing notation. Could you answer this first: if you add edges at each output node, what effect do you think this would have on the signature? Also, that $S_0$ can be expressed as $(T^{-1})^{\otimes k} S$ - I can't remember off-hand what the actual coefficients of $T$ should be in terms of Valiant's $n_0,n_1,p_0,p_1$. – Colin McQuillan Feb 17 '11 at 10:31
• I will try to state my confusion with an example. Consider a generator which is a path of length 3 where all 3 nodes are o/p nodes. The std signature of this generator is $(0,1,1,0,1,0,0,1)$. And the signature of the modified gadget with 3 new nodes, each connected to one o/p node, in the std basis is $x(1,1,1,1,\ 1,1,1,0)$. Could you please continue with this example? I would like to see how $C^{\otimes 3}$, where $C$ has columns $(0,\ 1)^t$ and $(x,\ 0)^t$ with $x=1$, acts on $u(\Gamma)$. Thanks for your patience – Akash Kumar Feb 17 '11 at 13:20
• Let $P_3$ be a path of order 3. Call the vertices x,y,z where y is the middle vertex. $P_3 \setminus Z$ has a perfect matching iff Z is {x}, {z}, or {x,y,z}. So $u(P_3)=(0,1,0,0,1,0,0,1)$. With edges attached the signature is $(1,0,0,1,0,0,1,0)$. Try calculating for example $(C^{\otimes 3} u(P_3))_{1,2,2}=1$ using the formula above. – Colin McQuillan Feb 17 '11 at 14:21
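The exchange above is easy to reproduce numerically. A small sketch (Python with numpy; taking x = 1 so that C is the swap matrix) checking that C tensored with itself three times carries u(P_3) to the signature of the gadget with edges attached:

import numpy as np

C = np.array([[0, 1],
              [1, 0]])                       # attached-edge matrix with weight x = 1
u_P3 = np.array([0, 1, 0, 0, 1, 0, 0, 1])    # standard signature of the path x-y-z

C3 = np.kron(np.kron(C, C), C)               # C^(tensor 3), here an 8x8 permutation
print(C3 @ u_P3)                             # -> [1 0 0 1 0 0 1 0], as stated above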
## Are all objects with irrational lengths measurable?
I was deleting old emails a while ago and I came across questions from some students reading my blog. I have answered quite a number of questions from middle school and high school students via email and Facebook since this blog started. I think some are worth publishing here, so I'll probably post one from time to time. Below is the first Q & A in this series.
Question
Are all objects with irrational lengths measurable?
Yes. In principle, they are measurable.
The number line represents all real numbers. It contains all the rational and irrational numbers. In fact, there is a one-to-one correspondence between the set of real numbers and the set of points on the number line. This means that every real number has a corresponding point on the number line, and every point on the number line has a corresponding real number. Therefore, since we can locate every irrational number on the number line, we can find its distance from 0. This distance represents the irrational length.
## Why are Non-terminating, Repeating Decimals Rational
The set of rational numbers is closed under addition. That is, if we add two rational numbers, we are guaranteed that the sum is also a rational number. The proof of this is quite easy, so I leave it as an exercise for advanced high school students.
Before discussing non-terminating decimals, let me also note that terminating decimals are rational. I think this is quite obvious because terminating decimals can be converted to fractions (and fractions are rational). For example, $0.842$ can be expressed as
$\displaystyle\frac{842}{1000}$.
Further, terminating decimals can be expressed as sum of fractions. For example, $0.842$ can be expressed as
$\frac{8}{10} + \frac{4}{100} + \frac{2}{1000}$.
Since the set of rational numbers is closed under addition, the sum of any number of fractions is also a fraction. This shows that all terminating decimals are fractions.
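The same bookkeeping can be delegated to an exact rational type; a small illustrative sketch in Python:

from fractions import Fraction

x = Fraction(842, 1000)
parts = Fraction(8, 10) + Fraction(4, 100) + Fraction(2, 1000)
assert x == parts == Fraction(421, 500)   # a sum of fractions is again a fraction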
## Where is the Nobel Prize in Mathematics?
The Nobel Prize are prestigious awards given each year to individuals (as well as organizations) who have contributed significantly in cultural and scientific advances.
Alfred B. Nobel, the inventor of dynamite, bequeathed 31 million Swedish kronor in 1895 (about 250 million dollars in 2008) to fund the awards for achievements in Chemistry, Physics, Physiology and Medicine, Literature, and Peace. In 1901, the first set of awards were given, and in 1969, the Nobel Foundation established the Nobel Prize for Economics.
But did you ever wonder why there is no Nobel Prize in Mathematics?
# Chapter 7 - Section 7.3 - Multiplying and Simplifying Radical Expressions - Exercise Set - Page 531: 38
$f(x)=|x-1|\sqrt{5}$
#### Work Step by Step
Factor out 5 in the trinomial to obtain: $f(x)=\sqrt{5(x^2-2x+1)}$ The trinomial is a perfect square whose factored form is $(x-1)^2$. Thus, $f(x)=\sqrt{5(x-1)^2}$ The principal square root of any number/expression is always non-negative. Since $x$ can be any real number, then an absolute value must be applied to the principal square root of $(x-1)^2$ to make it non-negative for all values of $x$. Thus, simplifying the function gives: $f(x)=\sqrt{5(x-1)^2} \\f(x)=|x-1|\sqrt{5}$
# Module Dls.ArrayDenseMatrix
module ArrayDenseMatrix: sig .. end
General purpose dense matrix operations on arrays.
See sundials: The DENSE Module
type t = Sundials.RealArray2.t
A dense matrix accessible directly through a Bigarray.
See sundials: Small dense matrices
See sundials: newDenseMat
#### Basic access
val make : int -> int -> float -> t
make m n x returns an m by n dense matrix with elements set to x.
See sundials: newDenseMat
val create : int -> int -> t
create m n returns an uninitialized m by n dense matrix.
See sundials: newDenseMat
val get : t -> int -> int -> float
get a i j returns the value at row i and column j of a.
val set : t -> int -> int -> float -> unit
set a i j v sets the value at row i and column j of a to v.
val update : t -> int -> int -> (float -> float) -> unit
update a i j f sets the value at row i and column j of a to f v, where v is the current value at that position.
val set_to_zero : t -> unit
Fills the matrix with zeros.
See sundials: setToZero
#### Calculations
val add_identity : t -> unit
Increments a square matrix by the identity matrix.
val matvec : t -> x:Sundials.RealArray.t -> y:Sundials.RealArray.t -> unit
Compute the matrix-vector product $y = Ax$.
Since 2.6.0
See sundials: denseMatvec
val blit : t -> t -> unit
blit src dst copies the contents of src into dst. Both must have the same size.
See sundials: denseCopy
val scale : float -> t -> unit
Multiplies each element by a constant.
See sundials: denseScale
val getrf : t -> Sundials.LintArray.t -> unit
getrf a p performs the LU factorization of the square matrix a with partial pivoting according to p. The values in a are overwritten with those of the calculated L and U matrices. The diagonal belongs to U. The diagonal of L is all 1s. Multiplying L by U gives a permutation of a, according to the values of p: p.{k} = j means that rows k and j were swapped (in order, where p.{0} swaps against the original matrix a).
Raises ZeroDiagonalElement Zero found in matrix diagonal
See sundials: denseGETRF
val getrs : t -> Sundials.LintArray.t -> Sundials.RealArray.t -> unit
getrs a p b finds the solution of ax = b using an LU factorization found by Dls.ArrayDenseMatrix.getrf. Both p and b must have the same number of rows as a.
See sundials: denseGETRS
val getrs' : t -> Sundials.LintArray.t -> Sundials.RealArray.t -> int -> unit
Like Dls.ArrayDenseMatrix.getrs but stores b starting at a given offset.
val potrf : t -> unit
Performs the Cholesky factorization of a real symmetric positive-definite matrix.
See sundials: densePOTRF
val potrs : t -> Sundials.RealArray.t -> unit
potrs a b finds the solution of ax = b using the Cholesky factorization found by Dls.ArrayDenseMatrix.potrf. a must be an n by n matrix and b must be of length n.
See sundials: densePOTRS
val geqrf : t -> Sundials.RealArray.t -> Sundials.RealArray.t -> unit
geqrf a beta work performs the QR factorization of a. a must be an m by n matrix, where m >= n. The beta vector must have length n. The work vector must have length m.
See sundials: denseGEQRF
val ormqr : a:t -> beta:Sundials.RealArray.t -> v:Sundials.RealArray.t -> w:Sundials.RealArray.t -> work:Sundials.RealArray.t -> unit
ormqr q beta v w work computes the product w = Qv. Q is an m by n matrix calculated using Dls.ArrayDenseMatrix.geqrf with m >= n, beta has length n, v has length n, w has length m, and work has length m.
See sundials: denseORMQR
beta : vector passed to Dls.ArrayDenseMatrix.geqrf
v : vector multiplier
w : result vector
work : temporary vector used in the calculation
By mahmoudbadawy, history, 5 years ago,
Hello Codeforces!
I'm glad to announce that on Feb/07/2017 17:05 UTC Codeforces Round #396 for the second division will take place. As usual, First division participants can take part out of competition.
This round was prepared by me and mohammedehab2002.
I'd like to thank moaz123 for helping us prepare the round, zoooma13 for testing some problems, KAN for helping us in contest preparation and for translating the statements into Russian and MikeMirzayanov for the great Codeforces and Polygon platforms.
You will be given 5 problems and 2 hours to solve them.
The scoring distribution will be announced later.
UPD 500-1000-1500-2000-2500
UPD Congratulations to the winners!
Div1+Div2:
Div2:
• +319
» 5 years ago, # | ← Rev. 2 → -332 A true Codeforces fan can not scroll down without upvoting this comment .
• » » 5 years ago, # ^ | +23 *without downvoting this comment
• » » » 5 years ago, # ^ | -26 The first episode of my stand-up. Already available.
» 5 years ago, # | +25 MikeMirzayanov mahmoudbadawy There is a bug with registration. Div1 users can't register unofficially.
• » » 5 years ago, # ^ | +35 Sorry, please try again now.
» 5 years ago, # | +3 Hope to be a good round. Good luck to all participants.
• » » 5 years ago, # ^ | -19 you don't need to say "good luck" buddy. coz good luck is not related to contest anywhere
• » » » 5 years ago, # ^ | +1 coz luck is not rated)
• » » » 5 years ago, # ^ | +40 I don't know man... sometimes when the site doesn't lag at all I feel pretty lucky...
» 5 years ago, # | +2 The email says the round will have 6 problems, but the post says 5. Which one is it?
• » » 5 years ago, # ^ | +5 They are 5 problems.
» 5 years ago, # | -20
• » » 5 years ago, # ^ | +5 That's FlapJack not Chowder.
» 5 years ago, # | 0 is this the 1st Arabic official round on CF ??
• » » 5 years ago, # ^ | ← Rev. 2 → +12 No, round #67 was written by ahmed_aly: Codeforces Beta Round #67 (Div. 2)
• » » » 5 years ago, # ^ | 0 problems are really good
• » » 5 years ago, # ^ | ← Rev. 2 → +2
• » » » 5 years ago, # ^ | +19 That's Great , one day we will have a round prepared by us
• » » » » 5 years ago, # ^ | 0 Actually in Syria we have some great problem seters, they should have prepared a round here long time ago :D
» 5 years ago, # | -107 Is it rated??
• » » 5 years ago, # ^ | ← Rev. 2 → -18 There's always that one guy who asks this question.
» 5 years ago, # | +9 mohammedehab2002 shares a name with an Egyptian weightlifting superstar. Guessing not the same guy but would be cool if so.
» 5 years ago, # | +19 short blog :) I love it
• » » 5 years ago, # ^ | -9 Don't have character?
• » » » 5 years ago, # ^ | 0 I think it might be president Elsissi :P
» 5 years ago, # | -36 Copy Codeforces testcases by clicking on them :O An extension for Chrome only. Installation: download it from https://drive.google.com/file/d/0B4HQVLPL4OXRUHJNa0dnQkNYNzA/view (you can see a GIF: https://media.giphy.com/media/26xBNH5TzjX0z90R2/giphy.gif). Go to "chrome://extensions/" and drag and drop the file into the page. Good Luck And Have Fun <3
• » » 5 years ago, # ^ | -14 Thank you :(
» 5 years ago, # | -22 Here's to clever yet easy problems! Hoping to become candidate master finally...
• » » 5 years ago, # ^ | 0 Your new rating is determined by your contest rank.
» 5 years ago, # | 0 Fast and accurate wins the race.
» 5 years ago, # | 0 Rating != Knowledge Even newbies can think of a beautiful problem, which can be challenging for masters!
• » » 5 years ago, # ^ | ← Rev. 2 → 0 @VoR_ZaKon : All I wanted to say was that rating doesn't define one's thinking ability. PS — He deleted his comment. His comment was — "Ok then I am better than tourist".
• » » » 5 years ago, # ^ | +4 He is banned (his comments and posts deleted automaticly). LOL :D
» 5 years ago, # | -29 Prepare by pupil?
• » » 5 years ago, # ^ | -16 Prepared by a specialist and Candidate Master
» 5 years ago, # | +34 I have short.
• » » 5 years ago, # ^ | ← Rev. 3 → 0 I think there were a lot of replys 5 minutes ago..UPD: I think admin deleted them.
» 5 years ago, # | 0 the new contetst with specialist builder. this contest will be nice
» 5 years ago, # | +3 New authors often surprise CodeForces community (in good way)... Trust on interesting contest, good luck to all!)
» 5 years ago, # | +5 New contest, new opportunities to learn and possibly get better. Thank you CodeForces
» 5 years ago, # | +135 What a stupid contest
• » » 5 years ago, # ^ | +4 why do you think so?
• » » » 5 years ago, # ^ | +4 problem D and E are old problems
• » » » » 5 years ago, # ^ | +1 how did you solve E ?
• » » » » 5 years ago, # ^ | -8 Where did you see problem E before? I have seen it in a way that you are given weights for the edges but I have never seen it with weights for the vertices. And the solution for the "edge" version is different than the one for this problem.
• » » » » » 5 years ago, # ^ | +3 Solve the problem considering the weight on the edges, with the starting distance a[centroid]. Now you need to invert the bits that appear in a[centroid], because if the path goes through it, you will count it twice. Do that and then the rest is identical.
• » » » » » 5 years ago, # ^ | 0 Hi can you mention the link of the problem which given weights are for the edges ? I'd like to practice on that one too. Thanks
» 5 years ago, # | 0 already got these hackers
» 5 years ago, # | -6 belive it or not, this was the worst contest I have ever seen
• » » 5 years ago, # ^ | +7 Maybe your worst result ever ;)
• » » » 5 years ago, # ^ | 0 Should I say something or you know what I'm thinking about ?
• » » » » 5 years ago, # ^ | ← Rev. 2 → +1 Come on :)I was just kidding!
• » » » » » 5 years ago, # ^ | +3 Cool :)
• » » 5 years ago, # ^ | +15 E was a genuinely interesting problem IMHO. ABD were around the average, C was way too tedious. This may not be the best contest, but there is no way I could call this the worst contest ever.
• » » » 5 years ago, # ^ | ← Rev. 2 → 0 C wasn't tedious (if what I did was correct). The second part of C was a simple for loop, and the first and third parts were n^2 DP.
• » » » » 5 years ago, # ^ | +5 I did the same as you, but honestly I found that writing essentially three solutions was quite annoying. Maybe tedious is an overstatement.
• » » » » » 5 years ago, # ^ | +4 Maybe they intended the problem to look difficult.
» 5 years ago, # | +23 I'll just leave this here...
• » » 5 years ago, # ^ | +8 now, may I'll see legendary newbie rate !
» 5 years ago, # | ← Rev. 2 → +98 Hi! I'm working on rating calculation tool. It's close to be ready! You could find this contest's rating prediction here. I hope chrome extension would be ready till next contest.
• » » 5 years ago, # ^ | +8 Awesome, I love it :)
• » » 5 years ago, # ^ | +5 Good job!
• » » 5 years ago, # ^ | 0 Nice. But I think I will just use the page instead of the extension. Thank you for this.
• » » 5 years ago, # ^ | 0 For me, it doesn't seem to be working? It says I got rank as 291 when I got 99th place, and predicts rating decrease even though seed is 620?
• » » » 5 years ago, # ^ | ← Rev. 2 → 0 It's because server isn't so powerful to process changes just in time. Your rank and rating will change soon:) It seems you are going well!
• » » 5 years ago, # ^ | +9 By checking a few random contestants, I found that the predicted rating changes and real rating changes are always differ by 5.It would be awesome if this difference could be fixed as well :D Maybe it's something related to the unrated contestants? (Not sure)Can't wait to see the accuracy of your updated hardwork!
• » » » 5 years ago, # ^ | +5 Thank you! I will try to fix this)
» 5 years ago, # | +5 In B, we can write O(n^2) because maximum value of n for which answer is NO is ~90 ?
• » » 5 years ago, # ^ | +3 You can sort and itertate through in B.
• » » 5 years ago, # ^ | 0 I tried to find a hack test case for O(n ^ 3) and O(n ^ 2) solutions.. :D
• » » » 5 years ago, # ^ | 0 For O(n^3) use the first let's say 10 fibonacci numbers and then their multiples For example 100000 1 1 2 3 5 8 13 21 34 55 110 165 220 275 330 385 etc
• » » » » 5 years ago, # ^ | 0 Hm. I think this test case is wrong because "220 275 330" are valid sides
• » » » » » 5 years ago, # ^ | 0 The guy that I hacked checked if (v[ 1 ],v[ 2 ],v[ 3 ]) from a triangle, then(v[ 1 ],v[ 2 ],v[ 4 ])... then( v[ 2 ],v[ 3 ],v[ 4 ]) and so on, so on this test it gets TLE
• » » » » » » 5 years ago, # ^ | 0 Oooh. ok)
• » » 5 years ago, # ^ | 0 Have you proven that the maximum value is 90? Cause I was thinking for half an hour for a test that O(n^2) doesn't work for it but I couldn't come up with anything.
• » » » 5 years ago, # ^ | ← Rev. 2 → +14 The smallest test case with answer "NO" is the Fibonacci sequence. So if N >= 45, it is always possible to make a triangle.
• » » » 5 years ago, # ^ | 0 Let's make a sorted array with the answer "NO": 1 1 2 3 5 ... Fibonacci numbers! ai <= 1e9, so an n with the answer "NO" has to be small.
• » » » 5 years ago, # ^ | ← Rev. 3 → 0 Yep, it is easy to prove that the maximum value is less than 100. To build a correct triangle you need 3 values, let's say a <= b <= c. If a + b > c a triangle can be built, so if we want to create the longest sequence of numbers that can't result in a triangle, we have to create something like the Fibonacci sequence: t[i] = t[i-1] + t[i-2]. You can look at the Fibonacci numbers and see that already around the 45th one is bigger than 10^9. Then, if n > 100 print yes, otherwise use the O(n^3) algorithm.
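A sketch of the approach discussed in this subthread (illustrative Python, not anyone's actual submission): after sorting, it suffices to check adjacent triples, and by the Fibonacci argument above the answer is always YES once n exceeds roughly 45 for values up to 10^9.

def has_triangle(sides):
    a = sorted(sides)
    # after sorting, if any triple forms a triangle then some adjacent triple does
    return any(a[i] + a[i + 1] > a[i + 2] for i in range(len(a) - 2))

assert has_triangle([1, 1, 2, 3, 5, 8, 13]) is False   # Fibonacci: only degenerate triples
assert has_triangle([1, 2, 3, 4]) is True              # 2, 3, 4 works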
» 5 years ago, # | ← Rev. 6 → +5 How to solve problem D? I could think of an approach using 2 dsu's. but did'nt code
• » » 5 years ago, # ^ | ← Rev. 2 → +13 I used 1 dsu and an additional array of antonyms (if i-th and j-th sets are antonymical, ant[i]=j and ant[j] = i).You just have to update the sets_merge operation to handle antonyms: if you merge a and b, ant[a] and ant[b] should also be merged.
• » » » 5 years ago, # ^ | 0 how do you update and handle the sets in this case? 1 aba aca 1 ada aea 1 afa aga 2 afa aea
• » » » » 5 years ago, # ^ | +3 you have to think this way: 2 strings are synonyms, so you do a union operation on them. But what happens if one of them, or both, have an antonym? If they both have one, you unite their antonyms. If only one has, you make sure that the root knows who that antonym is. If they are antonyms, you update the antonyms array. But there are again subcases. Let's say the words are a and b: if a had an antonym, then b and ant[a] are synonyms, so you unite them; if b had an antonym, then a and ant[b] are synonyms. Again, be careful that the root always knows who its antonym is.
• » » 5 years ago, # ^ | +6 Yes it can be solved using dsu, you just need to color the verticles (the words) and find the answer for each query
• » » 5 years ago, # ^ | +18 You can use 2 nodes for a single node. The new graph will have 2*N nodes. For synonyms, add edge between (u,v) and (u', v'). For antonyms, add edge between (u, v') and (v, u'). Use DSU.
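A sketch of the two-nodes-per-word trick from the comment above, in Python (u and u + n play the roles of u and u'; a full solution would also reject relations that contradict earlier ones, which is omitted here):

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

n = 4                  # number of distinct words
d = DSU(2 * n)         # node u = "word u", node u + n = "the opposite of word u"

def add_synonyms(u, v):
    d.union(u, v)
    d.union(u + n, v + n)

def add_antonyms(u, v):
    d.union(u, v + n)
    d.union(v, u + n)

def relation(u, v):    # 1 = synonyms, 2 = antonyms, 3 = no known relation
    if d.find(u) == d.find(v):
        return 1
    if d.find(u) == d.find(v + n):
        return 2
    return 3

add_synonyms(0, 1)
add_antonyms(1, 2)
assert relation(0, 2) == 2 and relation(0, 3) == 3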
» 5 years ago, # | +5 Can someone explain C? I thought it was DP but I couldn't figure out how to put it together.
• » » 5 years ago, # ^ | 0 Let's say the string is from i to j. Then: for (k = i; k <= j; k++) { if ((i, k) is a valid substring) dp[i][j] += dp[k+1][j]; }
• » » 5 years ago, # ^ | 0 You are right, it was a DP indeed :)
• » » 5 years ago, # ^ | 0 to calculate number of ways to split the substring(l,r) you have to choose the leftmost splitter i starting from l until it violates the conditions mentioned or reach r. after choosing leftmost splitter i you have to find number of ways to split substring (i+1,r) and add it with the result
• » » 5 years ago, # ^ | ← Rev. 2 → 0 Yes, it's DP. I used two functions: 1) Number of different partitions of prefix with length i when rightmost word has length j 2) Minimum number of words in partitions of prefix with length i when rightmost word has length j
• » » 5 years ago, # ^ | 0 Let dp[i] denote the number of ways to split the substring[0,i) in a valid manner. Then the answer is obviously dp[n]. We have base case dp[0] = 1. Then dp[i] += dp[j]%MOD if substring(j,i] is valid for 0 <= j < i. You can think of it as dp-ing on the possible spots to cut the string. Question 2 and 3 can be answered by updating along the way.
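The prefix DP just described, as a short Python sketch; here valid(j, i) is a placeholder predicate for "the substring s[j:i] may serve as one word" under whatever constraints the problem imposes:

MOD = 10**9 + 7

def count_splits(n, valid):
    # dp[i] = number of valid ways to split the prefix s[:i]
    dp = [0] * (n + 1)
    dp[0] = 1                       # the empty prefix has exactly one split
    for i in range(1, n + 1):
        for j in range(i):          # try s[j:i] as the last word of the prefix
            if valid(j, i):
                dp[i] = (dp[i] + dp[j]) % MOD
    return dp[n]

The maximum word length and the minimum number of words can be tracked alongside, with two more arrays filled in by the same double loop.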
» 5 years ago, # | +12 I loved D. Thanks for a great contest.
» 5 years ago, # | 0 Lost a lot of points on C, because I thought there should be at least one cut :/
» 5 years ago, # | 0 with 10 more minutes I would have finished debugging D and solved all problems. :( contest is much easier than usual
• » » 5 years ago, # ^ | 0 Can you tell me the idea behind E, please?
• » » » 5 years ago, # ^ | +8 I used centroid decomposition to solve E
• » » » » 5 years ago, # ^ | ← Rev. 3 → 0 I suppose it shouldn't fit in the TL (because it's O(N*logN*logMaxA)). UPD. Yes, you got TL14. The correct solution (DP on the tree for each bit) doesn't use centroid decomposition and has O(N*logMaxA) complexity.
• » » » » » 5 years ago, # ^ | +5 Yeah you are right i got TLE :(
• » » » » » 5 years ago, # ^ | +5 Hmm... I solved it using centroid decomposition and it passed just fine.
• » » » 5 years ago, # ^ | ← Rev. 2 → 0 Perform DP for each bit counting the ways with xor = 1 and xor = 0.
• » » » 5 years ago, # ^ | +1 Root the tree at an arbitrary vertex. Let's calculate the answer for all bits separately. For each vertex u calculate the number of pairs (u, v), where v is some vertex in the subtree of u so that the path weight for that bit is 1, and also so that the path weight is 0. This can be done through dynamic programming.Handling paths (u, w) that pass through v (so that v != u and v != w) can be done through iterating through the children of v.
• » » » » 5 years ago, # ^ | 0 I could not understand your approach. Can you elaborate a little please.
• » » » » 5 years ago, # ^ | 0 For each bit i, assign to node u the value of the i-th bit of a[u]. The problem becomes: count the number of paths in the tree that have an odd number of 1s. Then we multiply that count by 2^i.
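A sketch of that per-bit reduction (illustrative Python; a contest solution would avoid deep recursion, and counting the single-vertex paths u..u is my reading of this thread):

import sys

def xor_distance_sum(n, a, adj, bits=30):
    sys.setrecursionlimit(max(1000, 2 * n + 100))
    total = 0
    for b in range(bits):
        d = [(x >> b) & 1 for x in a]   # bit b of each vertex value
        odd = 0                         # paths u < v whose XOR has bit b set

        def dfs(v, parent):
            nonlocal odd
            f = [0, 0]                  # f[p]: u in subtree(v) with parity(v..u) == p
            f[d[v]] = 1                 # the one-vertex path v..v
            for c in adj[v]:
                if c == parent:
                    continue
                g = dfs(c, v)
                # reindex: parity(v..u) = d[v] XOR parity(c..u)
                h = [g[d[v]], g[1 - d[v]]]
                for p in (0, 1):
                    # a path x..y through v has parity f-side ^ h-side ^ d[v]
                    odd += f[p] * h[1 ^ p ^ d[v]]
                f[0] += h[0]
                f[1] += h[1]
            return f

        dfs(0, -1)
        odd += sum(d)                   # single-vertex paths with bit b set
        total += odd << b
    return total

# tiny check: two vertices with values 1 and 2 -> paths 1, 2, 1^2 = 3; total 6
assert xor_distance_sum(2, [1, 2], {0: [1], 1: [0]}) == 6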
• » » 5 years ago, # ^ | 0 Although I agree with u based on D and E, C was better than usual. Just check out round #394, C was a cakewalk.
» 5 years ago, # | +5 Awesome problemset! Devoted my whole time in solving E but couldn't. If there would have been integers attached with edges instead of nodes then it was quite easily solvable but this little twist made the contest complete fun for me. Thanks !
» 5 years ago, # | +18
» 5 years ago, # | 0 Can someone give an idea for D?
• » » 5 years ago, # ^ | +1 While adding pairs, you can use disjoint-set union to see if the words are already associated with the same meaning (be it antonymous or synonymous). Ignore those edges when adding them. Run DFS for each component, coloring its synonyms white and antonyms black. Then you can answer the queries (of both kind)
» 5 years ago, # | ← Rev. 2 → +17 Problem D is here Link However without the part of answering some queries on relations.
• » » 5 years ago, # ^ | +16 And here is E :P
• » » » 5 years ago, # ^ | +22 I'm also pretty sure B is somewhere online
• » » » » 5 years ago, # ^ | 0 Second POI.
• » » » » 5 years ago, # ^ | +4 B is a simpler version of this https://www.codechef.com/FEB17/problems/MAKETRI
• » » » » » 5 years ago, # ^ | +39 That's like saying A is a simpler version of the KMP algorithm...
• » » » » 5 years ago, # ^ | 0
• » » » 5 years ago, # ^ | +28 It is different from E which has weights on vertices, not on edges.
• » » » » 5 years ago, # ^ | 0 The solution would still be the same.
• » » » » » 5 years ago, # ^ | 0 Can you please explain how we can derive solution for weight on nodes. I had a solution with dfs and components counting if there were weights on edges. How to convert that in this problem ?
• » » » » » » 5 years ago, # ^ | ← Rev. 2 → +10 Just consider the given graph as a rooted tree; then the weight of node i, a[i], can be transformed into the weight of the edge between i and the parent of i. While calculating the answer with DFS, just don't forget to handle the current node specially. We should also add the weight of the current node cur, a[cur], to the cost of the paths passing through node cur.
• » » » » » » 5 years ago, # ^ | +4 You have to deal with every bit alone. Dfs from the root and calculate the value of the bit i on the path from the root to each node.Now you only need the number of ones and the number of zeros in each subtree. You can combine the answers from different subtrees as for each two nodes the path length will be pathfromroot[x] ^ pathfromroot[y] ^ val[lca]There is only 2 outputs for each bit depending on the value of val[lca] so it can be calculated easily
» 5 years ago, # | +1 Just a couple of minutes more and I would've solved D. It's nice to spend the entire contest on C and realize that D is solvable when you have only 15 mins left... Thanks for the contest, anyway.
» 5 years ago, # | 0 Can i see some hack cases for 'A' that you people used. I tried hacking, but failed. Thanks
• » » 5 years ago, # ^ | 0 I could only hack one submission that returned -1 when one string contained the other, so my hack case was: aaaa aa
• » » » 5 years ago, # ^ | 0 +1
• » » 5 years ago, # ^ | 0 abc abc 24496013
• » » » 5 years ago, # ^ | 0 +1
» 5 years ago, # | ← Rev. 2 → +3 For problem B, what should be the hack case for an O(n^3) solution which didn't use sorting? Update: This is a hack case for O(n^3): 100000 1 2 3 4 5 6 7 8 9 .......... 100000
• » » 5 years ago, # ^ | +1 100000 1 1 2 3 5 8 13 21 34 55 110 165 220 275 330 385 etc
• » » » 5 years ago, # ^ | 0 In this way, the value won't fit in integer and will be > 10^9, so I think this won't work.
• » » » » 5 years ago, # ^ | +10 How would the values exceed 10^9? The elements excluding the first nine are all multiples of 55, and so the last element in this list would be 55 × (10^5 - 9) = 5499505 < 10^9
• » » » » » 5 years ago, # ^ | -16 The 44'th number of this sequence ( i.e. 1 1 2 3 5 8 13 ......) is 1134903170 which is > 10^9
• » » » » » » 5 years ago, # ^ | +10 Dude, read it until the end.
• » » » » » » 5 years ago, # ^ | +15 Didn't you notice the sequence Flavius mentioned is not simply the Fibonacci sequence? It is not a Fibonacci sequence starting from 55. So the 44th element of that sequence does not exceed 10^9; it is just 55 × (44 - 9) = 1925. I hope you can read carefully next time :)
• » » » 5 years ago, # ^ | 0 220 275 330 form a Valid Triangle.
• » » » » 5 years ago, # ^ | +5 But the simplest O(N^3) nested loop will still iterate a long time through the first few elements until it finds the first valid triangle.
FOR i FROM 1 TO n
  FOR j FROM (i + 1) TO n
    FOR k FROM (j + 1) TO n
      Check(A[i], A[j], A[k])
In cases where the logic is similar to the above code, i iterates from 1 to 10 but still can't find any solution. Each iteration of i takes O(N^2) as it will still check all possible j and k with i < j < k. In other words, the code will TLE before finding a valid triangle. BTW, the first valid triangle should be 110, 165 and 220
• » » » 5 years ago, # ^ | 0 The sequence overflows after 43 terms. For O(N^3) solutions, the hack used can be any sequence with 100000 terms and the first term as 1: 100000 1 2 3 4 5 6 ...since for i=1, the loop runs O(n^2) times and we get NO for all cases, this will give TLE.
• » » » » 5 years ago, # ^ | +5 Is it really hard to notice that the sequence is not just simply Fibonacci sequence?
• » » » » » 5 years ago, # ^ | 0 Sorry. My bad.
• » » 5 years ago, # ^ | +3 There isn't, with more than 45 elements it's always possible and O(n^3) obviously works for n = 45 http://codeforces.com/blog/entry/50280?#comment-342293
• » » » 5 years ago, # ^ | 0 O(n^3) solution got TLE on test case 33
• » » » » 5 years ago, # ^ | ← Rev. 2 → 0 Test 33 is abcd abc, so I'm guessing there was a bugged while instead of 3 forsedit: disregard this comment, wrong problem
• » » » » » 5 years ago, # ^ | 0 You are talking about problem A, while were talking about problem B.
• » » » » » » 5 years ago, # ^ | 0 Sorry, my mistake. Problem B's case 33 are just numbers from 1 to 10^5, so by checking in O(n^3) the first 45 numbers the solution (2, 3, 4) will show up
• » » 5 years ago, # ^ | ← Rev. 2 → 0 5 2 2 2 100 1000 and 6 1 10 100 10 1 1; correct answers: YES | YES. Edit: these are tests with which I generally hacked some B codes. Misread your comment a bit... O(n^3) might not be hackable, since for n approx. > 100 the code should always say "YES". See the discussion a few posts up about the Fibonacci sequence.
• » » » 5 years ago, # ^ | 0 O(n^3) solution will also give AC output for your inputs.
• » » » 5 years ago, # ^ | 0 But have you seen that in system test, O(n^3) solution got TLE on test case 33 ?
• » » » » 5 years ago, # ^ | 0 I tried to hack Atai solution for n=10000 with input as 1,2,3,4..........10000 but it showed unsuccessful.. But the same sol... is showing tle for test case 33 system test... how is my hack solution passing and system test failing... failed system soln http://codeforces.com/contest/766/submission/24500965 passed hack soln http://codeforces.com/contest/766/hacks/296935/test
• » » » » » 5 years ago, # ^ | +5 You unfortunately forgot to add an extra zero. The max input size is 10^5, where your hack is only 10^4.
• » » » » » » 5 years ago, # ^ | 0 it was n^3 soln if it failed at 100000 it will also fail at 10000
• » » » » » » » 5 years ago, # ^ | ← Rev. 2 → +5 The reason is he selects 1 as his first side length, and then selects 2, and then iterates from 3-10^4 to find a valid length but fails. He then switches his second leg to be 3, and iterates third leg from 4-10^4. You see this will continue until he selects 2 as his first leg.Here, he will immediately find a triangle (2,3,4). This process is n^2 for this test case (because of the early break case). If you would have used 10^5, he would have done (10^5)^2, which is 10^10 and then he would have TLE'd. Sorry, bad luck.
• » » 5 years ago, # ^ | 0 100000 1 1000 10004 10006 10008 10010 10012 10014 10016 10018 10020 10022...
» 5 years ago, # | +20 How can this user, r_clover, solve both problem B and D within a minute?
• » » 5 years ago, # ^ | 0 Wrote code for B, forgot submitting, then wrote code for D, submitted D, realized he had forgotten submitting B and submitted B.
• » » 5 years ago, # ^ | 0 Exactly ! and the funny thing is that took him 4 mins to solve A and 7 mins for both A and D ...
• » » 5 years ago, # ^ | +37 Coding styles are different too... He was not alone :)...
• » » 5 years ago, # ^ | 0 His results/submissions have disappeared
• » » 5 years ago, # ^ | +26 In the same way as you did here and got disqualified :
• » » » 5 years ago, # ^ | 0 S for SAVAGE
» 5 years ago, # | 0 Problems are good i think...i hope there would be a chance today...
» 5 years ago, # | +3 guys is that site is true ?! http://cfpredictor.us-west-2.elasticbeanstalk.com/roundResults.jsp?contestId=766
» 5 years ago, # | -16 So, Codeforces started copying problem from UVa :\ :\ :P
» 5 years ago, # | +1 That moment when you failed both C and E only because of wrong answer on n = 1 case...
» 5 years ago, # | 0 Did anyone else failed system test on test 22 on problem C.
» 5 years ago, # | ← Rev. 2 → 0 Thanks for this good contest . I really like these problems.
» 5 years ago, # | +3 Why are codeforces div2 contests becoming easy?
• » » 5 years ago, # ^ | +47 Maybe you're getting better?
• » » » 5 years ago, # ^ | +7 E div2 in 'div1+div2 rounds', are harder than E div2 in 'only div2 rounds'...
• » » » » 5 years ago, # ^ | 0 But it shouldn't be! div2-only contests should be interesting for div1 users too.
• » » 5 years ago, # ^ | -6 What's the problem with it?
• » » » 5 years ago, # ^ | -18 Solving hard problems makes you strong.
• » » » » 5 years ago, # ^ | ← Rev. 2 → -14 AB should be easy because before solving harder problems (CDE) we have to solve easier problems first.
• » » 5 years ago, # ^ | ← Rev. 2 → -11 That's true for a d1, but it isn't true for d2, so I don't think it is a big problem since the round is intended only for d2.
» 5 years ago, # | 0 guys, I tried to hack this guy's code for A with the test case "abc xyz", but it's weird that it was an unsuccessful attempt? His code:
s = input()
t = input()
if s in t:
    if len(s) == len(t):
        print(-1)
    else:
        print(abs(len(s) - len(t)))
elif t in s:
    if len(s) == len(t):
        print(-1)
    else:
        print(abs(len(s) - len(t)))
else:
    if len(s) > len(t):
        print(len(s))
    else:
        print(len(t))
• » » 5 years ago, # ^ | 0 The code works perfectly for your case. It is because your case belongs to the third condition (the else: line)As if s in t: in Python means checking if the string s is a substring of t, while in your case, abc is not a substring of xyz, so s in t returns FALSE in your case. And same for the t in s condition as well.As a result, the code goes here: if len(s) > len(t): print(len(s)) else: print(len(t)) And thus, print 3 as the output, which is the correct answer.
» 5 years ago, # | +24 Interesting thing about problem B is that stupid solution which uses random works fine and passes all tests. Solution: 24513380
• » » 5 years ago, # ^ | +5 It's not so stupid. If n > 50 then you can always find such triangle so chance of finding one randomly is high. In other hand when n <= 50, randoming is pretty much like using n^3 brute force, so as I said it's not a dumb solution.
» 5 years ago, # | 0 Div2 C, rip to all of those who got WA on test 36
» 5 years ago, # | 0 Will be editorial?
• » » 5 years ago, # ^ | 0 It's already here: link
» 5 years ago, # | 0 I implemented B in a somewhat bad way: I used 3 for loops, but the third one is redundant, so the time complexity of my solution is O(n^2). After locking and seeing other solutions I thought that my solution could be hacked with a TLE, but it indeed passed the system testing. By intuition it was clear that finding such a test case was difficult, so I wanted to know whether it is possible to hack my solution with a certain sequence or not, and how it passed the system test. http://codeforces.com/contest/766/submission/24497421
• » » 5 years ago, # ^ | 0 check the editorial and you will understand why O(n^3) passed system test
» 5 years ago, # | 0 I got a time limit exception for my solution to the second question written in Python. After rewriting it in C++, it passed. What does that mean?
n = input()
C = map(int, raw_input().split())
C.sort()
flag = False
for i in range(n):
    for j in range(i+1, n):
        for k in range(j+1, n):
            if C[i]+C[j] > C[k] and C[i]+C[k] > C[j] and C[k]+C[j] > C[i]:
                flag = True
                break
print "YES" if flag else "NO"
• » » 5 years ago, # ^ | 0 It means Python is slow. See it for more information
» 5 years ago, # | 0 Good round. Where can I find a tutorial for these tasks, for training?
• » » 5 years ago, # ^ | 0 I'm sorry I have found — http://codeforces.com/blog/entry/50294
» 5 years ago, # | 0 I know I'm a little late, I had to work yesterday :(Problem D is exactly this problem (with different input/output format obviously)https://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=1099
» 5 years ago, # | 0 I have calculated the complexity of the code to be NlogN*maxL + MlogN*maxL + 2*Mlog + N*logN + Q, similar to the question author's (N+M+Q)*logN*maxL, but I am getting TLE on test 13. Could somebody please review my code? I know I haven't run the DFS from the root node, but this should pass too. Please help! http://ideone.com/EhwB7S
» 5 years ago, # | 0 who can solve this problem http://informatics.mccme.ru/moodle/mod/statements/view3.php?chapterid=3092&run_id=249r32910#1
• » » 5 years ago, # ^ | ← Rev. 4 → 0 First traverse the whole array and find the sum. If the sum is odd, return -1. Then use DP with size = N/2 and total = sum: DP[size][total] = DP[size-1][total] || (DP[size-1][total - Arr[i]] for all i). Return DP[N/2][SUM];
• » » » 5 years ago, # ^ | 0 Can you send the code
• » » » » 5 years ago, # ^ | 0 I don't understand
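To make the recurrence above concrete, here is a short Python sketch under my reading of the linked problem (split the array into two halves of equal size and equal sum); the count loop runs downward so each element is used at most once:

def can_split_equally(arr):
    n, total = len(arr), sum(arr)
    if n % 2 or total % 2:
        return False
    k, half = n // 2, total // 2
    reach = [set() for _ in range(k + 1)]   # reach[c]: sums attainable with c picks
    reach[0].add(0)
    for x in arr:
        for c in range(k, 0, -1):           # downward, so x is not reused
            reach[c] |= {s + x for s in reach[c - 1]}
    return half in reach[k]

assert can_split_equally([1, 2, 3, 4]) is True    # {1, 4} and {2, 3}
assert can_split_equally([1, 1, 1, 5]) is False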
» 5 years ago, # | ← Rev. 2 → 0 Hey guys. How to solve E? I can't get this prob.
» 5 years ago, # | 0 Hey guys. How to solve this problem https://www.e-olymp.com/ru/contests/7949/problems/66445? I can't get this prob.
# Important and timely question for me.
#### DanoDragon
##### New member
I feel like an idiot for asking, but I have never got this concept down. I would appreciate any help on this question.
"Many radio stations in the US have call letters that begin with a W or K and have four letters.
a) How many arrangements of four letters are possible as call letters?
b) What is the probability of having call letters KIDS?
I do want the answer, but what I really need is information on how to solve this problem.
Thanks out there,
Danny
#### morson
##### Full Member
Possibilities:
W _ _ _
K _ _ _
If it started with a W, then for the second letter there are 25 letters to pick from, leaving 24 letters to pick from for the next, and 23 for the last. So 25 × 24 × 23 arrangements, no? That's assuming you can't use the same letter repeatedly! If you can, there are $$\displaystyle 26^3$$ possibilities if it starts with W. Then I suppose you'd just double the number of arrangements to cover the case of K being the first letter.
For your second question, the chance of something happening is the number of ways it can occur divided by the total number of outcomes (above paragraph).
#### Random
##### New member
Are they talking about calling numbers like 1-800-234-KDCA or the station name? There is a big difference.
#### DanoDragon
##### New member
See, here's the thing. This is for an elementary class that doesn't use exponents, so this answer seems too large. I'm in a teaching class that is showing us how to teach students, and on this topic I am lost. In this text, large numbers have not been included, so I feel I have missed something. I want this information so I can be an effective teacher and explain it when students have a question. (Unlike when I asked in high school.)
So, I can be greatly mistaken.
Danny
#### DanoDragon
##### New member
Posted as written in the text....to be honest, I'm not even sure K & W are supposed to be a part of the first part of the problem.
Danny
#### Random
##### New member
I was going to type this before, but wasn't sure if "call" meant telephone, in which case you would have to deal with the letters on the numbers, so here goes:
For part A:
How many possibilities are there for the first letter? (K and W)
How many letters are in the alphabet?
How many letters are in the alphabet?
How many letters are in the alphabet?
You need to multiply all these together. You need to think about how many possibilities there are for each character. Since the first character must be W or K, you only have 2 possibilities. The other 3 letters could be any letter, and there are 26 letters in the alphabet, so you multiply all of these together. So, 2*26*26*26 = 35152.
There are 35152 possible station names.
For part 2 (this is where I stopped typing before because there would only be 1 possible station!):
If KIDS is the name of the radio station... it is only one possible name so 1/35152
Very poor question, but multiplying like above removes the exponents... I think I learned this type of probability before I touched exponents. |
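For completeness, the arithmetic above in a couple of lines of Python (assuming repeated letters are allowed):
total = 2 * 26 * 26 * 26    # W or K, then any of 26 letters three times
print(total)                # 35152
print(1 / total)            # probability of KIDS, about 2.8e-05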
# Thermal conductivity of conductor material
Thermal conductivity of conductor material at 20$^{\circ}$C. Values are taken from standard IEC 60287-3-3, chapter 4.2 for copper and aluminium and from the engineering toolbox for other materials.
Symbol
$k_{c}$
Unit
W/(K.m)
Related
$M_{c}$
$\rho_{cr}$
Choices
| Material | Value |
| --- | --- |
| Cu | 384.62 |
| Al | 204.08 |
| Brz | 110 |
| CuZn | 187.97 |
| Ni | 91.0 |
| SS | 16.31 |
[lltx] kerning; am i mad?
Khaled Hosny khaledhosny at eglug.org
Sun Jun 6 20:14:27 CEST 2010
On Mon, Jun 07, 2010 at 03:19:25AM +0930, Will Robertson wrote:
> Hi,
>
> It's time for me to finish for the night, but I think I'm going crazy.
> Just yesterday I generated the following test file for fontspec:
>
>
>
> But after messing around and probably updating some stuff here and there, I now get:
>
>
>
> No more kerning!
> Here's a test file to replicate what I'm seeing:
>
> \documentclass{article}
> \begin{document}
> \font\1=name:Verdana\1 Test
> \end{document}
>
> What do you see?
>
> I can't replicate the old output that has proper kerning, even going back to older versions of luaotfload. I've tried TL2010 and TL2009. Am I mad? Where has my kerning gone?
Using "\font\1=name:Verdana:script=latn\1 Test" I get kerning, so for some reason fontspec stopped applying default script.
--
Khaled Hosny
Arabic localiser and member of Arabeyes.org team
Free font developer |
# Smooth Sailing
• \$32.00
This formula, formerly called Stressless, is for anxious and/or depressed people. It does not contain St. John's Wort (as many cannot take that and some pharmaceuticals simultaneously). Because it is a balancer (and not a booster or suppressor), both sides of the mood coin (and people that get both) can take this supplement.
This tincture comes in a 4 oz. dropper bottle, about 120 doses.
Ingredients (Locally Sourced Whenever Possible): Gluten Free Alcohol, Water, Reishi*, Shiitake*, Maitake*, Oyster Mushroom*, Astragalus*, Tulsi*, Eleuthero*, & Purslane* (*organic)
How to Use:
Start with 30 drops (a half dropper, which is easy to obtain by squeezing the bulb). Some people may need to double this, others will not based on body mass, metabolism, and other factors. Take this regularly first thing in the morning and in the early evening. You may want to add a mid-day dose if it tapers off at that time for you. |
# oriented_bounding_box_numpy
compas.geometry.oriented_bounding_box_numpy(points)[source]
Compute the oriented minimum bounding box of a set of points in 3D space.
Parameters
points (array-like) – XYZ coordinates of the points.
Returns
array – XYZ coordinates of 8 points defining a box.
Raises
QhullError – If the data is essentially 2D.
Notes
The oriented (minimum) bounding box (OBB) of a given set of points is computed using the following procedure:
1. Compute the convex hull of the points.
2. For each of the faces on the hull:
1. Compute face frame.
2. Compute coordinates of other points in face frame.
3. Find “peak-to-peak” (PTP) values of point coordinates along local axes.
4. Compute volume of box formed with PTP values.
3. Select the box with the smallest volume.
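The procedure can be sketched in a few lines of NumPy/SciPy. This is an illustrative re-implementation of the steps above, not the actual compas source, and it returns only the best volume together with the winning frame:
import numpy as np
from scipy.spatial import ConvexHull

def obb_sketch(points):
    """Search over hull faces, following steps 1-3 above."""
    points = np.asarray(points, dtype=float)
    hull = ConvexHull(points)
    best_volume, best_frame = np.inf, None
    for simplex in hull.simplices:
        a, b, c = points[simplex]
        # step 2a: orthonormal face frame (two in-plane axes plus the normal)
        u = (b - a) / np.linalg.norm(b - a)
        n = np.cross(b - a, c - a)
        n = n / np.linalg.norm(n)
        v = np.cross(n, u)
        R = np.array([u, v, n])
        # step 2b: coordinates of all points in the face frame
        local = (points - a) @ R.T
        # steps 2c-2d: peak-to-peak extents and the box volume they span
        extents = np.ptp(local, axis=0)
        volume = extents.prod()
        if volume < best_volume:
            best_volume, best_frame = volume, (a, R, local.min(axis=0), extents)
    return best_volume, best_frame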
Examples
Generate a random set of points with $$x \in [0, 10]$$, $$y \in [0, 1]$$ and $$z \in [0, 3]$$. Add the corners of the box such that we know the volume is supposed to be $$30.0$$.
>>> points = np.random.rand(10000, 3)
>>> bottom = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
>>> top = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]])
>>> points = np.concatenate((points, bottom, top))
>>> points[:, 0] *= 10
>>> points[:, 2] *= 3
Rotate the points around an arbitrary axis by an arbitrary angle.
>>> from compas.geometry import Rotation
>>> from compas.geometry import transform_points_numpy
>>> R = Rotation.from_axis_and_angle([1.0, 1.0, 0.0], 0.3 * 3.14159)
>>> points = transform_points_numpy(points, R)
Compute the volume of the oriented bounding box.
>>> from compas.geometry import length_vector, subtract_vectors, close
>>> bbox = oriented_bounding_box_numpy(points)
>>> a = length_vector(subtract_vectors(bbox[1], bbox[0]))
>>> b = length_vector(subtract_vectors(bbox[3], bbox[0]))
>>> c = length_vector(subtract_vectors(bbox[4], bbox[0]))
>>> close(a * b * c, 30.)
True |
# What are the physical manifestations of radial nodes?
I realize that nodes (both angular and radial) are areas with zero probablility of finding an electron. I realize that plotting the square of the radial eigenfunction for an orbital will give the probability density of finding an electron at a given distance from the nucleus.
So for the 1s orbital (zero nodes), the graph of the square of the eigenfunction will be one hump. For the 2s orbital (one node) there will be a small hump followed by a large hump with a zero point in between them (talking about the square of the eigenfunction, not the eigenfunction itself).
For the 1s orbital, the peak of the hump will represent the most probable radius at which the electron will be found from the nucleus. The probability will decrease as the radius increases or decreases from the most probable radius (similar to a Gaussian distribution), and the electron will have zero probability of being found within the nucleus of the atom (0 radius).
And from what I understand the radial node(for the 2s orbital) is physically manifested at the point where the 1s orbital and 2s orbital meet (my instructor used the analogy of russian nesting dolls), and on the NON squared eigenfunction plot of the 2s orbital, the node is represented where the wave function crosses the x axis going from +y to -y.
So here is what I'm really wondering (please correct me on anything stated previously if I am mistaken). Is the first (smaller) hump on the squared radial eigenfunction (probability density function) for the 2s orbital the probability that the electron is in the 1s orbital (I believe this would be called an excited state if I'm not mistaken)? This would make the most sense to me, as it would be saying that it is possible for a 2s electron to be in either the 1s orbital or the 2s orbital, but not in between (where the wave function goes from +y to -y).
Edit: Perhaps it is incorrect to say an electron from the 2s orbital has the possibility of being in the 1s orbital. What I really mean is the electron may occupy the same space as the 1s orbital, but since orbitals are just 3d areas where an electron is most likely to be found perhaps I am not incorrect in saying that a 2s electron occupies the 1s orbital since the 1s orbital is technically just a region in space. My guess is that the probability of a 2s electron occupying the same space as the 1s orbital is lower than it occupying the space of the 2s orbital due to repulsion by the electrons that already occupy the 1s orbital.
Here is a picture representing everything I've tried to explain so far:
• You have it all backwards. The point where the 1s orbital and 2s orbital meet is not important at all, and in particular has nothing to do with the radial node of 2s. (These are different points, in fact.) When an electron sits on 2s, well, this means it sits totally, 100% on 2s and not on 1s. It does not care at all about 1s. It does not even "know" whether 1s exists. True, that electron spends part of its own existence in the region of the first hump where most of 1s also resides, but then again, every orbital overlaps with every other. It's not like orbitals own the space they occupy. – Ivan Neretin Sep 8 '16 at 18:36
• – DavePhD Sep 8 '16 at 18:39
I realize that plotting the square of the radial eigenfunction for an orbital will give the probability density of finding an electron at a given distance from the nucleus.
This isn't quite right. The square of the radial function will give the probability density for an s-orbital, but this is not the probability of finding the electron at a given distance from the nucleus, because the number of points a given distance from the nucleus increases with $r^2$. To get the plots in Fig. c, the radial function squared is multiplied by $r^2$.
For example, for 1s, the probability density is greatest at the nucleus, but by multiplying by $r^2$ when r=0, a radial probability density of zero is obtained.
Furthermore, as you can see from the graphs on pages 187 and 188 here, for 2s and 3s the most probable location (highest probability density) is at the nucleus (r=0). Again, only by multiplying by $r^2$ when r=0 is a radial probability density of zero obtained.
So "Fig. a" in the OP is misleading, the central region of 2s and 3s should be shaded much more intensely than the outer region(s).
So here is what Im really wondering (please correct me on anything stated previously if I am mistaken). Is the first (smaller) hump on the squared radial eigenfunction (probability density function) for the 2s orbital the probability that the electron is in the 1s orbital (I believe this would be called an excited state if im not mistaken)? This would make the most sense to me, as it would be saying that it is possible for a 2s electron to be in either the 1s orbital or 2s orbital, but not in between (where the wave function goes from +y to -y)
No, the 2s electron is only in the 2s orbital.
• So why does the probability distribution curve have a smaller 1st hump (smaller radius) than the 2nd hump (larger radius) for the 2s orbital if the probability that the electron is closer to the nucleus is higher? – Keaton Sep 8 '16 at 19:32
• @Ki11akd0g the probability that the electron is in a given infinitesimal volume element (probability density) is highest at the nucleus, but the probability that it is a given distance from the nucleus (radial probability density) peaks at the outermost hump, primarily because a spherical shell has infinitely more points than a single point, and because a bigger-radius shell has a greater area than a small-radius shell. – DavePhD Sep 8 '16 at 19:39
• I actually understand the concept of nodes now; what was misleading is the color patterning depicted in the images. I was considering the 2s orbital to contain a 1s orbital inside. The 2s orbital and 1s orbital are completely separate: although they may overlap and occupy some of the same space, they are separate entities. – Keaton Sep 8 '16 at 19:42
• AND I understand what you meant by the probability distribution vs the radial probability distribution. The probability of finding an electron decreases as you move further from the nucleus (on a straight-line path away from the nucleus). But there are a lot more points at which the radius equals a specific value the further you move away from the nucleus (there is a greater area of points at r=2 than at r=1). So it's the product of the probability of actually finding an electron at a given distance from the nucleus and the greater area to which that probability applies. THANKS SO MUCH! – Keaton Sep 8 '16 at 20:08
# How do you evaluate 48^(4/3)*8^(2/3)*(1/6^2)^(3/2) ?
May 2, 2016
${48}^{\frac{4}{3}} \cdot {8}^{\frac{2}{3}} \cdot {\left(\frac{1}{6^2}\right)}^{\frac{3}{2}} = {2}^{\frac{13}{3}} / {3}^{\frac{5}{3}}$
#### Explanation:
${48}^{\frac{4}{3}} \cdot {8}^{\frac{2}{3}} \cdot {\left(\frac{1}{6^2}\right)}^{\frac{3}{2}}$
= ${\left(2 \times 2 \times 2 \times 2 \times 3\right)}^{\frac{4}{3}} \cdot {\left(2 \times 2 \times 2\right)}^{\frac{2}{3}} \cdot {\left({\left(2 \times 3\right)}^{- 2}\right)}^{\frac{3}{2}}$
=${\left({2}^{4} \times 3\right)}^{\frac{4}{3}} \cdot {\left({2}^{3}\right)}^{\frac{2}{3}} \cdot {\left(2 \times 3\right)}^{- 2 \times \frac{3}{2}}$
= ${2}^{4 \times \frac{4}{3}} \cdot {3}^{\frac{4}{3}} \cdot {2}^{3 \cdot \frac{2}{3}} \cdot {\left(2 \times 3\right)}^{- 3}$
= ${2}^{\frac{16}{3}} \cdot {3}^{\frac{4}{3}} \cdot {2}^{2} \cdot {2}^{- 3} \cdot {3}^{- 3}$
= ${2}^{\frac{16}{3} + 2 - 3} \times {3}^{\frac{4}{3} - 3}$
= ${2}^{\frac{13}{3}} \times {3}^{- \frac{5}{3}}$
= ${2}^{\frac{13}{3}} / {3}^{\frac{5}{3}}$
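A quick numerical check of the simplification (my addition, not part of the original answer):
lhs = 48**(4/3) * 8**(2/3) * (1 / 6**2)**(3/2)
rhs = 2**(13/3) / 3**(5/3)
print(lhs, rhs)                  # both about 3.23044
print(abs(lhs - rhs) < 1e-12)    # True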
Math Help - Prove by Induction
1. Prove by Induction
$\sum_{n=0}^\infty {{n+k} \choose k}\ x^n = {1 \over {(1-x)^{k+1}}}$
where $|x| < 1$
Can anyone lead me here? Such questions get the better of me all the time. How should I go about improving?
Much appreciated.
2. Let K = 0.
LHS = $1 + x + x^2 + x^3 + ....= {1 \over {1 - x}}$= RHS
using the summation of geometric progression ${a \over {1 - r}}$
For K = 2,
LHS = ${2 \choose 2} x^0 + {3 \choose 2} x^1 + {4 \choose 2} x^2 + ... \ \ \ = \ 1 + 3x + 6x^2 + 10x^3 + ...$
Now I cannot use the geometric progression, and hence I come unstuck.
For RHS,
it's ${1 \over {1 - 3x + 3x^2 - x^3}} = {1 \over {{\sum_{k=0}^3} {3 \choose k} (-x)^k}}$
using Binomial Theorem
3. Originally Posted by chopet
$\sum_{n=0}^\infty {{n+k} \choose k}\ x^n = {1 \over {(1-x)^{k+1}}}$
where $|x| < 1$
Can anyone lead me here? Such questions get the better of me all the time. How should I go about improving?
Much appreciated.
I don't have the time right now, but to me it looks like a job for an induction proof.
-Dan
4. Use generating functions,
$\frac{1}{1-x} = 1+x+x^2+x^3+...$
So,
$\frac{1}{(1-x)^{k+1}} = \frac{1}{1-x}\cdot ... \cdot \frac{1}{1-x}$.
Which becomes,
$(1+x+x^2+...)(1+x+x^2+...)...(1+x+x^2+...)$
Which is the LHS. |
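A quick computational check of the generating-function identity (my addition, shown for $k = 2$):
from math import comb

def coeff_lhs(n, k):
    # coefficient of x^n on the left-hand side
    return comb(n + k, k)

def coeff_rhs(n, k):
    # 1/(1-x) has all series coefficients 1; each further division
    # by (1-x) turns the coefficient list into its prefix sums
    coeffs = [1] * (n + 1)
    for _ in range(k):
        coeffs = [sum(coeffs[:i + 1]) for i in range(n + 1)]
    return coeffs[n]

print(all(coeff_lhs(n, 2) == coeff_rhs(n, 2) for n in range(10)))  # True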
# Definition:Real Group Element
## Definition
Let $G$ be a group.
Let $g \in G$.
Then $g$ is a real element (of $G$) if and only if it is conjugate to its inverse:
$\exists h \in G : hgh^{-1} = g^{-1}$ |
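As a quick illustration (not part of the original page), one can check computationally that every element of the symmetric group $S_3$ is real:
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
real = all(
    any(compose(compose(h, g), inverse(h)) == inverse(g) for h in S3)
    for g in S3
)
print(real)  # True: each permutation is conjugate to its inverse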
## Lecture 8: Friday, September 6, 2013
Over the next couple of lectures, we will generalize cyclic groups $Z_n$ and dihedral groups $D_n$. Given positive integers $m$, $n$, and $k$, formally define the abstract group
$D(m,n,k) = \left \langle \gamma_0, \, \gamma_1, \, \gamma_\infty \ \biggl| \ {\gamma_0}^m = {\gamma_1}^n = {\gamma_\infty}^k = \gamma_0 \, \gamma_1 \, \gamma_\infty = 1 \right \rangle.$
Such a group is called a Triangle Group, although it is also known as a von Dyck Group after the German mathematician Walther Franz Anton von Dyck (1856 — 1934). Today, we focus on why these are called “triangle groups.”
### Basic Properties
It is easy to see that $D(m, n, k) = D(k, m, n) = D(k, n, m)$, that is, the ordering of $m$, $n$, and $k$ does not matter. Indeed, using the substitutions $\gamma_0 = s$, $\gamma_1 = r$, and $\gamma_{\infty} = ( s \, r)^{-1}$, we have the following diagram:
\begin{matrix} {\begin{aligned} u & = (s \, r)^{-1} \\ v & = s \\ u \, v & = r^{-1} \end{aligned}} & {\begin{aligned} u^k & = 1 \\ v^m & = 1 \\ (u \, v)^n & = 1 \end{aligned}} & D(k,m,n) = \left \langle u, v \ \biggl| \ u^k = v^m = (u \, v)^n = 1 \right \rangle \\ & & \uparrow \\ {\begin{aligned} s & = \gamma_0 \\ r & = \gamma_1 \\ s \, r & = {\gamma_\infty}^{-1} \end{aligned}} & {\begin{aligned} s^m & = 1 \\ r^n & = 1 \\ (s \, r)^k & = 1 \end{aligned}} & D(m,n,k) = \left \langle r, s \ \biggl| \ s^m = r^n = (s \, r)^k =1 \right \rangle \\ & & \downarrow \\ {\begin{aligned} z & = s \, r \\ w & = r^{-1} \\ z \, w & = s \end{aligned}} & {\begin{aligned} z^k & = 1 \\ w^n & = 1 \\ (z \, w)^m & = 1 \end{aligned}} & D(k,n,m) = \left \langle z, w \ \biggl| \ z^k = w^n = (z \, w)^m =1 \right \rangle \end{matrix}
### Triangles in the Affine Real Plane
We give a geometric interpretation of this group using matrices. Given positive integers $m$, $n$, and $k$ which satisfy the identity $\dfrac {1}{m} + \dfrac {1}{n} + \dfrac {1}{k} = 1$, define the quantities
\begin{aligned} x_P & = 0 & & & y_P & = 0 & & & & & A & = \dfrac {\pi}{m} \\[10pt] x_Q & = \cos A \, \sin B + \sin A \, \cos B & & & y_Q & = 0 & & & \text{in terms of} & & B & = \dfrac {\pi}{n} \\[10pt] x_R & = \cos A \, \sin B & & & y_R & = \sin A \, \sin B & & & & & C & = \dfrac {\pi}{k}. \end{aligned}
Denoting $\mathbb A^2(\mathbb R)$ as the affine real plane, consider the affine points $P = (x_P, y_P)$, $Q = (x_Q,y_Q)$ and $R = (x_R, y_R)$ as well as the transformations $\gamma_0, \, \gamma_1, \, \gamma_\infty: \mathbb A^2(\mathbb R) \to \mathbb A^2(\mathbb R)$ defined by
\begin{aligned} \gamma_0: \qquad \left[ \begin{matrix} x \\[5pt] y \end{matrix} \right] & \qquad \mapsto \qquad \left [ \begin{matrix} \cos \, 2 A & -\sin \, 2 A \\[5pt] \sin \, 2 A & \cos \, 2 A \end{matrix} \right ] \left[ \begin{matrix} x - x_P \\[5pt] y - y_P \end{matrix} \right] + \left[ \begin{matrix} x_P \\[5pt] y_P \end{matrix} \right] \\[10pt] \gamma_1: \qquad \left[ \begin{matrix} x \\[5pt] y \end{matrix} \right] & \qquad \mapsto \qquad \left [ \begin{matrix} \cos \, 2 B & -\sin \, 2 B \\[5pt] \sin \, 2 B & \cos \, 2 B \end{matrix} \right ] \left[ \begin{matrix} x - x_Q \\[5pt] y - y_Q \end{matrix} \right] + \left[ \begin{matrix} x_Q \\[5pt] y_Q \end{matrix} \right] \\[10pt] \gamma_\infty: \qquad \left[ \begin{matrix} x \\[5pt] y \end{matrix} \right] & \qquad \mapsto \qquad \left [ \begin{matrix} \cos \, 2 C & -\sin \, 2 C \\[5pt] \sin \, 2 C & \cos \, 2 C \end{matrix} \right ] \left[ \begin{matrix} x - x_R \\[5pt] y - y_R \end{matrix} \right] + \left[ \begin{matrix} x_R \\[5pt] y_R \end{matrix} \right] \end{aligned}
Proposition.
Continue notation as above.
• The angles $A$, $B$, and $C$ sum to $180^\circ$, that is, $A + B + C = \pi$. Moreover, $\cos^2 A + \cos^2 B + \cos^2 C + 2 \, \cos A \, \cos B \, \cos C = 1$.
• $V = \{ P, \, Q, \, R \} \subseteq \mathbb A^2(\mathbb R)$ is a triangle with angles $A$, $B$, and $C$ and area $\dfrac {1}{2} \, \sin A \, \sin B \, \sin C$.
• The transformation $\gamma_0$ is a rotation around $P$ by $(2 \pi/m)$ radians counterclockwise, the transformation $\gamma_1$ is a rotation around $Q$ by $(2 \pi/n)$ radians counterclockwise, and the transformation $\gamma_\infty$ is a rotation around $R$ by $(2 \pi/k)$ radians counterclockwise.
• The compositions ${\gamma_0}^m = {\gamma_1}^n = {\gamma_\infty}^k = \gamma_0 \, \gamma_1 \, \gamma_\infty = 1$ are the identity transformation, while the composition
$\gamma_1 \circ \gamma_0: \qquad \left[ \begin{matrix} x \\[5pt] y \end{matrix} \right] \qquad \mapsto \qquad \left [ \begin{matrix} \cos \, 2 C & \sin \, 2 C \\[5pt] -\sin \, 2 C & \cos \, 2 C \end{matrix} \right ] \left[ \begin{matrix} x - x_R \\[5pt] y + y_R \end{matrix} \right] + \left[ \begin{matrix} x_R \\[5pt] -y_R \end{matrix} \right]$
is a rotation by $(2 \pi/k)$ radians clockwise about the point $(x_R, \, -y_R)$, the mirror image of $R$ across the $x$-axis. In particular $\gamma_1 \circ \gamma_0 \neq \gamma_0 \circ \gamma_1 = {\gamma_\infty}^{-1}$, while the composition $\gamma_\infty \circ \gamma_1 \circ \gamma_0$ is a nontrivial translation and therefore has infinite order. Consequently, $D(m,n,k)$ is a finitely generated group which is not abelian and has infinitely many elements.
We leave the proof as an exercise for the reader.
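As a quick sanity check (not part of the notes), the first bullet can be verified numerically in Python for $(m, n, k) = (2, 3, 6)$:
import math

m, n, k = 2, 3, 6
A, B, C = math.pi / m, math.pi / n, math.pi / k
print(math.isclose(A + B + C, math.pi))  # True: angles sum to 180 degrees
identity = (math.cos(A)**2 + math.cos(B)**2 + math.cos(C)**2
            + 2 * math.cos(A) * math.cos(B) * math.cos(C))
print(math.isclose(identity, 1.0))       # True: the cosine identity holds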
There are only finitely many positive integers $m$, $n$, and $k$ satisfying the conditions above, and up to symmetry they fit into the following table:
| Group $D(m, n, k)$ | $m$ | $n$ | $k$ | Triangle $V = \{ P, Q, R \}$ |
| --- | --- | --- | --- | --- |
| Symmetries of the kisrhombille | 2 | 3 | 6 | $30^\circ - 60^\circ - 90^\circ$ |
| Symmetries of the kisquadrille | 2 | 4 | 4 | $45^\circ - 45^\circ - 90^\circ$ |
| Symmetries of the deltille | 3 | 3 | 3 | $60^\circ - 60^\circ - 60^\circ$ |
These are examples of wallpaper groups. The images of the triangle $V$ by transformations of the elements in $D(m,n,k)$ form a tiling of the plane $\mathbb A^2(\mathbb R)$. This explains why $D(m,n,k)$ is called a “triangle group”.
### Triangles on the Unit Sphere
Triangle groups $D(m,n,k)$ are much more interesting when the integers $m$, $n$, and $k$ are a bit larger. For example,
\begin{aligned} Z_n & = \left \langle r \, \biggl| \, r^n = 1 \right \rangle & \simeq & \left \langle \gamma_0, \, \gamma_1, \, \gamma_\infty \, \biggl| \, {\gamma_0}^1 = {\gamma_1}^n = {\gamma_\infty}^n = \gamma_0 \, \gamma_1 \, \gamma_\infty = 1 \right \rangle = D(1,n,n); \\[10pt] D_n & = \left \langle r, \, s \, \biggl| \, s^2 = r^n = (s \, r)^2 = 1 \right \rangle & \simeq & \left \langle \gamma_0, \, \gamma_1, \, \gamma_\infty \, \biggl| \, {\gamma_0}^2 = {\gamma_1}^n = {\gamma_\infty}^2 = \gamma_0 \, \gamma_1 \, \gamma_\infty = 1 \right \rangle = D(2,n,2). \end{aligned}
We will use this to study triangles on the unit sphere $S^2(\mathbb R) = \left \{ (u,v,w) \in \mathbb A^3(\mathbb R) \, \biggl| \, u^2 + v^2 + w^2 = 1 \right \}$. Given positive integers $m$, $n$, and $k$ which satisfy the inequalities $\dfrac {1}{m} + \dfrac {1}{n} + \dfrac {1}{k} > 1$ and $m, \, n, \, k \geq 2$, define the quantities
\begin{aligned} x_P & = 0 & & & x_Q & = \dfrac {\sqrt{\delta}}{\sin A \, \sin B} & & & x_R & = \cos A \, \dfrac {\sqrt{\delta}}{\sin C \, \sin A} \\[5pt] y_P & = 0 & & & y_Q & = 0 & & & y_R & = \sin A \, \dfrac {\sqrt{\delta}}{\sin C \, \sin A} \\[5pt] z_P & = 1 & & & z_Q & = \dfrac {\cos C + \cos A \, \cos B}{\sin A \, \sin B} & & & z_R & = \dfrac {\cos B + \cos C \, \cos A}{\sin C \, \sin A} \end{aligned} \quad \text{where} \quad \begin{aligned} A & = \dfrac {\pi}{m} \\[5pt] B & = \dfrac {\pi}{n} \\[5pt] C & = \dfrac {\pi}{k} \end{aligned}
and $\delta = 1 - \bigl( \cos^2 A + \cos^2 B + \cos^2 C + 2 \, \cos A \, \cos B \, \cos C \bigr) > 0$. Denoting $S^2(\mathbb R)$ as the unit sphere, consider the affine points
\begin{aligned} P & = \left[ \begin{matrix} x_P \\[5pt] y_P \\[5pt] z_P \end{matrix} \right] = \gamma_P \left[ \begin{matrix} 0 \\[5pt] 0 \\[5pt] 1 \end{matrix} \right] \\[5pt] Q & = \left[ \begin{matrix} x_Q \\[5pt] y_Q \\[5pt] z_Q \end{matrix} \right] = \gamma_Q \left[ \begin{matrix} 0 \\[5pt] 0 \\[5pt] 1 \end{matrix} \right] \\[5pt] R & = \left[ \begin{matrix} x_R \\[5pt] y_R \\[5pt] z_R \end{matrix} \right] = \gamma_R \left[ \begin{matrix} 0 \\[5pt] 0 \\[5pt] 1 \end{matrix} \right] \end{aligned} \quad \text{where} \quad \begin{aligned} \gamma_P & = \left[ \begin{matrix} 1 & 0 & 0 \\[5pt] 0 & 1 & 0 \\[5pt] 0 & 0 & 1 \end{matrix} \right] \\[5pt] \gamma_Q & = \left[ \begin{matrix} z_Q & 0 & x_Q \\[5pt] 0 & 1 & 0 \\[5pt] -x_Q & 0 & z_Q \end{matrix} \right] \\[5pt] \gamma_R & = \left[ \begin{matrix} x_R \, z_R/\sqrt{x_R^2 + y_R^2} & - y_R/\sqrt{x_R^2 + y_R^2} & x_R \\[5pt] y_R \, z_R/\sqrt{x_R^2 + y_R^2} & x_R/\sqrt{x_R^2 + y_R^2} & y_R \\[5pt] -\sqrt{x_R^2 + y_R^2} & 0 & z_R \end{matrix} \right] \end{aligned}
as well as the rotations $\gamma_0, \, \gamma_1, \, \gamma_\infty: S^2(\mathbb R) \to S^2(\mathbb R)$ defined by
\begin{aligned} \gamma_0: & \qquad \left[ \begin{matrix} x \\[5pt] y \\[5pt] z \end{matrix} \right] \qquad & \mapsto \qquad \left( \gamma_P \left[ \begin{matrix} \cos 2 A & -\sin 2 A & 0 \\[5pt] \sin 2 A & \cos 2 A & 0 \\[5pt] 0 & 0 & 1 \end{matrix} \right] {\gamma_P}^{-1} \right) \left[ \begin{matrix} x \\[5pt] y \\[5pt] z \end{matrix} \right] \\[5pt] \gamma_1: & \qquad \left[ \begin{matrix} x \\[5pt] y \\[5pt] z \end{matrix} \right] \qquad & \mapsto \qquad\left( \gamma_Q \left[ \begin{matrix} \cos 2 B & -\sin 2 B & 0 \\[5pt] \sin 2 B & \cos 2 B & 0 \\[5pt] 0 & 0 & 1 \end{matrix} \right] {\gamma_Q}^{-1} \right) \left[ \begin{matrix} x \\[5pt] y \\[5pt] z \end{matrix} \right]\\[5pt] \gamma_\infty: & \qquad \left[ \begin{matrix} x \\[5pt] y \\[5pt] z \end{matrix} \right] \qquad & \mapsto \qquad \left( \gamma_R \left[ \begin{matrix} \cos 2 C & -\sin 2 C & 0 \\[5pt] \sin 2 C & \cos 2 C & 0 \\[5pt] 0 & 0 & 1 \end{matrix} \right] {\gamma_R}^{-1} \right) \left[ \begin{matrix} x \\[5pt] y \\[5pt] z \end{matrix} \right] \end{aligned}
Proposition.
Continue notation as above.
• The angles $A$, $B$, and $C$ sum to more than $180^\circ$, that is, $A + B + C > \pi$.
• $V = \{ P, \, Q, \, R \} \subseteq S^2(\mathbb R)$ is a spherical triangle with angles $A$, $B$, and $C$.
• The transformation $\gamma_0$ is a rotation around $P$ by $(2 \pi/m)$ radians counterclockwise, the transformation $\gamma_1$ is a rotation around $Q$ by $(2 \pi/n)$ radians counterclockwise, and the transformation $\gamma_\infty$ is a rotation around $R$ by $(2 \pi/k)$ radians counterclockwise.
• The compositions ${\gamma_0}^m = {\gamma_1}^n = {\gamma_\infty}^k = \gamma_0 \, \gamma_1 \, \gamma_\infty = 1$ are the identity transformation.
Sketch of Proof: In order to verify that the triangle $V = \{ P, \, Q, \, R \}$ makes angles $A$, $B$, and $C$ on the sphere, we first compute the angles the vectors make with each other with respect to the origin $(0,0,0)$. Denoting $a$, $b$, and $c$ as the angles between $Q$ and $R$, $R$ and $P$, and $P$ and $Q$, respectively, we use inner products and cross products to verify that
\begin{aligned} \cos a & = \dfrac {Q \cdot R}{\| Q \| \, \| R \|} & = \dfrac {\cos A + \cos B \, \cos C}{\sin B \, \sin C} \\[5pt] \cos b & = \dfrac {R \cdot P}{\| R \| \, \| P \|} & =\dfrac {\cos B + \cos C \, \cos A}{\sin C \, \sin A} \\[5pt] \cos c & = \dfrac {P \cdot Q}{\| P \| \, \| Q \|} & = \dfrac {\cos C + \cos A \, \cos B}{\sin A \, \sin B} \end{aligned} \qquad \qquad \begin{aligned} \sin a & = \dfrac {\| Q \times R \|}{\| Q \| \, \| R \|} & = \dfrac {\sqrt{\delta}}{\sin B \, \sin C} \\[5pt] \sin b & = \dfrac {\| R \times P \|}{\| R \| \, \| P \|} & = \dfrac {\sqrt{\delta}}{\sin C \, \sin A} \\[5pt] \sin c & = \dfrac {\| P \times Q \|}{\| P \| \, \| Q \|} & = \dfrac {\sqrt{\delta}}{\sin A \, \sin B} \end{aligned}
Using the Spherical Law of Cosines, we see that
$\cos A = \dfrac {\cos a - \cos b \, \cos c}{\sin b \, \sin c}$, $\cos B = \dfrac {\cos b - \cos c \, \cos a}{\sin c \, \sin a}$, and $\cos C = \dfrac {\cos c - \cos a \, \cos b}{\sin a \, \sin b}$. That is, $V$ is indeed a triangle with angle $A$ at vertex $P$, angle $B$ at vertex $Q$, and angle $C$ at vertex $R$. $\square$
The triangles $V = \{ P, \, Q, \, R \}$ such that their angles $A + B + C > \pi$ while $m = \pi/A$, $n = \pi/B$, and $k = \pi/C$ are integers are called Möbius triangles, named in honor of the German mathematician August Ferdinand Möbius (1790 — 1868).
In the next lecture, we’ll focus more of the geometry in this case. And we’ll show some pretty pictures! |
# Category:Definitions/Order Isomorphisms
Let $(S, \preceq_1)$ and $(T, \preceq_2)$ be ordered sets.
Let $\phi: S \to T$ be a bijection such that:
$\phi: S \to T$ is order-preserving
$\phi^{-1}: T \to S$ is order-preserving.
Then $\phi$ is an order isomorphism. |
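For example (an illustrative instance, not from the original page): let $S = \mathbb{Z}$ and $T = 2\mathbb{Z}$, both ordered by the usual $\le$. The mapping $\phi(n) = 2n$ is a bijection, and both $\phi$ and $\phi^{-1}(m) = m/2$ are order-preserving, since $m \le n \iff 2m \le 2n$. Hence $\phi$ is an order isomorphism.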
# Kirby–Siebenmann class
In mathematics, the Kirby–Siebenmann class is an element of the fourth cohomology group
$e(M) \in H^4(M;\mathbf{Z}_2)$
which must vanish if a topological manifold M is to have a piecewise linear structure. It is named for Robion Kirby and Larry Siebenmann. |
# I-BERT
## Overview
The I-BERT model was proposed in I-BERT: Integer-only BERT Quantization by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It’s a quantized version of RoBERTa running inference up to four times faster.
The abstract from the paper is the following:
Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has been open-sourced.
This model was contributed by kssteven. The original code can be found here.
## IBertConfig
### class transformers.IBertConfig
( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 position_embedding_type = 'absolute' quant_mode = False force_dequant = 'none' **kwargs )
Parameters
• vocab_size (int, optional, defaults to 30522) — Vocabulary size of the I-BERT model. Defines the number of different tokens that can be represented by the input_ids passed when calling IBertModel
• hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
• num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
• num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
• intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
• hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
• hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
• attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
• max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
• type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling IBertModel
• initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
• layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
• position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.).
• quant_mode (bool, optional, defaults to False) — Whether to quantize the model or not.
• force_dequant (str, optional, defaults to "none") — Force dequantize specific nonlinear layers. Dequantized layers are then executed with full precision. "none", "gelu", "softmax", "layernorm" and "nonlinear" are supported. By default, it is set to "none", which does not dequantize any layers. Please specify "gelu", "softmax", or "layernorm" to dequantize GELU, Softmax, or LayerNorm, respectively. "nonlinear" will dequantize all nonlinear layers, i.e., GELU, Softmax, and LayerNorm.
This is the configuration class to store the configuration of a IBertModel. It is used to instantiate a I-BERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the IBERT kssteven/ibert-roberta-base architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
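A minimal configuration example in the usual Transformers pattern (illustrative; it follows the convention used for other models in the library):
>>> from transformers import IBertConfig, IBertModel
>>> # Initializing an I-BERT configuration with default (ibert-roberta-base style) values
>>> configuration = IBertConfig()
>>> # Initializing a model from the configuration
>>> model = IBertModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config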
## IBertModel
### class transformers.IBertModel
( config add_pooling_layer = True )
Parameters
• config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The bare I-BERT Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
#### forward
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using RobertaTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
• 0 corresponds to a sentence A token,
• 1 corresponds to a sentence B token.
What are token type IDs?
• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs.
• last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
• pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
The IBertModel forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import RobertaTokenizer, IBertModel
>>> import torch
>>> tokenizer = RobertaTokenizer.from_pretrained("kssteven/ibert-roberta-base")
>>> model = IBertModel.from_pretrained("kssteven/ibert-roberta-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
## IBertForMaskedLM
### class transformers.IBertForMaskedLM
( config )
Parameters
• config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model with a language modeling head on top.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
#### forward
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using RobertaTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
• 0 corresponds to a sentence A token,
• 1 corresponds to a sentence B token.
What are token type IDs?
• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
• labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
• kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated.
Returns
transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs.
• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss.
• logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The IBertForMaskedLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import RobertaTokenizer, IBertForMaskedLM
>>> import torch
>>> tokenizer = RobertaTokenizer.from_pretrained("kssteven/ibert-roberta-base")
>>> model = IBertForMaskedLM.from_pretrained("kssteven/ibert-roberta-base")
>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> # retrieve index of <mask>
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)
>>> tokenizer.decode(predicted_token_id)
>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
>>> outputs = model(**inputs, labels=labels)
>>> round(outputs.loss.item(), 2)
## IBertForSequenceClassification
### class transformers.IBertForSequenceClassification
( config )
Parameters
• config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
#### forward
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using RobertaTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details.
What are input IDs?
• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
• 0 corresponds to a sentence A token,
• 1 corresponds to a sentence B token.
What are token type IDs?
• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
• labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns
transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs.
• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
• logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The IBertForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import RobertaTokenizer, IBertForSequenceClassification
>>> tokenizer = RobertaTokenizer.from_pretrained("kssteven/ibert-roberta-base")
>>> model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
>>> # To train a model on num_labels classes, you can pass num_labels=num_labels to .from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base", num_labels=num_labels)
>>> labels = torch.tensor(1)
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
Example of multi-label classification:
>>> import torch
>>> from transformers import RobertaTokenizer, IBertForSequenceClassification
>>> tokenizer = RobertaTokenizer.from_pretrained("kssteven/ibert-roberta-base")
>>> model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
>>> # To train a model on num_labels classes, you can pass num_labels=num_labels to .from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = IBertForSequenceClassification.from_pretrained(
... "kssteven/ibert-roberta-base", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.nn.functional.one_hot(torch.tensor([predicted_class_id]), num_classes=num_labels).to(
... torch.float
... )
>>> loss = model(**inputs, labels=labels).loss
>>> loss.backward()
## IBertForMultipleChoice
### class transformers.IBertForMultipleChoice
( config )
Parameters
• config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
< >
( input_ids: typing.Optional[torch.LongTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
Parameters
• input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using RobertaTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
• attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
• 0 corresponds to a sentence A token,
• 1 corresponds to a sentence B token.
What are token type IDs?
• position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
• labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above)
Returns
transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs.
• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
• logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above).
Classification scores (before SoftMax).
• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The IBertForMultipleChoice forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import RobertaTokenizer, IBertForMultipleChoice
>>> import torch
>>> tokenizer = RobertaTokenizer.from_pretrained("kssteven/ibert-roberta-base")
>>> model = IBertForMultipleChoice.from_pretrained("kssteven/ibert-roberta-base")
>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1
>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits
## IBertForTokenClassification
### class transformers.IBertForTokenClassification
< >
( config )
Parameters
• config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
< >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using RobertaTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
• 0 corresponds to a sentence A token,
• 1 corresponds to a sentence B token.
What are token type IDs?
• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
• labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].
Returns
transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs.
• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss.
• logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The IBertForTokenClassification forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import RobertaTokenizer, IBertForTokenClassification
>>> import torch
>>> tokenizer = RobertaTokenizer.from_pretrained("kssteven/ibert-roberta-base")
>>> model = IBertForTokenClassification.from_pretrained("kssteven/ibert-roberta-base")
>>> inputs = tokenizer(
... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(-1)
>>> # Note that tokens are classified rather then input words which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> predicted_tokens_classes
>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
## IBertForQuestionAnswering
### class transformers.IBertForQuestionAnswering
< >
( config )
Parameters
• config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
I-BERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
#### forward
< >
( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters
• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using RobertaTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs?
• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:
• 0 corresponds to a sentence A token,
• 1 corresponds to a sentence B token.
What are token type IDs?
• position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].
What are position IDs?
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
• start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.
• end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.
Returns
transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs.
• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
• start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax).
• end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax).
• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The IBertForQuestionAnswering forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import RobertaTokenizer, IBertForQuestionAnswering
>>> import torch
>>> tokenizer = RobertaTokenizer.from_pretrained("kssteven/ibert-roberta-base")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> model = IBertForQuestionAnswering.from_pretrained("kssteven/ibert-roberta-base")
>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # target is "nice puppet"
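The example breaks off here. Following the pattern of the other examples above, the answer span would presumably be decoded from the argmax of the start and end logits, roughly as follows (a sketch, not verbatim from the original documentation):
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)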
# Relaxing the $\sigma_8$-tension through running vacuum in the Universe.
It has recently been shown that the class of running vacuum models (RVMs) has the capacity to fit the overall cosmological observations better than the concordance $\Lambda$CDM model, therefore supporting the possibility of dynamical dark energy (DE). Apart from the cosmic microwave background (CMB) anisotropies, the most crucial datasets involved are: i) baryonic acoustic oscillations (BAO), and ii) direct large scale structure (LSS) formation data. Analyses mainly focusing on CMB and with insufficient BAO+LSS input generally fail to capture the dynamical DE signature, whereas the few existing studies accounting for the wealth of known CMB+BAO+LSS data (see in particular Solà, Gómez-Valent & de Cruz Pérez 2015, 2017; and Zhao et al. 2017) do converge to the remarkable conclusion that dynamical DE might well be encoded in the current cosmological observations at a $3-4\sigma$ c.l. A decisive factor is the persistent $\sigma_8$-tension between the $\Lambda$CDM and the data. Because the issue is obviously pressing, we devote this work to explain how and why running vacuum in the expanding universe successfully relaxes the existing $\sigma_8$-tension and describes the LSS formation data significantly better than the $\Lambda$CDM.
Publisher URL: http://arxiv.org/abs/1711.00692
arXiv: 1711.00692v1
# Rayleigh-Sommerfeld Formula: Explanation of these terms?
Can somebody explain the physical meaning of the last exponential term, $e^{j2\pi f_x x + j2\pi f_y y}$, in the Rayleigh-Sommerfeld formula?
Please correct me if I'm wrong, but I understand that $\epsilon(f_x,f_y,0)$ (which is the Fourier transform) indicates the individual spatial frequencies and the second term indicates the frequency propagating a distance of $z_i$. What then does the third multiplicative term indicate?
That term is just the Fourier transform kernel, as stated in the book itself, this just gives you the inverse Fourier transform so that you obtain the field in spatial coordinates rather than frequencies. If you want a more geometrical understanding of the Fourier transformation, consider the following video https://www.youtube.com/watch?v=spUNpyF58BY, it has a very pedagogical approach.
• Are you saying the integral itself is the inverse fourier transform? I thought that it was merely an integral to sum up all the frequency terms – Goldname May 15 '18 at 15:38
• Exactly, the integral itself IS the inverse Fourier transform. – ohneVal May 15 '18 at 15:48
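For readers who want to see the decomposition in action, here is a minimal numerical sketch of the angular-spectrum picture described in the answer (an illustration added here, not from the original thread; the function name, grid parameters, and the choice to drop evanescent components are assumptions):
import numpy as np

def angular_spectrum_propagate(E0, dx, lam, z):
    # Propagate a sampled complex field E0 a distance z via the angular spectrum.
    N = E0.shape[0]
    fx = np.fft.fftfreq(N, d=dx)          # spatial frequencies f_x (and f_y)
    FX, FY = np.meshgrid(fx, fx)
    eps = np.fft.fft2(E0)                 # epsilon(f_x, f_y, 0): the plane-wave amplitudes
    arg = 1.0 / lam**2 - FX**2 - FY**2    # (k_z / 2 pi)^2 for each plane wave
    # Propagation phase for each spatial frequency; evanescent components are dropped.
    H = np.where(arg >= 0, np.exp(2j * np.pi * np.sqrt(np.abs(arg)) * z), 0)
    # The final inverse FFT is exactly the integral over exp(j 2 pi (f_x x + f_y y)):
    # it recombines the propagated plane waves into the field at distance z.
    return np.fft.ifft2(eps * H)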
For a constant hydraulic stress on an object, the fractional change in the object's volume ($\Delta V/V$) and its bulk modulus ($K$) are related as
(a) $\frac{\Delta V}{V} \propto K$
(b) $\frac{\Delta V}{V} \propto \frac{1}{K}$
(c) $\frac{\Delta V}{V} \propto K^2$
(d) $\frac{\Delta V}{V} \propto K^{-2}$
Volume 414 - 41st International Conference on High Energy physics (ICHEP2022) - Higgs Physics
Impact of Advances in Detector Techniques on Higgs Measurements at Future Higgs Factories
U. Einhaus*, B. Dudar, J. List, Y. Radkhorrami and J. Torndal
Abstract
While the particle physics community is eagerly waiting for a positive sign for the construction of the next energy frontier collider, developments continue to advance the detector capabilities.
New methods and algorithms are being implemented in order to exploit the precious collisions at a Future Higgs Factory (FHF) as well as possible, informing at the same time, which detector aspects are of particular importance or in fact currently limiting.
In this work, three new event analysis methods are briefly introduced and put into context of hardware development for an FHF detector.
While they use data from a large Geant4-based detailed MC production of the International Large Detector (ILD) at the proposed International Linear Collider (ILC) at an e$^+$e$^-$ center-of-mass energy of 250 GeV, the conclusions are applicable to any FHF.
DOI: https://doi.org/10.22323/1.414.0538
# A special case of Young's inequality for convolutions
The problem:
Suppose $f,g\in L^1(\mathbb{R})$. Let $x\in \mathbb{R}$ and $\phi_x(y) = f(y)g(x-y)$. Show that for almost all $x$, $\phi_x$ is integrable. For such $x$ let $\psi(x) = \int_{-\infty}^\infty \phi_x(y)dy$ and let $\psi(x) = 0$ if $\phi_x$ is not integrable. Show $$\int_{-\infty}^\infty |\psi(x)|dx \leq \int_{-\infty}^\infty |f(x)|dx\int_{-\infty}^\infty |g(x)|dx.$$
What I've done:
(Thought) To show that the integral of $f(x-y)g(y)$ is finite (i.e., $f(x-y)g(y) \in L^1(\mathbb{R})$) we know H$\ddot{\text{o}}$lder's inequality tells us $$\| f(y)g(x-y)\|_1 \leq \|f(y)\|_p\|g(x-y)\|_q$$ for all $1\leq p,q\leq \infty$ with $\frac{1}{p} + \frac{1}{q} = 1$. In particular, if we pick $p = 1$ we know $\|f(y)\|_1$ is finite, and as $g\in L^1(\mathbb{R})$ we know it must have finite $L^\infty$ norm, so $\| f(y)g(x-y)\| < \infty$, and hence $f(y)g(x-y)$ is integrable almost everywhere.
Let $E_{y,x} = \{y\in \mathbb{R}: \phi_x \text{ is integrable }\}$. Then by Fubini's theorem we have \begin{align*} \int_{-\infty}^\infty |\psi(x)|dx &= \int_{-\infty}^\infty \left|\int_{E_{y,x}} \phi_x(y) dy\right| dx\newline &= \int_{-\infty}^\infty \left|\int_{E_{y,x}} f(y)g(x-y) dy\right| dx\newline &= \int_{E_{y,x}} \left|\int_{-\infty}^\infty f(y)g(x-y) dx\right| dy\newline &= \int_{E_{y,x}} |f(y)|\left|\int_{-\infty}^\infty g(x-y) dx\right| dy\newline &\leq \int_{E_{y,x}} |f(y)|\int_{-\infty}^\infty |g(x-y)| dx dy\newline &= \int_{E_{y,x}} |f(y)|\int_{-\infty}^\infty |g(x)| dx dy\newline &= \int_{E_{y,x}} |f(y)|\|g\|_1 dy\newline &= \|f\|_1\cdot \|g\|_1, \end{align*} Hence $$\int_{-\infty}^\infty |\psi(x)|dx \leq \int_{-\infty}^\infty |f(x)|dx\int_{-\infty}^\infty |g(x)|dx.$$
My issue is the area where I claim we can replace $g(x-y)$ with $g(x)$. It's true for any numerical $y$, but inside the integral, with respect to $x$, $y$ is really a variable, so it seems fishy...
Thanks!
-
This looks fine for me. The only thing that's missing for me is that you don't explicitly say what you know about $E_{y,x}$ and why (in particular the first equality is somewhat unjustified in your exposition). – t.b. Dec 7 '11 at 5:30
@t.b. Hm.. I guess all I have shown at this point, is that $\psi(x) \in L^1(E_{y,x})$, but $E_{y,x}$ could be really small. I suppose Holder's inequality on $\int f(x-y)g(y)dy$ shows the integral is finite almost everywhere, so that $E_{x,y}$ is almost all of $\mathbb{R}$. – Alex Dec 7 '11 at 5:46
How do you apply Hölder, exactly? Can you include that in your question? – t.b. Dec 7 '11 at 5:49
@t.b. Sure! I suppose it may also be worth actually stating I'm using $p = q = 1$ for H$\ddot{\text{o}}$lder's inequality. – Alex Dec 7 '11 at 6:04
Wait, that won't work as $1/1 + 1/1 \neq 1$... I'll think about this some more... – Alex Dec 7 '11 at 6:08
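For reference, the standard way to make the argument rigorous without Hölder is Tonelli's theorem (a sketch, not from the thread): since $(x,y) \mapsto |f(y)g(x-y)|$ is nonnegative and measurable, $$\int_{-\infty}^\infty \int_{-\infty}^\infty |f(y)g(x-y)|\,dy\,dx = \int_{-\infty}^\infty |f(y)| \int_{-\infty}^\infty |g(x-y)|\,dx\,dy = \|f\|_1 \|g\|_1 < \infty,$$ so $\phi_x$ is integrable for almost every $x$, and the desired inequality follows by moving the absolute value inside the inner integral defining $\psi$.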
# Optimization of a Dissipative Quantum Gate¶
[1]:
# NBVAL_IGNORE_OUTPUT
%load_ext watermark
import os
import qutip
import numpy as np
import scipy
import matplotlib
import matplotlib.pylab as plt
import krotov
import copy
from functools import partial
from itertools import product
%watermark -v --iversions
qutip 4.4.1
matplotlib 3.1.2
scipy 1.3.1
krotov 1.0.0
numpy 1.17.2
matplotlib.pylab 1.17.2
CPython 3.7.3
IPython 7.10.2
This example illustrates the optimization for a quantum gate in an open quantum system, where the dynamics is governed by the Liouville-von Neumann equation. A naive extension of a gate optimization to Liouville space would seem to imply that it is necessary to optimize over the full basis of Liouville space (16 matrices, for a two-qubit gate). However, Goerz et al., New J. Phys. 16, 055012 (2014) showed that this is not necessary: a set of 3 density matrices is sufficient to track the optimization.
This example reproduces the “Example II” from that paper, considering the optimization towards a $$\sqrt{\text{iSWAP}}$$ two-qubit gate on a system of two transmons with a shared transmission line resonator.
Note: This notebook uses some parallelization features (qutip.parallel_map/multiprocessing.Pool). Unfortunately, on Windows, multiprocessing.Pool does not work correctly for functions defined in a Jupyter notebook (due to the spawn method, https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods, being used on Windows instead of Unix-fork; see also https://stackoverflow.com/questions/45719956). We therefore replace parallel_map with serial_map when running on Windows.
[2]:
import sys
if sys.platform == 'win32':
from qutip import serial_map as parallel_map
else:
from qutip import parallel_map
## The two-transmon system¶
We consider the Hamiltonian from Eq (17) in the paper, in the rotating wave approximation, together with spontaneous decay and dephasing of each qubit. Altogether, we define the Liouvillian as follows:
[3]:
def two_qubit_transmon_liouvillian(
ω1, ω2, ωd, δ1, δ2, J, q1T1, q2T1, q1T2, q2T2, T, Omega, n_qubit
):
from qutip import tensor, identity, destroy
b1 = tensor(identity(n_qubit), destroy(n_qubit))
b2 = tensor(destroy(n_qubit), identity(n_qubit))
H0 = (
(ω1 - ωd - δ1 / 2) * b1.dag() * b1
+ (δ1 / 2) * b1.dag() * b1 * b1.dag() * b1
+ (ω2 - ωd - δ2 / 2) * b2.dag() * b2
+ (δ2 / 2) * b2.dag() * b2 * b2.dag() * b2
+ J * (b1.dag() * b2 + b1 * b2.dag())
)
H1_re = 0.5 * (b1 + b1.dag() + b2 + b2.dag()) # 0.5 is due to RWA
H1_im = 0.5j * (b1.dag() - b1 + b2.dag() - b2)
H = [H0, [H1_re, Omega], [H1_im, ZeroPulse]]
A1 = np.sqrt(1 / q1T1) * b1 # decay of qubit 1
A2 = np.sqrt(1 / q2T1) * b2 # decay of qubit 2
A3 = np.sqrt(1 / q1T2) * b1.dag() * b1 # dephasing of qubit 1
A4 = np.sqrt(1 / q2T2) * b2.dag() * b2 # dephasing of qubit 2
L = krotov.objectives.liouvillian(H, c_ops=[A1, A2, A3, A4])
return L
We will use internal units GHz and ns. Values in GHz contain an implicit factor 2π, and MHz and μs are converted to GHz and ns, respectively:
[4]:
GHz = 2 * np.pi
MHz = 1e-3 * GHz
ns = 1
μs = 1000 * ns
This implicit factor $$2 \pi$$ is because frequencies ($$\nu$$) convert to energies as $$E = h \nu$$, but our propagation routines assume a unit $$\hbar = 1$$ for energies. Thus, the factor $$h / \hbar = 2 \pi$$.
We will use the same parameters as those given in Table 2 of the paper:
[5]:
ω1 = 4.3796 * GHz # qubit frequency 1
ω2 = 4.6137 * GHz # qubit frequency 2
ωd = 4.4985 * GHz # drive frequency
δ1 = -239.3 * MHz # anharmonicity 1
δ2 = -242.8 * MHz # anharmonicity 2
J = -2.3 * MHz # effective qubit-qubit coupling
q1T1 = 38.0 * μs # decay time for qubit 1
q2T1 = 32.0 * μs # decay time for qubit 2
q1T2 = 29.5 * μs # dephasing time for qubit 1
q2T2 = 16.0 * μs # dephasing time for qubit 2
T = 400 * ns # gate duration
[6]:
tlist = np.linspace(0, T, 2000)
While in the original paper, each transmon was cut off at 6 levels, here we truncate at 5 levels. This makes the propagation faster, while potentially introducing a slightly larger truncation error.
[7]:
n_qubit = 5 # number of transmon levels to consider
In the Liouvillian, note the control being split up into a separate real and imaginary part. As a guess control we use a real-valued constant pulse with an amplitude of 35 MHz, acting over 400 ns, with a switch-on and switch-off in the first 20 ns (see plot below)
[8]:
def Omega(t, args):
E0 = 35.0 * MHz
return E0 * krotov.shapes.flattop(t, 0, T, t_rise=(20 * ns), func='sinsq')
The imaginary part starts out as zero:
[9]:
def ZeroPulse(t, args):
return 0.0
We can now instantiate the Liouvillian:
[10]:
L = two_qubit_transmon_liouvillian(
ω1, ω2, ωd, δ1, δ2, J, q1T1, q2T1, q1T2, q2T2, T, Omega, n_qubit
)
The guess pulse looks as follows:
[11]:
def plot_pulse(pulse, tlist, xlimit=None):
fig, ax = plt.subplots()
if callable(pulse):
pulse = np.array([pulse(t, None) for t in tlist])
ax.plot(tlist, pulse/MHz)
ax.set_xlabel('time (ns)')
ax.set_ylabel('pulse amplitude (MHz)')
if xlimit is not None:
ax.set_xlim(xlimit)
plt.show(fig)
[12]:
plot_pulse(L[1][1], tlist)
## Optimization objectives¶
Our target gate is $$\Op{O} = \sqrt{\text{iSWAP}}$$:
[13]:
gate = qutip.gates.sqrtiswap()
[14]:
# NBVAL_IGNORE_OUTPUT
gate
[14]:
Quantum object: dims = [[2, 2], [2, 2]], shape = (4, 4), type = oper, isherm = False\begin{equation*}\left(\begin{array}{*{11}c}1.0 & 0.0 & 0.0 & 0.0\\0.0 & 0.707 & 0.707j & 0.0\\0.0 & 0.707j & 0.707 & 0.0\\0.0 & 0.0 & 0.0 & 1.0\\\end{array}\right)\end{equation*}
The key idea explored in the paper is that a set of three density matrices is sufficient to track the optimization
\begin{split}\begin{align} \Op{\rho}_1 &= \sum_{i=1}^{d} \frac{2 (d-i+1)}{d (d+1)} \ketbra{i}{i} \\ \Op{\rho}_2 &= \sum_{i,j=1}^{d} \frac{1}{d} \ketbra{i}{j} \\ \Op{\rho}_3 &= \sum_{i=1}^{d} \frac{1}{d} \ketbra{i}{i} \end{align}\end{split}
In our case, $$d=4$$ for a two-qubit gate, and the $$\ket{i}$$, $$\ket{j}$$ are the canonical basis states $$\ket{00}$$, $$\ket{01}$$, $$\ket{10}$$, $$\ket{11}$$
[15]:
ket00 = qutip.ket((0, 0), dim=(n_qubit, n_qubit))
ket01 = qutip.ket((0, 1), dim=(n_qubit, n_qubit))
ket10 = qutip.ket((1, 0), dim=(n_qubit, n_qubit))
ket11 = qutip.ket((1, 1), dim=(n_qubit, n_qubit))
basis = [ket00, ket01, ket10, ket11]
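For illustration, these three density matrices could be constructed explicitly from the logical basis (a sketch; as noted below, krotov builds them internally when liouville_states_set='3states' is passed, so this is not needed for the optimization):
d = len(basis)  # d = 4 logical basis states
# rho_1: diagonal matrix with decreasing weights 2(d-i+1)/(d(d+1)) for i = 1..d
rho1 = sum((2 * (d - i) / (d * (d + 1))) * psi * psi.dag() for i, psi in enumerate(basis))
# rho_2: uniform coherences 1/d between every pair of basis states
rho2 = sum((1 / d) * psi_i * psi_j.dag() for psi_i in basis for psi_j in basis)
# rho_3: the maximally mixed state on the logical subspace
rho3 = sum((1 / d) * psi * psi.dag() for psi in basis)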
The three density matrices play different roles in the optimization, and, as shown in the paper, convergence may improve significantly by weighting the states relative to each other. For this example, we place a strong emphasis on the optimization $$\Op{\rho}_1 \rightarrow \Op{O}\Op{\rho}_1\Op{O}^\dagger$$, by a factor of 20. This reflects that the hardest part of the optimization is identifying the basis in which the gate is diagonal. We will be using the real-part functional ($$J_{T,\text{re}}$$) to evaluate the success of $$\Op{\rho}_i \rightarrow \Op{O}\Op{\rho}_i\Op{O}^\dagger$$. Because $$\Op{\rho}_1$$ and $$\Op{\rho}_3$$ are mixed states, the Hilbert-Schmidt overlap will take values smaller than one in the optimal case. To compensate, we divide the weights by the purity of the respective states.
[16]:
weights = np.array([20, 1, 1], dtype=np.float64)
weights *= len(weights) / np.sum(weights) # manual normalization
weights /= np.array([0.3, 1.0, 0.25]) # purities
The krotov.gate_objectives routine can initialize the density matrices $$\Op{\rho}_1$$, $$\Op{\rho}_2$$, $$\Op{\rho}_3$$ automatically, via the parameter liouville_states_set. Alternatively, we could also use the 'full' basis of 16 matrices or the extended set of $$d+1 = 5$$ pure-state density matrices.
[17]:
objectives = krotov.gate_objectives(
basis,
gate,
L,
liouville_states_set='3states',
weights=weights,
normalize_weights=False,
)
objectives
[17]:
[Objective[ρ₀[5⊗5,5⊗5] to ρ₁[5⊗5,5⊗5] via [𝓛₀[[5⊗5,5⊗5],[5⊗5,5⊗5]], [𝓛₁[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₁(t)], [𝓛₂[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₂(t)]]],
Objective[ρ₂[5⊗5,5⊗5] to ρ₃[5⊗5,5⊗5] via [𝓛₀[[5⊗5,5⊗5],[5⊗5,5⊗5]], [𝓛₁[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₁(t)], [𝓛₂[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₂(t)]]],
Objective[ρ₄[5⊗5,5⊗5] to ρ₅[5⊗5,5⊗5] via [𝓛₀[[5⊗5,5⊗5],[5⊗5,5⊗5]], [𝓛₁[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₁(t)], [𝓛₂[[5⊗5,5⊗5],[5⊗5,5⊗5]], u₂(t)]]]]
The use of normalize_weights=False is because we have included the purities in the weights, as discussed above.
## Dynamics under the Guess Pulse¶
For numerical efficiency, both for the analysis of the guess and of the optimized controls, we will use a stateful density matrix propagator.
A true physical measure for the success of the optimization is the “average gate fidelity”. Evaluating this fidelity requires simulating the dynamics of the full basis of Liouville space:
[18]:
full_liouville_basis = [psi * phi.dag() for (psi, phi) in product(basis, basis)]
We propagate these under the guess control:
[19]:
def propagate_guess(initial_state):
return objectives[0].mesolve(
tlist,
rho0=initial_state,
).states[-1]
[20]:
full_states_T = parallel_map(
propagate_guess, values=full_liouville_basis,
)
[21]:
print("F_avg = %.3f" % krotov.functionals.F_avg(full_states_T, basis, gate))
F_avg = 0.344
Note that we use $$J_{T,\text{re}}$$, not $$F_{\text{avg}}$$, to steer the optimization, as the Krotov boundary condition $$\frac{\partial F_{\text{avg}}}{\partial \rho^\dagger}$$ would be non-trivial.
Before doing the optimization, we can look at the population dynamics under the guess pulse. For this purpose we propagate the pure-state density matrices corresponding to the canonical logical basis in Hilbert space, and obtain the expectation values for the projection onto these same states:
[22]:
rho00, rho01, rho10, rho11 = [qutip.ket2dm(psi) for psi in basis]
[23]:
def propagate_guess_for_expvals(initial_state):
return objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
e_ops=[rho00, rho01, rho10, rho11]
)
[24]:
def plot_population_dynamics(dyn00, dyn01, dyn10, dyn11):
fig, axs = plt.subplots(ncols=2, nrows=2, figsize=(16, 8))
axs = np.ndarray.flatten(axs)
labels = ['00', '01', '10', '11']
dyns = [dyn00, dyn01, dyn10, dyn11]
for (ax, dyn, title) in zip(axs, dyns, labels):
for (i, label) in enumerate(labels):
ax.plot(dyn.times, dyn.expect[i], label=label)
ax.legend()
ax.set_title(title)
plt.show(fig)
[25]:
plot_population_dynamics(
*parallel_map(
propagate_guess_for_expvals,
values=[rho00, rho01, rho10, rho11],
)
)
## Optimization¶
We now define the optimization parameters for the controls, the Krotov step size $$\lambda_a$$ and the update-shape that will ensure that the pulse switch-on and switch-off stays intact.
[26]:
pulse_options = {
L[i][1]: dict(
lambda_a=1.0,
update_shape=partial(
krotov.shapes.flattop, t_start=0, t_stop=T, t_rise=(20 * ns))
)
for i in [1, 2]
}
Then we run the optimization. For demonstration, the next cell performs only the first few iterations; the pre-computed result loaded further below ran for 2000 iterations.
[27]:
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(reentrant=True),
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_re),
iter_stop=3,
)
iter. J_T ∑∫gₐ(t)dt J ΔJ_T ΔJ secs
0 1.22e-01 0.00e+00 1.22e-01 n/a n/a 9
1 7.49e-02 2.26e-02 9.75e-02 -4.67e-02 -2.41e-02 30
2 7.41e-02 3.98e-04 7.45e-02 -8.12e-04 -4.14e-04 33
3 7.33e-02 3.70e-04 7.37e-02 -7.55e-04 -3.85e-04 36
(this takes a while)…
[28]:
dumpfile = "./3states_opt_result.dump"
if os.path.isfile(dumpfile):
opt_result = krotov.result.Result.load(dumpfile, objectives)
else:
opt_result = krotov.optimize_pulses(
objectives,
pulse_options,
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(reentrant=True),
chi_constructor=krotov.functionals.chis_re,
info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_re),
iter_stop=5,
continue_from=opt_result
)
opt_result.dump(dumpfile)
[29]:
opt_result
[29]:
Krotov Optimization Result
--------------------------
- Started at 2019-02-25 00:43:31
- Number of objectives: 3
- Number of iterations: 2000
- Reason for termination: Reached 2000 iterations
- Ended at 2019-02-25 23:19:34 (22:36:03)
## Optimization result¶
[30]:
optimized_control = opt_result.optimized_controls[0] + 1j * opt_result.optimized_controls[1]
[31]:
plot_pulse(np.abs(optimized_control), tlist)
[32]:
def propagate_opt(initial_state):
return opt_result.optimized_objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
).states[-1]
[33]:
opt_full_states_T = parallel_map(
propagate_opt, values=full_liouville_basis,
)
[34]:
print("F_avg = %.3f" % krotov.functionals.F_avg(opt_full_states_T, basis, gate))
F_avg = 0.977
[35]:
def propagate_opt_for_expvals(initial_state):
return opt_result.optimized_objectives[0].propagate(
tlist,
propagator=krotov.propagators.DensityMatrixODEPropagator(),
rho0=initial_state,
e_ops=[rho00, rho01, rho10, rho11]
)
Plotting the population dynamics, we see the expected behavior for the $$\sqrt{\text{iSWAP}}$$ gate.
[36]:
plot_population_dynamics(
*parallel_map(
propagate_opt_for_expvals,
values=[rho00, rho01, rho10, rho11],
)
)
[37]:
def plot_convergence(result):
fig, ax = plt.subplots()
ax.semilogy(result.iters, result.info_vals)
ax.set_xlabel('OCT iteration')
ax.set_ylabel(r'optimization error $J_{T, re}$')
plt.show(fig)
[38]:
plot_convergence(opt_result)
# All
## At the limits of criticality-based quantum metrology: apparent super-Heisenberg scaling revisited. (arXiv:1702.05660v3 [quant-ph] UPDATED)
We address the question whether the super-Heisenberg scaling for quantum
estimation is realizable. We unify the results of two approaches. In the first
one, the original system is compared with its copy rotated by the parameter
dependent dynamics. If the parameter is coupled to the one-body part of the
Hamiltonian the precision of its estimation is known to scale at most as
$N^{-1}$ (Heisenberg scaling) in terms of the number of elementary subsystems
used, $N$. The second approach considers fidelity at criticality often leading
## Semiclassical approach to finite temperature quantum annealing with trapped ions. (arXiv:1802.06397v1 [quant-ph])
Recently it has been demonstrated that an ensemble of trapped ions may serve
as a quantum annealer for the number-partitioning problem [Nature Comm. DOI:
10.1038/ncomms11524]. This hard computational problem may be addressed
employing a tunable spin glass architecture. Following the proposal of the
trapped ions annealer, we study here its robustness against thermal effects,
that is, we investigate the role played by thermal phonons. For the efficient
description of the system, we use a semiclassical approach, and benchmark it
## Quench Dynamics of Finite Bosonic Ensembles in Optical Lattices with Spatially Modulated Interactions. (arXiv:1802.06693v1 [cond-mat.quant-gas])
The nonequilibrium quantum dynamics of few boson ensembles which experience a
spatially modulated interaction strength and are confined in finite optical
lattices is investigated. Performing quenches either on the wavevector or the
phase of the interaction profile an enhanced imbalance of the interatomic
repulsion between distinct spatial regions of the lattice is induced. Following
both quench protocols triggers various tunneling channels and a rich excitation
## Nonlinear Quantum Rabi Model in Trapped Ions. (arXiv:1709.07378v2 [quant-ph] UPDATED)
We study the nonlinear dynamics of trapped-ion models far away from the
Lamb-Dicke regime. This nonlinearity induces a sideband cooling blockade,
stopping the propagation of quantum information along the Hilbert space of the
Jaynes-Cummings and quantum Rabi models. We compare the linear and nonlinear
cases of these models in the ultrastrong and deep strong coupling regimes.
Moreover, we propose a scheme that simulates the nonlinear quantum Rabi model
in all coupling regimes. This can be done via off-resonant nonlinear red and
## Systematic elimination of Stokes divergences emanating from complex phase space caustics. (arXiv:1802.06407v1 [quant-ph])
Stokes phenomenon refers to the fact that the asymptotic expansion of complex
functions can differ in different regions of the complex plane, and that beyond
the so-called Stokes lines has an unphysical divergence. An important special
case is when the Stokes lines emanate from phase space caustics of a complex
trajectory manifold. In this case, symmetry determines that to second order
there is a double coverage of the space, one portion of which is unphysical.
## Quantum simulation of lattice gauge theories using Wilson fermions. (arXiv:1802.06704v1 [cond-mat.quant-gas])
dynamics of lattice gauge theories, in particular in regimes that are difficult
to compute on classical computers. Future progress towards scalable quantum
simulation of lattice gauge theories, however, hinges crucially on the
efficient use of experimental resources. As we argue in this work, due to the
fundamental non-uniqueness of discretizing the relativistic Dirac Hamiltonian,
## Coherent single-atom superradiance. (arXiv:1705.09136v3 [quant-ph] UPDATED)
Quantum effects, prevalent in the microscopic scale, are generally elusive in
macroscopic systems due to dissipation and decoherence. Quantum phenomena in
large systems emerge only when particles are strongly correlated as in
superconductors and superfluids. Cooperative interaction of correlated atoms
radiation phenomenon, exhibiting novel physics such as quantum Dicke phase and
ultranarrow linewidth for optical clocks. Recent researches to imprint atomic
## Superthermal photon bunching in terms of simple probability distributions. (arXiv:1802.06417v1 [physics.optics])
We analyze the second-order photon autocorrelation function $g^{(2)}$ with
respect to the photon probability distribution and discuss the generic features
of a distribution that result in superthermal photon bunching ($g^{(2)}>2$).
Superthermal photon bunching has been reported for a number of optical
microcavity systems that exhibit processes like superradiance or mode
competition. We show that a superthermal photon number distribution cannot be
constructed from the principle of maximum entropy, if only the intensity and
## Cavity-enhanced spectroscopy of a few-ion ensemble in Eu3+:Y2O3. (arXiv:1802.06709v1 [physics.optics])
We report on the coupling of the emission from a single europium-doped
nanocrystal to a fiber-based microcavity under cryogenic conditions. As a first
step, we study the sample properties and observe a strong correlation between
All Questions
0answers
13 views
Compactness of adelic quotients for unipotent groups over global fields
Let $K$ be a global field, $\mathbb{A}_K$ the ring of adeles, and $U$ a unipotent algebraic group over $K$. Why is $U(\mathbb{A}_K)/U(K)$ endowed with the quotient topology compact?
0answers
16 views
Is this problem on weighted bipartite graph solvable in polynomial time or it is NP-Complete
I encounter this problem recently and I want to know whether it is NP-Complete or solvable in polynomial time: Given a weighted bipartite graph $G = (V, E)$ where $V$ can be partitioned into two sets ...
0answers
26 views
Laplacian with singular potential
Let $S$ be a $2$-dimensional sphere. Let $p$ be a point in $S$. Let $L$ be a second order elliptic partial differential operator with smooth coefficients defined over the complement of $p$. Near $p$, ...
0answers
27 views
Lipschitz function with somewhere dense image
Let $Q=[-1,1]^2$ denote the unit square and let $f:Q\to Q$ be a Lipschitz function such that for any ball $B(a,r)\subset Q$ with radius $r$, the width of the image $f(B(a,r))$ is at least $cr$ for ...
0answers
26 views
Lower Bound Omega Notation [on hold]
I have to prove that some number $S$ is bigger than $\Omega(|V|)$, where |V| is the number of vertices. I read the definition of asimptotic notations, but I am still confused with the examples. Fot ...
1answer
145 views
Is there a straightedge and compass construction of incommensurables in the hyperbolic plane?
In other words, given a segment in the hyperbolic plane is there a straightedge and compass construction of a segment incommensurable with it? In the Euclidean plane one can take the diagonal of the ...
1answer
182 views
A question on degree 4 binary forms
Suppose that we have a binary form $f(x,y) \in \mathbb{Z}[x,y]$ of degree 4, and that we explicitly have $$\displaystyle f(x,y) = a_0 x^4 + a_1 x^3 y + a_2 x^2 y^2 + a_3 xy^3 + y^4,$$ so that $(0,1)$ ...
0answers
138 views
Does $S^4$ have a “symplecto-homeomorphic” structure?
The 4-sphere $S^4$ cannot be a symplectic manifold. In particular, it does not admit an atlas whose transition maps are symplectomorphisms $(\mathbb{R}^4,\omega_0)\to(\mathbb{R}^4,\omega_0)$. A ...
0answers
120 views
Learning math from the very beginning with no previous knowledge [on hold]
I didn't do any math like calculus, functions, vectors, etc, not even in high school. I want to build my math knowledge up from the ground up. A friend recommended that I start with Principia ...
0answers
32 views
L2 norm of a M-Whittaker function
Let $M_{\kappa,\mu}(z)$ be the Whittaker function, as defined here http://en.wikipedia.org/wiki/Whittaker_function. Does any one know the evaluation of the following integral? ...
0answers
42 views
on reductive monoids which are gorenstein
Let $M$ a reductive monoid, i.e. a integral normal affine scheme, which is a monoid whose group of units is a connected reductive group. By Rittatore ...
0answers
69 views
Sum or difference of modulus of holomorphic functions [on hold]
Assume that $f$ and $g$ are two holomorphic functions defined in the unit disk. If $$|f|^2-|g|^2\equiv 1$$ or $$|f|^2+|g|^2\equiv 1,$$ then it seems that $f$ and $g$ are constants. How to prove this.
3answers
217 views
Does this property of a first-order structure imply categoricity?
Let $\mathfrak{A}$ be a first-order structure over a relational language and let $\kappa$ be an infinite cardinal. Lets say that $\mathfrak{A}$ has the $\kappa$-property if for every structure ...
0answers
35 views
Possible ways to create a graph representation from a distance matrix (through approximation)
Forgive me, Im not math professional, but a computer scientist at the beginning of my base research from my thesis, so bare with me if I miss something blatantly obvious. I have a Euclidean distance ...
0answers
127 views
Theory of mnemonics [on hold]
Even for the typical most skilled (human) number theorist it is hard to reproduce only the first 10 digits of $\pi$ in moderate speed (without physically reading them off). On the other hand there ...
0answers
45 views
Is there any nonnegative bounded function satisfying the following property? [on hold]
Is there a smooth funtion $f(r)$, $r\geq 0$, satisfying the following property: $0\leq f(r) \leq c$, $\int^{\infty}_{r_0}\frac{f(r)}{r}dr<\infty$ for some $r_0>0$, and there exists an sequence ...
1answer
295 views
Group structure on an arbitrary completely regular topological space that makes $(x,y)\mapsto xy^{-1}$ continuous at $(1,1)$
Let $(G,\mathcal T)$ be a completely regular topological space. Is there a group structure on $G$ such that the function $$f:G\times G\to G$$ $$f(x,y)=xy^{-1}$$ is continuous at $(1,1)$?
0answers
45 views
Why does $\pi_t$ preserve pullbacks in this special case?
Let $X$ be a fibrant pointed cosimplicial space. Following Bousfield-Kan, let $\text{lim}^{\partial \Delta_{n+1}} X = M^{n}X$ be the nth matching object of $X$. One can then show that there is a ...
1answer
255 views
A “good scale” that is not really a scale
I don't know much about singular cardinal combinatorics, so I apologize in advance if I write something that is wrong or looks funny. First let me recall some basic definitions. Let $\lambda$ be a ...
0answers
28 views
Error on parity bits of Reed-Solomon error correction code [migrated]
I'm trying to figure this out but it seems never to be covered in articles explaining Reed-Solomon codes. If I have a string with 64 characters (bytes) and 4 parity bytes for error checking and ...
2answers
82 views
Which criteria guarantee an orthogonal circuit in $\mathbb R^3$ to be rigid?
For $n\ge4$, define an orthogonal circuit or O-circuit as a closed circuit of $n$ unit segments in $\mathbb R^3$ such that any two neighboring segments form a right angle. (Physically this could be ...
# Observations of excitation and damping of transversal oscillation in coronal loops by AIA/SDO
A. Abedini
The excitation and damping of transversal coronal loop oscillations, and the quantitative relations between damping time, damping quality (damping time per period), oscillation amplitude, the dissipation mechanism and the wake phenomenon, are investigated. The time series observed with the \textit{Atmospheric Imaging Assembly} (AIA) telescope on NASA’s \textit{Solar Dynamics Observatory} (SDO) satellite on 2015 March 2, consisting of 400 consecutive images with 12-second cadence in the 171 $\rm{{\AA}}$ pass band, is analyzed for evidence of transversal oscillations along the coronal loops with the Lomb-Scargle periodogram. In this analysis, signatures of rapidly damped transversal coronal loop oscillations were found with dominant oscillation periods in the range $\rm{P=12.25-15.80}$ minutes. The damping times and damping qualities of the transversal coronal loop oscillations at the dominant oscillation periods are estimated to lie in the ranges $\rm{\tau_d=11.76-21.46}$ minutes and $\rm{\tau_d/P=0.86-1.49}$, respectively. The observational results of this analysis show that the damping qualities decrease slowly with increasing oscillation amplitude, while the periods of oscillation are not a sensitive function of the amplitude. The order of magnitude of the damping qualities and damping times is in good agreement with previous findings and with the theoretical prediction for the damping of kink-mode oscillations by the dissipation mechanism. Furthermore, the oscillations of the loop segments attenuate with time roughly as $t^{-\alpha}$, where the magnitude of $\alpha$ for 30 different segments ranges from 0.51 to 0.75.
Original Article: http://arxiv.org/abs/1801.09217 |
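As a rough illustration of the kind of period analysis described in the abstract (hypothetical numbers chosen to mimic the 12-second AIA cadence and the reported period range; this is not the authors' actual pipeline):
import numpy as np
from scipy.signal import lombscargle

t = 12.0 * np.arange(400)                       # 400 frames at 12 s cadence [s]
P_true, tau_true = 14.0 * 60, 16.0 * 60         # assumed period and damping time [s]
y = np.exp(-t / tau_true) * np.sin(2 * np.pi * t / P_true)
y += 0.1 * np.random.default_rng(1).normal(size=t.size)   # observational noise
periods = np.linspace(5 * 60, 30 * 60, 1000)    # trial periods: 5 to 30 minutes
power = lombscargle(t, y - y.mean(), 2 * np.pi / periods)  # angular frequencies
print("dominant period: %.2f min" % (periods[power.argmax()] / 60.0))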
# Topology (Boundary points, Interior Points, Closure, etc )
1. Mar 4, 2006
Hi.
Can somebody please check my work!?
I'm just not sure about 2 things, and if they are wrong, all my work is wrong.
1. Find a counter example for "If S is closed, then cl (int S) = S
I chose S = {2}. I'm not sure if S = {2} is a closed set? I think it is because S = {2} does not have an interior point, and 2 has to be a boundary point (and 2 cannot be both an interior and a boundary point). I'm sure S = {2} is closed!
cl (int S)
= cl (int {2})
= cl (empty)
= empty
and that is not equal to S = {2}
2. Let A be a nonempty open subset of R and let Q be the set of rationals. Prove that $A \cap Q \neq \emptyset$. (I hope those symbols show, I got them from MS Word)
I figured that since "A is a nonempty open subset of R," A has to be composed of MORE than 1 element; hence, A has to have 2, 3, 4, ... elements.
And I know (from lectures and the text book) that between any 2 real numbers there is a rational. Hence, $A \cap Q \neq \emptyset$.
How does that sound? Can somebody please check this? Thanks in advance.
2. Mar 4, 2006
### StatusX
Your answer to 1 sounds good, as long as you're considering {2} as a subset of R with the usual topology.
For 2, I assume by your answer that you want to show:
$$A \cap Q \neq \emptyset$$
To be more rigorous, you should show that A must contain an open interval and that any open interval in R must contain some rational numbers.
3. Mar 4, 2006
Thank you for confirming my answers!
Yes, for 2), I was trying to show $$A \cap Q \neq \emptyset$$ ... but I'm sure I'm on the right track for that!
4. Mar 5, 2006
### matt grime
Defintion of open:
A is open if for any point a in A there is an interval (x-e,x+e) contained in A for some e>0 (e depends on a).
Now, you need to show that this has a rational number in it, which can be done in many ways of varying highbrow-ness.
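(For reference, one standard way to finish this step — a sketch, not part of the original post: by the Archimedean property pick $n \in \mathbb{N}$ with $\frac{1}{n} < 2e$, and let $m$ be the smallest integer with $m > n(x-e)$. Then $m \le n(x-e) + 1 < n(x-e) + 2ne = n(x+e)$, so $$x - e < \frac{m}{n} < x + e,$$ i.e. the rational $\frac{m}{n}$ lies in the interval.)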
5. Mar 5, 2006
That's right! I never thought of it like that.
A is open if all the points in A are interior points, and a point x in A is an interior point if a Neighbourhood of x is contained in A ... just like you said, (x-e, x+e).
Since A is a subset of the Real Line, it contains Q.
I have another Question. Prove: An accumulation point of a set S is either an interior point of S or a boundary point of S
Would it be okay if I write: "If a point in S is either an interior point of S or a boundary point of S, then it is an accumulation point."
I personally find my above "If - Then" statement MUCH easier to prove!
6. Mar 5, 2006
### matt grime
that is a huge leap. It contains an element of Q (actually it contains infinitely many elements of Q), it doesn't contain Q.
It is an 'if then' statement already, however it is exactly the reverse of what you wrote.
7. Mar 5, 2006
Oops, that was a huge leap. But I got the idea that it contains an element of Q.
I could say that between any two real numbers in the interval A there is a rational number (as proved in lecture), thus there exists a rational number in A
So if I prove what I wrote, then it's as if I assume it's an "if and only if statement" which would be an invalid assumption.
8. Mar 5, 2006
### matt grime
Erm, you are free to prove the statement you made, however it doesn't at all answer the question you were asked. Rather than say 'it is as if I assume it is an if and only if statement', I would say you have just not proved what you were asked to prove. I don't know what you were assuming.
9. Mar 5, 2006
You are right.
Oh, what I was trying to say was, for example,
If it rains, then I'll watch TV
If I watch TV, then it rains.
These 2 statements don't mean the same thing.
But If I want to prove "it rains if and only if I watch TV"
Then ill have to prove
If it rains, then I'll watch TV
If I watch TV, then it rains.
So for the question, I was trying to prove it the wrong way beacue it was easier, however, I can't do that.
So ill just have to prove "An accumulation point of a set S is either an interior point of S or a bounadry point of S" and NOT "If a point in S is either an interior point of S or a boundary point of S, then it is an acculumation point." :)
10. Mar 5, 2006
### shmoe
How did you manage to prove your version? It looks false to me.
11. Mar 5, 2006
Oh no... I think I'm confusing you guys. I'm sorry for that.
I didn't prove it yet; I was just wondering if it would be valid to prove "If a point in S is either an interior point of S or a boundary point of S, then it is an accumulation point" INSTEAD of proving "An accumulation point of a set S is either an interior point of S or a boundary point of S" ... which I found out I can't do.
Sorry for the confusion.
12. Mar 5, 2006
### shmoe
I did understand this point. I'm just pointing out that the statement "If a point in S is either an interior point of S or a boundary point of S, then it is an accumulation point" is false, regardless of the fact that it wouldn't solve your question. Your set S = {2} will serve as a counterexample: 2 is a boundary point yet not an accumulation point (at least not under the definition I'm used to, i.e. every neighbourhood of 2 would need a point in S\{2}).
13. Mar 5, 2006
Ohhh okay! I see what you're saying. That is true indeed! I've learned now that I have to be very careful when doing proofs like this ... and not to look for an easy way out.
14. Mar 6, 2006
Okay, I've spent all of yesterday and most of today on this same question, with very little progress.
"An accumulation point of a set S is either an interior point of S or a boundary point of S"
Suppose $a \in S'$ (S' is the set of accumulation points). Then $N^*(a,\varepsilon) \cap S \neq \emptyset$ for every $\varepsilon > 0$ ... which means that $a \in S$? Since $a \in S$, a is either an interior point or a boundary point?
How does that sound? I feel like it's not good... but that's as far as I could get.
I tried breaking it up into cases, case 1: S is open, case 2: S is closed ... where case 1 would mean it's an interior point.
But this fails because I cannot come up with anything for case 2.
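A sketch of one way to finish, using the standard trichotomy (every point of R is an interior, exterior, or boundary point of S): an accumulation point $a$ of $S$ cannot be exterior to $S$, because an exterior point has a neighbourhood $N(a,\varepsilon)$ with
$$N(a,\varepsilon) \cap S = \emptyset,$$
contradicting $N^*(a,\varepsilon) \cap S \neq \emptyset$. By trichotomy, $a$ must then be an interior point or a boundary point of $S$; no case split on whether S is open or closed is needed.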
## The Annals of Probability
### The Hydrodynamical Behavior of the Coupled Branching Process
Andreas Greven
#### Abstract
The coupled branching process $(\eta^\mu_t)$ is a Markov process on $(\mathbb{N})^S$ $(S = \mathbb{Z}^d)$ with initial distribution $\mu$ and the following time evolution: At rate $b\eta(x)$ a particle is born at site $x$, which moves instantaneously to a site $y$ chosen with probability $q(x, y)$. All particles at a site die at rate $pd$; individual particles die independently of each other at rate $(1 - p)d$. Furthermore, all particles perform independent continuous-time random walks with kernel $p(x, y)$. We consider here the case $b = d$ with the symmetrized kernels $\hat p, \hat q$ transient. We show that the measures $\mathscr{L}(\eta^\mu_t(\cdot + \lbrack\alpha \sqrt{t}\,x\rbrack))$, $(\alpha \in \mathbb{R}^+, x \in \mathbb{R}^d)$, converge weakly as $t \rightarrow \infty$ to $\nu_{\tau(\alpha,x)}$. Here $\nu_\rho$ is the invariant measure of the process with $E^{\nu_\rho}(\eta(x)) = \rho$, which is also extremal in the set of all translation-invariant invariant measures of the process. The density profile $\tau(\alpha, x)$ is calculated explicitly; it is governed by the diffusion equation.
#### Article information
Source
Ann. Probab., Volume 12, Number 3 (1984), 760-767.
Dates
First available in Project Euclid: 19 April 2007
https://projecteuclid.org/euclid.aop/1176993226
Digital Object Identifier
doi:10.1214/aop/1176993226
Mathematical Reviews number (MathSciNet)
MR744232
Zentralblatt MATH identifier
0596.60095
1. [SOLVED] metric problem
Maybe I'm wrong, or wrote the correct answer differently...
I put that there are 4.96 x 10^-7 tons in 4.5 x 10^2 kg. Is that right?
And for a driveway that measures 20 ft by 30 ft, how much area does it cover in square mL... but I think it's mm, since the answer is 5.57 x 10^7. But even so... I got 5.57 x 10^11...
I need help to see if I'm right, or which is which...
2. Sorry.
I came upon another problem: the volume of a tank is 2.15 x 10^5 cm^3.
How many gallons of water will it take to fill it up?
How do you do that?
3. Use this conversion, it should help
$1000cm^3 = L$
4. Hello,
Originally Posted by >_<SHY_GUY>_<
Maybe I'm wrong, or wrote the correct answer differently...
I put that there are 4.96 x 10^-7 tons in 4.5 x 10^2 kg. Is that right?
$1 \text{ ton }=907.18474 \text{ kg }$ (I know the tons in kg, I hope you were speaking of "short tons")
So $4.96 \times 10^{-7} \text{ tons }=907.18474 \times 4.96 \times 10^{-7} \text{ kg }$
And for a driveway that measures 20 ft by 30 ft, how much area does it cover in square mL... but I think it's mm, since the answer is 5.57 x 10^7. But even so... I got 5.57 x 10^11...
Hmmm if L stands for Litres, then it is not possible since L is for volumes, not areas !
1 foot=304.8 mm. (we know that 1 foot=30.48 cm and 1cm=10mm)
So 20ft x 30ft=(20*304.8)(30*304.8) mm²=5.57*10^7
Originally Posted by >_<SHY_GUY>_<
Sorry.
I came upon another problem: the volume of a tank is 2.15 x 10^5 cm^3.
How many gallons of water will it take to fill it up?
How do you do that?
Okay.
I think you have a problem with the cm, mm, etc. meanings.
The standard unit is the m (for metre).
Here is a table of correspondence:
$\begin{array}{c|c} \text{Power} & \text{Writing} \\ \hline
10^{-3} \text{ m} & \text{mm} \\
10^{-2} \text{ m} & \text{cm} \\
10^{-1} \text{ m} & \text{dm} \\
{\color{red}10^{0} \text{ m}} & {\color{red}\text{m}} \\
10^{1} \text{ m} & \text{dam} \\
10^{2} \text{ m} & \text{hm} \\
10^{3} \text{ m} & \text{km}
\end{array}$
remember that the same letters in front of m apply to litres and grams.
We know that 1 gallon = 3.78541178 litres
Now, remember that 1L=1 dm^3, that is to say $(10^{-1} m)^3=10^{-3} m^3$
or $1 dm^3=(10*cm)^3=1000 cm^3$
Try to do it
Originally Posted by 11rdc11
Use this conversion, it should help
$cm^3 = L$
No, it is dm^3
Edit : I must say I cheat a little since I'm in a country where we use these metric systems. On the contrary, it's a real difficulty if we travel to the USA
5. Originally Posted by 11rdc11
Use this conversion, it should help
$cm^3 = L$
OK, the answer for that is 57.0 gal.
This is what I did, but I don't think I messed up...
$2.15*10^5 cm^3/1 * 1L/cm^3 * 1qt/0.946L * 1gal/4qt$
and I got this on my calculator: 56818.18182
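The chain above is missing the factor of 1000 between cm^3 and L; restoring it gives the book answer (a quick sketch in Python, using the thread's own 0.946 L per quart):
volume_cm3 = 2.15e5          # tank volume in cubic centimetres
litres = volume_cm3 / 1000   # 1000 cm^3 = 1 L (since 1 L = 1 dm^3)
quarts = litres / 0.946      # 0.946 L per quart, as used in the thread
gallons = quarts / 4         # 4 quarts per gallon
print(round(gallons, 1))     # prints 56.8, i.e. about 57 gal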
6. Originally Posted by Moo
Hello,
$1 \text{ ton }=907.18474 \text{ kg }$ (I know the tons in kg, I hope you were speaking of "short tons")
So $4.96 \times 10^{-7} \text{ tons }=907.18474 \times 4.96 \times 10^{-7} \text{ kg }$
Hmmm if L stands for Litres, then it is not possible since L is for volumes, not areas !
1 foot=304.8 mm. (we know that 1 foot=30.48 cm and 1cm=10mm)
So 20ft x 30ft=(20*304.8)(30*304.8) mm²=5.57*10^7
Okay.
I think you have a problem with the cm, mm, etc... meanings.
The standard unit is m (for meter)
Here is a table of correspondance :
$\begin{array}{c|c} \text{Power} & \text{Writing} \\ \hline
10^{-3} \text{ m} & \text{mm} \\
10^{-2} \text{ m} & \text{cm} \\
10^{-1} \text{ m} & \text{dm} \\
{\color{red}10^{0} \text{ m}} & {\color{red}\text{m}} \\
10^{1} \text{ m} & \text{dam} \\
10^{2} \text{ m} & \text{hm} \\
10^{3} \text{ m} & \text{km}
\end{array}$
remember that the same letters in front of m apply to litres and grams.
We know that 1 gallon = 3,78541178 litres
Now, remember that 1L=1 dm^3, that is to say $(10^{-1} m)^3=10^{-3} m^3$
or $1 dm^3=(10*cm)^3=1000 cm^3$
Try to do it
No, it is dm^3
Edit : I must say I cheat a little since I'm in a country where we use these metric systems. On the contrary, it's a real difficulty if we travel to the USA
Well, the US keeps saying that we will switch to metric... it's just sad to see that they keep saying that.
Well, the original problem is: how many tons are there in 4.5 x 10^2 kg? And the answer is 0.496. But what I did was start from 450 kg and convert to grams, to lbs, to tons... and I guess my answer is wrong...
I think my problem is that I quickly read the prefix of the unit and assume that is the number, without reading what unit is below or above. I accidentally put 1000 mm = 1 cm.
And well, it's for chemistry... and he only limits us to the units he gave us... but I will try.
7. Originally Posted by 11rdc11
Yep it looks right
but apparently it's not right... Moo informed me that it's dm, not cm...
but thank you for your help..
8. Originally Posted by >_<SHY_GUY>_<
but apparently it's not right... Moo informed me that it's dm, not cm...
but thank you for your help..
And I'm sure it's dm, not cm. If you take dm, you'll get the correct answer you're given.
Originally Posted by >_<SHY_GUY>_<
Well, the US keeps saying that we will switch to metric... it's just sad to see that they keep saying that.
Well, the original problem is: how many tons are there in 4.5 x 10^2 kg? And the answer is 0.496. But what I did was start from 450 kg and convert to grams, to lbs, to tons... and I guess my answer is wrong...
I don't know which tons you use...
There's a ton in my country that we use as 1000 kg, but when I looked it up on Google, I found that a "short ton" is 907 kg.
I think my problem is that I quickly read the prefix of the unit and assume that is the number, without reading what unit is below or above. I accidentally put 1000 mm = 1 cm.
Then you should get the correct answer now!
It's 1000 mm^3 = 1 cm^3.
You are very welcome
9. Just looked it up in my physics book; this may help you:
$1m^3 = 1000L = 35.3ft^3=264gal$
Thanks for catching my mistake
10. Oh, by the way, look here for the powers of the metre: Metre - Wikipedia, the free encyclopedia
Litre - Wikipedia, the free encyclopedia
One litre is equal to 0.001 cubic metre and is denoted as 1 cubic decimetre (dm3).
Wikipedia explains it well
11. Originally Posted by Moo
And I'm sure it's dm, not cm. If you take dm, you'll get the correct answer you're given.
I don't know which tons you use...
There's a ton in my country that we use as 1000 kg, but when I looked it up on Google, I found that a "short ton" is 907 kg.
Then you should get the correct answer now!
It's 1000 mm^3 = 1 cm^3.
You are very welcome
Well... I don't know if it clarifies it... but 2000 lbs = 1 ton, that's what we use.
I need to be more aware of the prefixes... and thank you for the table... it helps a lot.
12. I'm still a bit lost.
How many tons in 450 kg?
I did this:
450 kg * 1 g/1000 kg * 1 lb/454 g * 1 ton/2000 lbs.
But I get 0.000000496.
The answer is 0.496... I don't get what Moo was saying about a conversion related to this...
Maybe I'm still doing it wrong.
13. Check your conversion, 1kg = 1000g
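With the conversion the right way up, the chain matches the expected 0.496 (a quick Python sketch using the thread's own 454 g per lb and 2000 lb per short ton):
kg = 4.5e2                # 450 kg
grams = kg * 1000         # 1 kg = 1000 g (the corrected step)
pounds = grams / 454      # 454 g per lb, as used in the thread
tons = pounds / 2000      # 2000 lb per short ton
print(round(tons, 3))     # prints 0.496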
14. Originally Posted by 11rdc11
Check your conversion, 1kg = 1000g
But isn't kg smaller than grams?
So 1 kg doesn't equal 1000 g.
15. Nope, kg is bigger than g.
# Tricky Factorization
How do I factor this expression: $$0.09e^{2t} + 0.24e^{-t} + 0.34 + 0.24e^t + 0.09e^{-2t} ?$$ By trial and error I got $$\left(0.3e^t + 0.4 + 0.3 e^{-t}\right)^2$$ but I'd like to know how to formally arrive at it.
Thanks.
-
The most striking thing about the given expression is the symmetry. For anything with that kind of symmetric structure, there is a systematic approach which is definitely not trial and error.
Let $$z=e^t+e^{-t}.$$ Square. We obtain $$z^2=e^{2t}+2+e^{-2t},$$ and therefore $e^{2t}+e^{-2t}=z^2-2$. Substitute in our expression. We get $$0.09(z^2-2)+0.24 z+0.34.\qquad\qquad(\ast)$$ This simplifies to $$0.09z^2 +0.24z +0.16.$$ The factorization is now obvious. We recognize that we have simply $(0.3z+0.4)^2$. Now replace $z$ by $e^t+e^{-t}$.
If the numbers had been a little different (but still with the basic $e^t$, $e^{-t}$ symmetry) we would at the stage $(\ast)$ obtain some other quadratic. In general, quadratics with real roots can be factored as a product of linear terms. It is just a matter of completing the square. For example, replace the constant term $0.34$ by, say, $0.5$. We get a quadratic in $z$ that does not factor as prettily, but it does factor.
Comment: For fun we could instead make the closely related substitution $2y=e^t+e^{-t}$, that is, $y=\cosh t$. If we analyze the substitution process further, we get to useful pieces of mathematics, such as the Chebyshev polynomials.
The same idea is the standard approach to finding the roots of palindromic polynomials. For example, suppose that we want to solve the equation $x^4 +3x^3-10x^2+3x+1=0$. Divide through by $x^2$. We get the equation $$x^2+3x-10+\frac{3}{x}+\frac{1}{x^2}=0.$$ Make the substitution $z=x+\frac{1}{x}$. Then $x^2+\frac{1}{x^2}=z^2-2$. Substitute. We get a quadratic in $z$.
-
HINT $\$ Exploit the innate symmetry in the expression using the following with $\rm\: X = e^t\:$
$$\rm\ A\ (X + X^{-1})^2 + B\ (X + X^{-1}) + C\ \ =\ \ A\ X^2 + B\ X + (2\:A + C) + B\ X^{-1} + A\ X^{-2}$$
In your example the LHS quadratic is a perfect square since it has discriminant $= 0$, namely
$$\rm\ a^2\ (X + X^{-1})^2 + 2\:a\:b\ (X + X^{-1}) + b^2\ =\ (a\ (X + X^{-1}) + b)^2\quad\ for\ \ a = 0.3,\ b = 0.4$$
-
First of all, factoring is basically trial and error. In some cases, it can be done without guess and check. But, when you learn to factor polynomials, that is how you are taught to do it. You have a few ideas that give you the basic form, then you guess and check to see if it multiplies out right. The more you do it, the better you get at it. In cases like a quadratic polynomial, you can actually complete the square and find the answer, but for more complicated polynomials that no longer works.
So, if you want to factor this, there's got to be some guessing going on. But, you can make some educated guesses. You look at it and you see the highest power term of e is 2t and the lowest is -2t. You guess that this the square of something with an $e^t$ and an $e^{-t}$. You deduce the coefficients of these since the product has $0.09e^{2t}$ and $0.09e^{-2t}$. All that's really left is a constant term, $c$. Thus, you have $$(0.3 e^t + c + 0.3 e^{-t})^2$$ You multiply it out. You figure out that $c = 0.4$. Assuming this is the right basic form, the only possible choice for $c$ is $0.4$ since $0.24 e^t$ comes from $2c \cdot 0.3 e^t$.
-
The first step is to get rid of the exponential by putting $x = e^t$: $$f(x) = 0.09x^2 + 0.24x + 0.34 + 0.24x^{-1} + 0.09x^{-2}.$$ The second step is to get rid of the negative powers by multiplying by $x^2$: $$g(x) = x^2f(x) = 0.09x^4 + 0.24x^3 + 0.34x^2 + 0.24x + 0.09.$$ Now we have a polynomial $g(x)$ that we need to factor. We get $$g(x) = (0.3x^2 + 0.4x + 0.3)^2.$$ This implies a factorization of $f(x)$: $$f(x) = x^{-2}g(x) = (0.3x + 0.4 + 0.3x^{-1})^2.$$ All you need to do now is to substitute $x = e^t$.
The advantage of this method is that it reduces the problem to factorization of polynomials, which is something we already know how to do.
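A quick machine check of that polynomial factorization (a sketch using sympy, assuming it is available; working over the rationals avoids floating-point coefficients):
from sympy import symbols, factor, Rational

x = symbols('x')
g = (Rational(9, 100)*x**4 + Rational(24, 100)*x**3 + Rational(34, 100)*x**2
     + Rational(24, 100)*x + Rational(9, 100))
print(factor(g))  # prints (3*x**2 + 4*x + 3)**2/100, i.e. (0.3x^2 + 0.4x + 0.3)^2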
-
Writing $x = e^t$ makes the factorization more familiar, but it actually doesn't answer the OP's question. How did you know how to factor $g(x)$ as $g(x) = (0.3x^2 +0.4x + 0.3)^2$? – JavaMan Dec 16 '11 at 16:14
There are known algorithms for factoring polynomials over the rationals. They are probably already programmed in Wolfram alpha, so you can just plug it there and see what you get. No tricks here. – Yuval Filmus Dec 17 '11 at 0:09 |
# compound inequality
Compound Inequality
Two or more inequalities taken together. Often this refers to a connected chain of inequalities, such as 3 < x < 5.
Formally, a compound inequality is a conjunction (or disjunction) of two or more inequalities; a chain like 3 < x < 5 abbreviates the conjunction 3 < x and x < 5.
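For instance, a chained compound inequality is solved by operating on all three parts at once:
$$-1 \le 2x + 1 < 5 \;\Longleftrightarrow\; -2 \le 2x < 4 \;\Longleftrightarrow\; -1 \le x < 2.$$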
## mathew0135 3 years ago Solve the initial value problem: x(dy/dx)+y(x) = 9y(x)^(2), y(1) = -1
1. mathew0135
$x \frac{ dy }{dx } + y(x) = 9y(x)^2 , y(1) =-1$ Just the equation a little neater
2. Dido525
I would solve for dy/dx and integrate.
3. UnkleRhaukus
the x's in the brackets are just indicating the independent variable, right? $x \frac{ \text dy }{\text dx } + y = 9y^2$
4. Dido525
and we know y(1) = -1.
5. mathew0135
I believe so; that's just how it's been written. I'll try solving for dy/dx and integrating then.
6. Dido525
So I would sub those in.
7. hartnn
u realize that you can separate the variables easily here?
8. mathew0135
$\frac{ dy }{ dx } = \frac{ 9y^2-y }{ x }$ Not too familiar with these problems, but basically I need to integrate that, no?
9. hartnn
u bring all terms of one variable on one side of = sign, like this : $$\large \frac{1}{9y^2-y}dy=\frac{1}{x}dx$$ then integrate
10. hartnn
can u integrate both sides now ?
11. mathew0135
doing that now
12. hartnn
13. mathew0135
Think I got it; I left an x over on the y side, so I confused myself. $1 = \ln(1-9y)-\ln(y)$
14. hartnn
u integrated the y-variable correctly, but what about $$\int (1/x)dx$$ ? It's not = 1.
15. hartnn
i meant u should get something like this: $$\ln x=\ln(1-9y)-\ln y+c$$ then use logarithmic properties to simplify
16. mathew0135
Okay, catching on; I think I figured out what I did wrong when I integrated last, anyway.
17. mathew0135
Maybe $y = \frac{ 1 }{ x+9 }$ may be a final solution, just from simplifying the equation?
18. hartnn
$\ln cx = \ln|(1-9y)/y|$
$cxy = 1-9y$
Now use y(1) = -1 to find c.
19. hartnn
u got this simplification? ----> $\ln cx = \ln|(1-9y)/y|$
20. mathew0135
yup, $C(-1)(1)=1-9(-1)$ $-C=10$ $C=-10$ $10xy=(1-9y)$
21. hartnn
-10xy = 1-9y, or 10xy - 9y + 1 = 0
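Solving that implicit form for y gives an explicit solution that can be checked against the initial condition (a quick verification, filling in a step the thread leaves implicit):
$$10xy - 9y + 1 = 0 \;\Longrightarrow\; y = \frac{1}{9 - 10x}, \qquad y(1) = \frac{1}{9-10} = -1,$$
and indeed $xy' + y = \frac{10x}{(9-10x)^2} + \frac{9-10x}{(9-10x)^2} = \frac{9}{(9-10x)^2} = 9y^2$.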
22. mathew0135
Ahh, I forgot the negative. I think I have enough to try a few more of these questions anyway. Thank you. :)
23. hartnn
welcome ^_^ |
# Evaluating a limit using the Stolz-Cesàro theorem
I have been trying to compute the following limit-
$$\lim_{n\to \infty} \dfrac {2021(1^{2020}+2^{2020}+3^{2020}....+n^{2020}) - n^{2021}}{2021(1^{2019}+2^{2019}+3^{2019}.....+n^{2019})} =L$$
By the Stolz-Cesàro theorem, letting the numerator and denominator be $$a_n$$ and $$b_n$$ respectively yields
$$L=\lim_{n\to \infty}\left[ n+1-\frac{1}{2021}\left((n+1)^2-n^2\left(\frac {n}{n+1}\right)^{2019}\right)\right]$$
Now, I don't see how binomial expansion (or perhaps some other technique?) can help here. Any help would be appreciated.
• Why did you delete math.stackexchange.com/q/4042052/42969 and post the identical question again? Feb 28 at 6:30
• You should be able to note that the problem is much much simpler to handle if replace $2020$ by $a$, $2019$ by $b$, $2021$ by $c$ and use $b=a-1,c=a+1$ suitably wherever needed. Feb 28 at 12:28
By continuing your approach you can see the limit is \begin{align} L &= \lim n+1 - \dfrac{1}{2021(n+1)^{2019}} \left( (n+1)^{2021} - (n+1 -1)^{2021} \right) \\ &= \lim n+1 - \dfrac{1}{2021(n+1)^{2019}}\left( \binom{2021}{1}(n+1)^{2020} - \binom{2021}{2} (n+1)^{2019} + \cdots \right) \\ &= \boxed{1010} \end{align} |
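Filling in the last step: after dividing through by $2021(n+1)^{2019}$, the leading binomial term cancels the $n+1$ and only the second term survives in the limit:
$$n+1-\frac{\binom{2021}{1}(n+1)^{2020}}{2021(n+1)^{2019}}+\frac{\binom{2021}{2}(n+1)^{2019}}{2021(n+1)^{2019}}+O\!\left(\frac{1}{n}\right)=\frac{1}{2021}\binom{2021}{2}+O\!\left(\frac{1}{n}\right)\longrightarrow\frac{2020}{2}=1010.$$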
# Store subranges from contiguous range into array based on criteria [closed]
My code parses through a contiguous range of data in a spreadsheet (orng) and creates subranges from orng that contain the same string value (lot number) in the 6th column of orng. orng has 42 rows and 38 columns and each subrange has 1-15 rows and 38 columns.
As far as I know, I can't create a new range object for each subrange since the number of subranges is unknown. I've created an array that contains the data for the subranges (aData).
I have gotten it to work with the code below, but I feel like there is a much cleaner way of doing it that I can't figure out. I've also tried using a dictionary with no success. I will eventually have to call upon all the data for calculations, and using multiple nested for loops to access each element seems convoluted.
I would prefer that the array was dynamic, but whenever I attempted the ReDim Preserve method the values would not be saved to the array. The size of the array would be correct, but every element was "Empty". According to Microsoft "each element of an array must have its value assigned individually" so it seems as though the only way I can keep the values when creating the array is to assign to each element.
After I found that webpage I implemented an array with a predetermined structure and the nested for loops. Is it possible to add the entire subrange to the array in one go? If not, what about a row?
Ideally, I could separate orng into different Areas, but since it is contiguous I am unable to do so (I'm not aware of a way to create Areas in a contiguous range).
What I'd like to know are:
1. is there a better way to do what I am trying to do (collection, dictionary, etc.)
2. if there is not a better way, can I get some advice on how to make this code cleaner (Easier to read, faster, less code, dynamic range, better structure)?
Private Sub rangetest()
Dim twb As Workbook: Set twb = ThisWorkbook
Dim cws As Worksheet: Set cws = twb.Sheets("Cleaned_2019+")
Dim orng As Range
Dim datelot As String, datelotcomp As String
Dim c As Long, i As Long, j As Long, k As Long, numrows As Long, lastrow As Long, numlots As Long, _
curRow As Long, lotRows As Long, startRow As Long, layerRows As Long, aRow As Long
Dim aLot() As Variant, aData(9, 49, 37) As Variant
Dim Z As Boolean
Set orng = cws.Range("A973:AL1014") 'Set initial range to work with.
numrows = orng.Rows.Count 'Number of rows in orng.
curRow = 1 'Current row in orng.
startRow = 1 'Starting row in orng for next layer (changes when lot changes).
i = 0 'Layer of array (for aLot and aData arrays).
j = 0 'Row in orng where values for previous layer ended.
Z = False
Do Until Z = True
datelot = Left(orng.Cells(curRow, 6).Value, 10) 'Lot that we want the data for. Corresponds to a layer in the aData array.
datelotcomp = Left(orng.Cells(curRow + 1, 6).Value, 10) 'Lot of the next row in data sheet.
If datelot <> datelotcomp Then 'If datelotcomp <> to datelot then we want a new layer for array.
layerRows = curRow - j 'Number of rows for a particular layer
ReDim Preserve aLot(i) 'Array of lot names
aLot(i) = datelot 'Assign lot name to aLot array
For aRow = 1 To layerRows 'Row index in array
For lotRows = startRow To curRow 'Loops through orng rows and sets those values in array
For c = 1 To 38 'Loops through columns. There are always 38 columns
aData(i, aRow - 1, c - 1) = orng.Cells(lotRows, c).Value 'Add values to each index in array
Next c
Next lotRows
Next aRow
j = curRow
i = i + 1
startRow = curRow + 1
End If
If curRow = numrows Then 'End loop at end of orng
Z = True
End If
curRow = curRow + 1
Loop
numlots = i
End Sub
The result I get is an array with the structure aData(9, 49, 37) that contains data in the first 4 layers, aData(0-3, , ). This corresponds with the unique number of lots (criteria from column 6 of orng), so the code is working correctly. I'd just like advice on whether I'm doing anything inefficiently.
I will be checking back to answer questions or to add clarification.
Edit 1:
The orng size will change based on user input. I have the user inputting a start and end date and orng is created based on those values. Once I have the subranges from orng I will then use other criteria from other columns of the subrange to determine which rows to apply calculations to. The end result will be the lot number(s) with the calculations for the lot(s) printed out for the user.
• Ever thought of using a pivot table to do this work for you? – AJD Jul 10 '19 at 22:17
• Yes, there is a better way, and yes you can make the code cleaner. Are you able to post up some sample data and expected output as well? – AJD Jul 10 '19 at 22:19
• A pivot table will not work with what I am going to be doing with the ranges for the end result. Do I just post a picture of the data? I tried adding the data to the original post, but the formatting was a mess. Thank you for your response! – lifeuhfindsaway Jul 11 '19 at 15:09
• If the image is readable on this site, then posting an image should suffice. – AJD Jul 11 '19 at 22:52
There is not much to review as far as the code goes, other than: avoid using 3D arrays. The problem with 3D arrays is that you need to know the exact size of every dimension except the last, because only the last dimension is resizable with ReDim Preserve.
I attempted the ReDim Preserve method the values would not be saved to the array.
You may want to post your attempts to use ReDim Preserve on StackOverflow because ReDim Preserve does save the values in the array.
Ideally, I could separate orng into different Areas, but since it is contiguous I am unable to do so (I'm not aware of a way to create Areas in a contiguous range). What I'd like to know is 1) is there a better way to do what I am trying to do (collection, dictionary, etc.) and 2) if there is not a better way, can I get some advice on how to make this code cleaner (Easier to read, faster, less code, dynamic range, better structure)?
It is hard to give advice without knowing what you are trying to do with the data other than group it. What I can do is show you how to store non-contiguous ranges in a Dictionary by Lot number and then work with the ranges afterwards.
Private Sub Dictionarytest()
Dim map As New Scripting.Dictionary, rw As Range
Dim key As Variant
'Join Ranges Based on Lot Number and Add them to the Dictionary Map
With ThisWorkbook.Sheets("Cleaned_2019+")
For Each rw In .Range("A973:AL1014").Rows
key = rw.Columns(6).Value
If map.Exists(key) Then
Set map(key) = Union(map(key), rw)
Else
Set map(key) = rw 'First row seen for this lot number
End If
Next
End With
Dim subRange As Range
'Iterate over the Dictionary Map Keys and Print the Joined Ranges' Addresses
For Each key In map
Set subRange = map(key)
Debug.Print subRange.Address 'Produces the Immediate Window output shown below
Next
Dim item As Variant
'Iterate over the Dictionary Map Items and Print the Joined Ranges' Addresses
For Each item In map.Items()
Debug.Print item.Address
Next
Dim Data As Variant
'Create an Array From the Dictionary Items
Data = map.Items()
'Iterate over the Data Array and Print the Joined Ranges' Addresses
For Each item In Data
Debug.Print item.Address
Next
Dim results As Variant
Dim r As Long, c As Long, rowCount As Long
'Iterate over the Dictionary Map Keys and Create an Array From Each Joined Range
'Note: the Results Array Contains all the Data for a Single Lot Number
For Each key In map
Set subRange = map(key)
rowCount = subRange.Cells.CountLarge / subRange.Columns.Count
ReDim results(1 To rowCount, 1 To subRange.Columns.Count)
r = 0
For Each rw In subRange.Rows
r = r + 1
For c = 1 To UBound(results, 2)
results(r, c) = rw.Columns(c).Value
Next
Next
Next
End Sub
Immediate Window Print Out
A973:AL973,A975:AL975,A979:AL979,A985:AL985,A989:AL989,A1006:AL1006
A974:AL974,A982:AL982,A991:AL991,A1002:AL1002,A1013:AL1013
A976:AL976,A1007:AL1007
A977:AL977,A981:AL981,A1001:AL1001
A978:AL978,A988:AL988,A994:AL994,A996:AL996,A1014:AL1014
A980:AL980,A984:AL984,A990:AL990,A998:AL998,A1004:AL1004,A1009:AL1009
A983:AL983,A986:AL986,A993:AL993,A997:AL997,A999:AL999,A1003:AL1003
A987:AL987,A992:AL992,A995:AL995
A1000:AL1000,A1005:AL1005,A1008:AL1008,A1010:AL1011
A1012:AL1012
• as usual (from you) Brilliant use of union with dictionary. I am only started thinking about avoiding use of 3D array. Forcing me to be hardened fan of your code. – Ahmed AU Jul 11 '19 at 1:43
• @AhmedAU thank you sir! – TinMan Jul 11 '19 at 12:41
• I will work on implementing this today and get back to you with the results. This definitely looks more like what I was trying to do, but lacked the know-how. Thank you! – lifeuhfindsaway Jul 11 '19 at 15:13
• This was the core code I ended up using. I had to add some qualifiers, but it worked as intended in the end. Thank you very much. – lifeuhfindsaway Jul 16 '19 at 22:50
• @lifeuhfindsaway I'm glad that it helped. Thanks for accepting my answer. – TinMan Jul 17 '19 at 8:58 |
# Cônes positifs des variétés complexes compactes
Abstract : There are two notions of positivity for (1,1)-cohomology classes on a complex manifold: numerical effectivity, which is induced by the Lelong positivity at the level of differential forms, and pseudoeffectivity, which is a weaker property induced by the positivity of currents. In a first part, we build local obstructions to the numerical effectivity of a pseudoeffective cohomology class, which enables us to decompose it into a nef part and an exceptional divisor. We then consider the volume of a line bundle, which is an invariant measuring its positivity. We give a differential-geometric interpretation of the volume, and we inscribe it into a "mobile intersection" theory, which only deals with the nef parts of the cohomology classes. Finally, we study the case when the manifold is a surface or is hyperkähler, where the geometry of the intersection form allows a more detailed description of these constructions.
Document type :
Theses
https://tel.archives-ouvertes.fr/tel-00002268
Contributor : Arlette Guttin-Lombard
Submitted on : Tuesday, January 14, 2003 - 2:56:13 PM
Last modification on : Wednesday, November 4, 2020 - 1:58:08 PM
Long-term archiving on: : Tuesday, September 11, 2012 - 7:20:31 PM
### Identifiers
• HAL Id : tel-00002268, version 1
### Citation
Sébastien Boucksom. Cônes positifs des variétés complexes compactes. Mathématiques [math]. Université Joseph-Fourier - Grenoble I, 2002. Français. ⟨tel-00002268⟩
# Increasing and differentiable implies nonnegative derivative that is not identically zero on any interval
## Statement
### On an open interval
Suppose $f$ is a function on an open interval $I$ that may be infinite in one or both directions (i.e., $I$ is of the form $\! (a,b)$, $(a,\infty)$, $(-\infty,b)$, or $(-\infty,\infty)$). Suppose the derivative of $f$ exists everywhere on $I$. Suppose further that $f$ is an increasing function on $I$, i.e.:
$x_1,x_2 \in I, x_1 < x_2 \implies f(x_1) < f(x_2)$
Then, $\! f'(x) \ge 0$ for all $x \in I$. Further, there is no sub-interval of $I$ such that $\! f'(x) = 0$ for all $x$ in the sub-interval.
### On a general interval
Suppose $f$ is a function on an interval $I$ that may be infinite in one or both directions and may be open or closed at either end. Suppose $f$ is a continuous function on all of $I$ and that the derivative of $f$ exists everywhere on the interior of $I$. Further, suppose $f$ is an increasing function on $I$, i.e.:
$x_1,x_2 \in I, x_1 < x_2 \implies f(x_1) < f(x_2)$
Then, $\! f'(x) \ge 0$ for all $x$ in the interior of $I$. Further, there is no sub-interval of $I$ such that $\! f'(x) = 0$ for all $x$ in the sub-interval.
## Proof
If $\! f$ is increasing on $I$, then every point in the interior of $I$ is a point of local maximum from the left and local minimum from the right. Thus, by Facts (1) and (2), both the left hand derivative and the right hand derivative of $f$, if they exist, are nonnegative at any point in the interior of $I$. In particular, if the derivative itself exists at a point in the interior of $I$, then it must be nonnegative at that point.
It remains to show that the derivative is not zero on any sub-interval of $I$. For this, note that by Fact (3), a derivative of zero forces the function to be constant on that sub-interval. This, however, contradicts the definition of an increasing function. We thus have the desired contradiction and we are done. |
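For instance, $f(x) = x^3$ is increasing on all of $\mathbb{R}$ and satisfies $f'(0) = 0$: the derivative of an increasing function may vanish at isolated points, but by the above it cannot vanish identically on any sub-interval.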
# Lipschitz orthonormal frames on submanifolds of $\mathbf{R}^n$ ?
Suppose we are given a $d-$dimensional submanifold of $\mathbf{R}^n$ with a trivial normal bundle, whose $d-$dimensional volume is $V$ and has a non-self-intersecting tube of radius $r$ around it. Can one obtain an explicit upper bound $L$ such that there must exist a frame of orthonormal vector fields that is $L-$Lipschitz (with respect to the Euclidean norm on both the frame and the manifold)?
-
Have you worked this out when $d = 1$? That seems like a somewhat easier initial case to understand. – Deane Yang Jul 22 '10 at 0:38
You will probably need to require $C^2$ smoothness of your submanifold. Take the simple example of $d=1$ with the manifold the graph of the function $f(x) = x^\alpha$ for $x > 0$ and $f(x) = 0$ for $x \le 0$. Then for $1 < \alpha < 2$, the normal bundle is only Hölder continuous, so no $L$ exists at $x = 0$. Or would you exclude this counterexample for the reason that the exponential map from the normal bundle is not a diffeomorphism onto any ball of radius $r > 0$ at $x = 0$?
Generally, you will probably need to have a bound on the extrinsic curvature, for which $C^2$ and compactness would suffice.
Sorry, I meant an orthonormal frame for the normal bundle that is Lipschitz. Supposing the manifold is closed, when $d=1$ the manifold is $S^1$, so can't one just do parallel transport along the curve starting from a point $x$ all the way back, and then use a constant-rate rotation to match up at $x$?
# Tag Info
7
This can be done by running an external program, for example, date on Unix systems. The output can be redirected to a file and then read by TeX. On Unix systems, piping is also possible. Example:
\documentclass{article}
\immediate\write18{date >\jobname.date}
\begin{document}
Current time is: \input{\jobname.date}
\end{document}
The shell ...
4
Don't abuse the mono font; use polyglossia and its interface:
\documentclass[a4paper]{hitec}
\author{\textit{Ms Author}}
\date{September 9, 1999}
\title{A fancy title}
\usepackage{fontspec}
\usepackage{polyglossia}
\usepackage{lipsum}
\setmainlanguage{english}
\setotherlanguage[variant=polytonic]{greek}
\newfontfamily\greekfont{Cardo}
\begin{document} ...
3
Maybe you can use features provided by your editor. In the case of emacs, you can use time-stamp. First, you have to adjust the variables time-stamp-format and time-stamp-pattern. I suggest using file-local versions by putting something like this at the end of your document:
%%% Local Variables:
%%% mode: latex
%%% eval: (set (make-local-variable ...
2
The problem is that you are doing an \includegraphics{} on an EPS file that is invalid. When this happened to me, the EPS file hadn't changed in a year, so I have to assume it was the newer version of XeLaTeX that I was using (either a new bug, or the new version is less tolerant of sloppy EPS files). How to find the EPS that causes this problem: Sadly the ...
# The Mid-Point of (3p, 4) and (-2, 2q) is (2, 6). Find the value of p + q
The mid-point of (3p,4) and (-2,2q) is (2,6) . Find the value of p+q
(a) 5
(b) 6
(c) 7
(d) 8
(b) 6
Explanation:
Coordinates of the mid-point: M = (\frac{x_{1}+x_{2}}{2},\frac{y_{1}+y_{2}}{2})
(2, 6) = (\frac{3p + (- 2)}{2},\frac{4 + 2q}{2})
(2, 6) = (\frac{3p - 2}{2},\frac{2q + 4}{2})
x-coordinate:
⇒ 2 = (3p - 2)/2
⇒ 2\times 2 = 3p - 2
⇒ 4 = 3p - 2
⇒ 4 + 2 = 3p
⇒ 6 = 3p
⇒ 3p = 6
⇒ p = 6/3
∴ p = 2
Now,
y-coordinate:
⇒ 6 = (2q + 4)/2
⇒ 6\times 2 = 2q + 4
⇒ 12 = 2q + 4
⇒ 12 - 4 = 2q
⇒ 8 = 2q
⇒ 2q = 8
⇒ q = 8/2
∴ q = 4
Thus,
⇒ p + q = 2 + 4
= 6 |
Compiling Knuth's tex with GNU Pascal - gpc
by Martin Monperrus
The famous typesetting program TeX is written in Pascal (encapsulated in a literate manner). However, most current distributions of TeX (e.g. TeX Live or MiKTeX) ship a binary produced from C code. The C code is produced from the Pascal code using a program called web2c.
For the sake of elegance, I wanted to compile the original Pascal code directly. It turned out that this is not possible as-is. It took me some time to find on the web the hints needed to compile the Pascal source code of Knuth's tex. I found three solutions.
GNU Pascal
First, Knuth himself published in 2000 a port of tex to GNU Pascal. However, this version is limited to tex v3.14159 and requires a small additional heterogeneous piece of C code.
Second, Waldek Hebisch published a port for GNU Pascal. It is nicely documented and published on CTAN. It supports the latest version of Knuth's tex (v3.1415926) and has a clean directory scheme for the additional files (e.g. fonts). Since it has very limited dependencies on the GPC library (only one call to `execute` and one to `installsignalhandler`, both easy to remove), it can be easily compiled in other runtime environments (and maybe with other compilers).
FreePascal
Preferred: Wolfgang Helbig has also ported TeX to FreePascal, see https://ctan.org/pkg/tex-fpc, with the latest version from 2020-11-24.
Note: `tex-gpc.pas` is compatible with the pretty printer ptop only if you set the linesize to high values. For instance: `ptop -l 250 tex-gpc.pas tex-gpc-pretty.pas` |
How to use Latex to print a document to look like a lined notebook?
I would like to print a document with text on lined paper, like a notebook, as in http://www.highschoolmathandchess.com/2013/05/25/latex-handwriting-on-notebook-paper/
However, I would like to be able to edit the line colours, and I would like to put the header/title in the blank white space. Is there a way to produce the document above without using the background jpg? The document also spans multiple pages.
I have looked at this question too and would like to have lines only and not a grid. Is there a latex template that makes a page look like a math notepad?
Thanks!
-
You can draw those lines with tikz and using \foreach. – Harish Kumar Jul 6 '14 at 2:00
Welcome to TeX.SX! Please help us to help you and add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – cfr Jul 6 '14 at 2:04
One possibility using the background package and a \foreach loop to draw the horizontal rules:
\documentclass{article}
\usepackage[vmargin=3cm]{geometry}
\usepackage{tikzpagenodes}
\usepackage{lipsum}
\usepackage{background}
\backgroundsetup{
contents={%
\begin{tikzpicture}
\foreach \fila in {0,...,52}
{
\draw[overlay] (current page.west|-0,-\fila*12pt) -- ++(\paperwidth,0);
}
\draw[overlay,red!70!black,line width=1pt]
([xshift=-1pt]current page text area.west|-current page.north) --
([xshift=-1pt]current page text area.west|-current page.south);
\end{tikzpicture}%
},
scale=1,
angle=0,
opacity=1
}
\begin{document}
\lipsum[1-14]
\end{document}
Depending on the settings used for the actual document, you might need to do some adjustments, but the idea is the same.
-
I actually managed to use background for this! And I did not see your answer before posting mine. I probably used it all wrong, though... – cfr Jul 6 '14 at 3:17
@cfr I'm glad to see that you managed to use background; and no, you didn't use it wrong. You can suppress the line position=current page.center since that is the default position. – Gonzalo Medina Jul 8 '14 at 3:19
This is the first time I've ever succeeded in actually using the background package so caveat emptor...
This code is based on code originally posted at http://michaelgoerz.net/notes/printable-paper-with-latex-and-tikz.html. Basically, the site hosts a range of templates for creating all kinds of paper in TeX (both in US sizes and those used by everyone else). Squared, narrow-ruled, wide-ruled, Cornell, college, graph...
However, I've modified the code quite a bit for this answer so any errors are definitely mine! [In particular, any mess on account of the use of background is definitely mine as the original doesn't use that package in any way, shape or form.]
You could try something like this which combines a tikzpicture as background picture with the titling package:
\documentclass[letterpaper, 10pt]{article} % for letter size paper
% 215.9mm × 279.4mm
\usepackage{tikz, background, titling, kantlipsum, setspace}
\usepackage[left=1.5in,right=.25in,top=1.125in,bottom=.125in]{geometry}
\usetikzlibrary{calc}
\backgroundsetup{%
position=current page.center,
angle=0,
scale=1,
contents={%
\begin{tikzpicture}%
[
normal lines/.style={gray, very thin},
every node/.append style={black, align=center, opacity=1}
]
\foreach \y in {0.71,1.41,...,25.56}
\draw[normal lines] (0,\y) -- (8.5in,\y);
\draw[normal lines] (1.25in,0) -- (1.25in,11in);
\node (t) [font=\LARGE, anchor=south] at ($(0,25.56)!1/2!(8.5in,25.56)$) {\thetitle};
\node (d) [font=\large, anchor=south west, xshift=1.5em] at (0,25.6) {\today};
\node (p) [font=\large, anchor=south east, xshift=-1.5em] at (8.5in,25.56) {p.~\thepage};
\end{tikzpicture}%
}}
\renewcommand{\rmdefault}{augie}
\title{My doc}
\author{Me}
\begin{document}
\pagestyle{empty}
\doublespacing
\kant[1-6]
\end{document}
-
@MichaelGorez Thanks for updating the link! When I tried originally, I couldn't find the site so it is good to know it is still there. I hope that you did not object to my building on your work here ;). – cfr Aug 16 '15 at 18:53 |
# Physics projectile motion question
1. Feb 18, 2016
### Krookwood
1. The problem statement, all variables and given/known data
A golf ball is hit with an initial velocity of 52 m/s at an angle of 50 degrees.
How long is it in the air?
2. Relevant equations
Sin=opp/hyp
D=vt+1/2at^2
3. The attempt at a solution
First I split the vector into its vertical and horizontal components
sin50=x/52
52sin50=x
x=39.83m/s (up)
cos50=x/52
52cos50=x
x=33.4m/s (Horizontal)
Then I use the kinematic equation to find the amount of time it is in the air, considering its total vertical displacement will be 0:
dv = v1*t + (1/2)a*t^2
0 = (39.83 m/s)(t) + (1/2)(-9.8 m/s^2)t^2
I then factor out "t" and this is where I'm having an issue. I get my equation to look like this:
0 = t(39.83 m/s - 4.9 m/s^2 * t)
and the supposed answer is "0 or 8.1"; however, I'm unable to get this same result. I may be forgetting something, so if anyone could explain how to get this number I'd greatly appreciate it.
Last edited: Feb 18, 2016
2. Feb 18, 2016
### Dr. Courtney
You need to take care to express both the vertical and the horizontal equations of motion and to distinguish them from each other. Your symbols are also confusing position and velocity.
3. Feb 18, 2016
### Gianmarco
If your $V_0 = 62\frac{km}{h}$ and your angle is 35°, why are you using $V_0 = 52\frac{km}{h}$ and an angle of 50°
4. Feb 18, 2016
### Krookwood
sorry i wrote the wrong question i've corrected it
5. Feb 18, 2016
### Gianmarco
I also think that the question isn't about how far the ball will travel, but after how much time it will hit the ground
6. Feb 18, 2016
### Krookwood
yeah sorry, I have a lot of work to do and I'm making mistakes. sorry.
7. Feb 18, 2016
### Gianmarco
It's okay, just look at your result
and think of what the problem is asking you
8. Feb 18, 2016
### Krookwood
I'm having severe math block and honestly can't figure it out. This is a support question for an online class, so after you give them an answer they give you the correct answer and the steps to follow; it takes about 3 days for a teacher to reply, so essentially I have no one to ask. Everything is right for me until this point, and I'm unsure what jump they made to get "0 or 8.1".
What steps do you use to combine 39.83 m/s with -4.9 m/s^2?
9. Feb 18, 2016
### Gianmarco
You want to find the time the ball will take to hit the ground during its motion. The equation for displacement along the y axis is $y(t) = v_yt-\frac{g}{2}t^2$. By setting $y(t) = 0$, like you did, you will find the time the ball takes to hit the ground. $0 = t(v_y - \frac{g}{2}t)$. Both sides of the equation should be zero, so you have to find the values of t(it's a quadratic equation, so you expect two solutions) for which the right side is zero.
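Concretely, with $v_y \approx 39.83\ \mathrm{m/s}$ and $g = 9.8\ \mathrm{m/s^2}$, setting the right side to zero gives the two roots:
$$0 = t\,(39.83 - 4.9\,t) \;\Longrightarrow\; t = 0 \quad\text{or}\quad t = \frac{39.83}{4.9} \approx 8.1\ \mathrm{s}.$$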
10. Feb 18, 2016
### Krookwood
Thanks so much for the help. I don't know how I didn't realise that I needed to solve the quadratic, especially considering I was given two possible results.
11. Feb 18, 2016
### inderr
Its easy. Just do 52mph * 50/ph (angle), and you get 50*52 seconds which is 8 minutes
source: pHd in kemistry at harverd |