# The Zero Matrix

A null matrix (or zero matrix) is a matrix all of whose entries are zero. The name of a zero matrix is a bold-face zero, 0, although sometimes people forget to make it bold face. For example, the 3×3 zero matrix is

$$\begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$$

For a given size m×n (and a given ring of entries K) there is exactly one zero matrix, so when the context is clear one simply speaks of "the" zero matrix. The zero matrix is the additive identity for matrix addition: A + 0 = 0 + A = A for every matrix A of the same size. Because it plays the role that the number 0 plays in ordinary algebra, the zero matrix allows for simple solutions to algebraic equations involving matrices, and it behaves in the expected way under matrix addition, subtraction, and scalar multiplication. Note, however, that if the product of two matrices is a zero matrix, it is not necessary that one of the factors is a zero matrix. A related exercise: given 2×2 matrices A and B satisfying A = AB − BA, prove that A² is the zero matrix (the key idea is the Cayley–Hamilton theorem for 2×2 matrices).

Some related definitions and facts that come up alongside the zero matrix:

• A nonzero matrix is a matrix with at least one nonzero entry; a nonzero vector is a vector whose magnitude is not zero. In any number of dimensions, the zero vector is the vector each of whose components is zero.

• The n×n zero matrix has 0 as its only eigenvalue, since its characteristic polynomial is $x^n$; the diagonal matrix with diagonal entries 1, 1, 0 has only the eigenvalues 0 and 1, since its characteristic polynomial is $x(x-1)^2$.

• A diagonal matrix is a square matrix $D = [d_{ij}]_{n \times n}$ with $d_{ij} = 0$ whenever $i \neq j$; that is, every entry off the principal diagonal is zero.

• A matrix is said to be in Jordan form if (1) its diagonal entries are equal to its eigenvalues, (2) its supradiagonal entries are either zeros or ones, and (3) all its other entries are zeros. Every matrix is equivalent to a matrix in Jordan form.

• A square matrix A of order n is invertible (non-singular) if there exists a square matrix B of order n such that AB = BA = I; one then writes $B = A^{-1}$. A square matrix is invertible if and only if its determinant is non-zero, so a singular matrix has no inverse. The determinant is a number defined only for square matrices; like its name suggests, it "determines" things, and determinants have wide applications in engineering, science, and economics.

• A matrix is in reduced row-echelon form when all of the conditions of row-echelon form are met and all elements above, as well as below, the leading ones are zero; if a matrix has a row of all zeros, that row appears at the bottom of the matrix.

• The homogeneous system with zero coefficient matrix,
\begin{align} \quad \begin{bmatrix} 0\\ 0 \end{bmatrix} = \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix}, \end{align}
is satisfied by every vector $(x_1, x_2)$.

• In control theory, the analogous property of a zero of a system is that the transfer function matrix loses rank.

(Outside of mathematics, "Unimatrix Zero" was a virtual construct and resistance movement created by a group of Borg drones; after it was shut down, drones formerly connected to Unimatrix Zero continued to resist the Borg Collective.)
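As a minimal NumPy illustration of these facts (my own sketch; the particular matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Z = np.zeros((2, 2))                      # the 2x2 zero matrix

# The zero matrix is the additive identity: A + 0 = 0 + A = A.
assert np.array_equal(A + Z, A) and np.array_equal(Z + A, A)

# Multiplying by the scalar 0 gives the zero matrix.
assert np.array_equal(0 * A, Z)

# A product of two nonzero matrices can be the zero matrix.
B = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.array_equal(B @ B, Z)

# A singular matrix (determinant zero) has no inverse.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])                # second row is twice the first
print(np.linalg.det(S))                   # prints (approximately) 0.0
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as err:
    print("not invertible:", err)
```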
|
# Hooks in a rectangle: Part II
This problem is a follow up on my other MO question.
On the basis of experimental data, I'm prompted to ask:
Question. Let $R(a,b)$ be an $a\times b$ rectangular grid and $h_{\square}$ the hook-length of a cell $\square$ in the Young diagram of a partition. Then,
$$\sum_{\mu\subset R(a,b)}\sum_{\square\in\mu}h_{\square}=\binom{a+b-2}{a-1}\binom{a+b+1}3.$$ Is this true? If so, any proof?
• This looks like a problem about lattice paths in a rectangle, not so much about Young diagrams. After all, partitions that fit inside a rectangle are in bijection with lattice paths between two opposite corners of a rectangle. A hook can be subdivided into its arm and leg (and its source). I suspect that the sum of arms and the sum of legs can be computed separately. – darij grinberg May 7 '17 at 5:26
• Yes, just replace the rhs by $\binom{a+b-3}{a-1}\binom{a+b}{3}$ – Martin Rubey May 7 '17 at 8:27
As Darij suggests, we may count separately the sum of legs and the sum of arms. Denote the row sizes $t_1,t_1+t_2,\dots,t_1+\dots+t_a$, where the $t_i$, $1\leqslant i\leqslant a+1$, are non-negative integers and $\sum_{i=1}^{a+1} t_i=b$. Then the sum of arms (let us say that the arm includes half of the hook's source cell and the leg the other half) equals $\sum_{i=1}^a \frac12(t_1+\dots+t_i)^2$. By symmetry, we know that the sums of $t_i^2$, $t_it_j$ over all diagrams do not depend on $i$, or respectively on the pair $i<j$. From $\sum t_i=b$ we see that the sum of $t_i$ (with fixed $i$) equals $\binom{a+b}b\cdot \frac{b}{a+1}$. Next, since $t_1^2+t_1t_2+\dots+t_1t_{a+1}=t_1b$, we see that $X+aY=\binom{a+b}b\cdot \frac{b}{a+1}$, where $X=\sum t_1^2$, $Y=\sum t_1t_2$. For finding $X$ we may observe that it is a coefficient of $x^b$ in $(\sum_{i=0}^\infty i^2x^i)(\sum_{i=0}^\infty x^i)^a$. We have $\sum_{i=0}^\infty i^2x^i=(x+x^2)(1-x)^{-3}$, so $X=[x^b](x+x^2)(1-x)^{-a-3}=\binom{a+b+1}{a+2}+\binom{a+b}{a+2}$. The rest is straightforward.
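For what it is worth, here is a brute-force numerical check of the conjectured identity (my own Python sketch, independent of the argument above): it enumerates every partition inside the rectangle and compares both sides for small $a,b$.

```python
from math import comb
from itertools import product

def partitions_in_rectangle(a, b):
    """All partitions with at most a parts, each part at most b."""
    def rec(rows, max_part):
        if rows == 0:
            yield ()
            return
        for first in range(max_part, -1, -1):
            for rest in rec(rows - 1, first):
                yield (first,) + rest
    for mu in rec(a, b):
        yield tuple(part for part in mu if part > 0)

def hook_sum(mu):
    """Sum of hook lengths (arm + leg + 1) over all cells of the Young diagram of mu."""
    total = 0
    for i, row_len in enumerate(mu):
        for j in range(row_len):
            arm = row_len - j - 1
            leg = sum(1 for k in range(i + 1, len(mu)) if mu[k] > j)
            total += arm + leg + 1
    return total

for a, b in product(range(1, 6), repeat=2):
    lhs = sum(hook_sum(mu) for mu in partitions_in_rectangle(a, b))
    rhs = comb(a + b - 2, a - 1) * comb(a + b + 1, 3)
    assert lhs == rhs, (a, b, lhs, rhs)
print("identity holds for all 1 <= a, b <= 5")
```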
|
# How to increase memory on the FPGA board?
Situation
I'm running driver code (driver.cc) on the FPGA board (PYNQ-Z1), but it fails partway through at a call to the cma_alloc(size, cached) function, which starts returning a NULL pointer after a few runs. I believe this is because the board runs out of memory, since it has only 512 MB of DDR3.
To be more specific about the environment:
• The driver.cc is running on the processing system (PS) of the board, which is an ARM Cortex-A9 processor.
• The OS (Linux) is booted from a microSD card (8 GB) loaded with an image from this GitHub repository.
Question
What are some possible ways to solve this problem, or to increase the memory?
Note: I have read a similar question before, but the answers weren't quite clear to me. Also, to be more specific and to answer duskwuff's comment in the previous question, I have included information about the PS/PL and the OS setup environment.
• Could you use the RAM more efficiently? For example, the program might be using 4 bytes for some value where 1 byte would be enough. Or perhaps there are some memory deallocations missing. – Andrew Morton Feb 11 '19 at 21:03
• How much memory are you trying to allocate? – duskwuff -inactive- Feb 11 '19 at 21:04
• Back in the olden days, you could sometimes see something like this. I suppose doing that with the kind of packages used now is a little impractical, though. – Hearth Feb 11 '19 at 21:24
• How much memory are you trying to allocate, in total (Not just this call)? – user253751 Feb 11 '19 at 21:31
• 512 MB with embedded Linux is pretty good, really. I would recommend you first look for memory leaks, but your main problem is that you are trying to get CMA memory repeatedly - that gets harder to do over time. If you really want to use CMA, do it once and reuse the buffer, or use the device tree to reserve a block of memory out of the kernel - a so-called memory hole - especially if you HAVE to get coherent memory more than once. If you really want CMA, do it once, as soon as possible. But the memory hole is safest, in my opinion. I wrote a DMA driver yesterday using this method. – johnnymopo Feb 11 '19 at 21:36
## 1 Answer
The RAM on the PYNQ-Z1 is a 256M x 16 DDR3 module. The same manufacturer makes a 512M x 16 part in the same package. I haven't torn through the datasheet to see whether the two are directly compatible, but it looks close at first glance. If the chips are indeed compatible, you would need to find a BGA rework shop to do the replacement.
PYNQ-Z1 DDR3 Schematic
It would be much easier to adjust your algorithm to use less memory; if you can tolerate slower access times for some of the data, store it on the SD card instead.
Another avenue would be to use a slower serial RAM attached to the GPIO pins, but this is also not recommended: you would have to find a compatible RAM (at a glance, Arduino shields may be compatible with the PYNQ, and serial RAM shields do exist), install the shield, write a hardware driver for it, and then map it into your program's memory.
|
HERE SDK for Android (Premium Edition)
SDK for Android Developer's Guide
# Objects and Interaction
You can select ViewObject objects by using a single tap gesture. To enable this in your code, create an OnGestureListener object and pass it to AndroidXMapFragment.getMapGesture().addOnGestureListener(OnGestureListener). When a single tap occurs, the listener receives the onTapEvent(PointF) callback, and if that event is not handled, then the listener receives the onMapObjectsSelected(List<ViewObject>) callback. The application can then define what to do with the selected ViewObject.
The types of ViewObject objects that are selectable are defined within the ViewObject.Type enumeration, which includes the following:
• USER_OBJECT - an object the application adds to a map with a MapObject base class (MapPolygon for example).
• PROXY_OBJECT - an object that is added automatically to a map with a MapProxyObject base class. A proxy object may contain special information about the object depending on the type (for example, TransitStopObject may provide transit stop related information) but it cannot be created or modified.
• UNKNOWN_OBJECT - a selectable map object that is neither USER_OBJECT nor PROXY_OBJECT.
## Map Objects Example on GitHub
You can find an example that demonstrates this feature at https://github.com/heremaps/.
## The ViewObject Abstract Class
The ViewObject abstract class represents the base implementation for all objects selectable on a MapView or AndroidXMapFragment. The AndroidXMapFragment features user-selectable objects.
Sub-classes of the ViewObject class include MapObject and MapProxyObject.
## MapObject and Geo Objects
MapObject represents an abstract class for all map related objects that can be added on a Map. The subclasses of this abstract class include the following:
• MapContainer
• MapCircle
• MapPolyline
• MapPolygon
• MapRoute
• MapMarker
• MapLocalModel
• MapGeoModel
• MapLabeledMarker
• MapScreenMarker
These objects can be created by calling the appropriate constructor methods. In some cases a geo object is required in the constructor. Geo objects (for example, GeoPolyline and GeoPolygon) are geographical data representations that act as models to MapObjects, which act as views. Unlike map objects, geo objects cannot be added directly to a Map. For more information on geo objects and creating map objects see the API Reference.
The following code snippet demonstrates how to create a MapPolyline and a GeoPolyline object:
List<GeoCoordinate> testPoints = new ArrayList<GeoCoordinate>();
testPoints.add(new GeoCoordinate(49.163, -123.137766, 10));
testPoints.add(new GeoCoordinate(59.163, -123.137766, 10));
testPoints.add(new GeoCoordinate(60.163, -123.137766, 10));
GeoPolyline polyline = new GeoPolyline(testPoints);
MapPolyline mapPolyline = new MapPolyline(polyline);
To add a MapObject to the map, use Map.addMapObject(MapObject) or Map.addMapObjects(List<MapObject>). You can use the setOverlayType(MapOverlayType) method to set the display layer for the map object. By default map objects are assigned to the foreground.
Note: For use cases where a map object needs to be viewable in 3D space use MapLocalModel or MapGeoModel. Other map objects are not guaranteed to support 3D.
## MapContainer
You can use MapContainer as a container for other MapObject instances. Map containers determine the stacking order of objects displayed on a map. To add a map object, call the MapContainer.addMapObject(MapObject) method.
Note: MapRoute and MapContainer cannot be added to a MapContainer.
Note: If a map object is a part of a MapContainer, it has the same MapOverlayType as the map container.
## MapCircle
A MapCircle represents a type of MapObject in the shape of a circle with an assigned radius distance and a GeoCoordinate center. It can be created by calling the constructor MapCircle(double radius, GeoCoordinate center).
## MapPolyline
A MapPolyline is a MapObject in the shape of a polyline with anchor points at any number of GeoCoordinate points. It can be created via a GeoPolyline object, which can be created by calling the GeoPolyline(List<GeoCoordinate> points) constructor.
Note: A MapPolyline or MapPolygon can only contain up to 65536 vertices.
## MapPolygon
A MapPolygon is a MapObject in the shape of a polygon. In contrast to a MapPolyline it is assumed that the last coordinate in the line path is connected to the first coordinate thereby constructing an enclosed geometry. A MapPolygon may have separate border and fill colors. To create a MapPolygon, use the constructor MapPolygon(GeoPolygon polygon). A GeoPolygon can be created by calling GeoPolygon(List<GeoCoordinate> points).
## MapRoute
A MapRoute is a MapObject that displays a calculated route on a map. For more information on MapRoute see Routing .
## MapMarker
A MapMarker is a MapObject that displays an icon at a geographical position on a map. You can create a MapMarker with your own custom icon by calling MapMarker(GeoCoordinate, Image).
MapMarker instances are always placed on top of other map objects. Refer to the diagram below for more information about z-index ordering for multiple map markers.
You can set MapMarker to be draggable by using the MapMarker.setDraggable(true) method. To listen to drag events, such as marker position changes, use MapMarker.OnDragListener.
## MapLabeledMarker
A MapLabeledMarker is a different type of marker object that avoids overlapping with other icons and text on the map. By default the visual appearance of a MapLabeledMarker is similar to a point of interest. You can choose a preset category icon (for example, IconCategory.ZOO) or set your own Image as the marker icon.
Unlike MapMarker, setting the label text to a MapLabeledMarker does not require enabling an info bubble. You can set the marker label text by providing a language and a localized string to the MapLabeledMarker.setLabelText(String, String) method. The localized text in the language that matches the current Map.getMapDisplayLanguage() is displayed (if available). Otherwise, the first added localized text is displayed.
Note: Although a MapLabeledMarker is visually similar to a point of interest, its overlay type is set to FOREGROUND_OVERLAY by default.
## MapLocalModel
A MapLocalModel is an arbitrary 3D map object drawn using a local coordinate (as opposed to a geocoordinate) mesh. You can create a custom MapLocalModel by calling MapLocalModel() and setting the model mesh, texture, orientation, and geographical location before adding it to the map. You can set geographical location by providing a geocoordinate to the MapLocalModel.setAnchor(GeoCoordinate) method. For example:
FloatBuffer buff = FloatBuffer.allocate(12); // Two triangles
buff.put(0 - delta);
buff.put(0 - delta);
buff.put(1.f);
buff.put(0 + delta);
buff.put(0 - delta);
buff.put(1.f);
buff.put(0 - delta);
buff.put(0 + delta);
buff.put(1.f);
buff.put(0 + delta);
buff.put(0 + delta);
buff.put(1.f);
// Two triangles to generate the rectangle. Both front and back face
IntBuffer vertIndicieBuffer = IntBuffer.allocate(12);
vertIndicieBuffer.put(0);
vertIndicieBuffer.put(2);
vertIndicieBuffer.put(1);
vertIndicieBuffer.put(2);
vertIndicieBuffer.put(3);
vertIndicieBuffer.put(1);
vertIndicieBuffer.put(0);
vertIndicieBuffer.put(1);
vertIndicieBuffer.put(2);
vertIndicieBuffer.put(1);
vertIndicieBuffer.put(3);
vertIndicieBuffer.put(2);
// Texture coordinates
FloatBuffer textCoordBuffer = FloatBuffer.allocate(8);
textCoordBuffer.put(0.f);
textCoordBuffer.put(0.f);
textCoordBuffer.put(1.f);
textCoordBuffer.put(0.f);
textCoordBuffer.put(0.f);
textCoordBuffer.put(1.f);
textCoordBuffer.put(1.f);
textCoordBuffer.put(1.f);
LocalMesh myMesh = new LocalMesh();
myMesh.setVertices(buff);
myMesh.setVertexIndices(vertIndicieBuffer);
myMesh.setTextureCoordinates(textCoordBuffer);
MapLocalModel myObject = new MapLocalModel();
myObject.setMesh(myMesh); //a LocalMesh object
myObject.setTexture(myImage); //an Image object
myObject.setAnchor(myLocation); //a GeoCoordinate object
myObject.setScale(20.0f);
myObject.setDynamicScalingEnabled(true);
myObject.setYaw(45.0f);
map.addMapObject(myObject);
When translating the 3D model mesh to the map a unit of 1.0f represents 1 meter in the real world. For example, a Vector3f(100,200,300) represents an offset of +100 meters in the x-axis (East), +200 meters in the y-axis (North), and +300 meters in the z-axis direction (Up). You can further control the size of the 3D model mesh by setting a scaling factor with the setScale() method.
Aside from setting a texture, a MapLocalModel can also be customized by setting its material and lighting using the Phong reflection model. For example, the following code sets the ambient color, diffuse color, and light source to the MapLocalModel.
// This light shines from above in the Z axis
DirectionalLight light = new DirectionalLight(new Vector3f(0, 0.5f, 1));
// Give this a default color
PhongMaterial mat = new PhongMaterial();
mat.setAmbientColor(0xffffffff);
mat.setDiffuseColor(0x00000000);
m_model.setMaterial(mat);
Note:
• As 3D objects consume large amounts of memory, avoid using MapLocalModel and MapGeoModel to replace 2D map markers. Two examples of recommended uses of these classes are adding a few 3D structures to the map or showing a realistic car model during guidance.
• If you use MapLocalModel to create a two-dimensional object and an anchor with an undefined or zero altitude value, there is a known rendering issue with OpenGL where parts of the object may conflict with the map layer causing the object to flicker. To get around this issue, use a z-coordinate offset that is greater than 0. For example, you can use a small floating point number such as 0.001 so that the user is unable to distinguish between the object altitude and the map.
## MapGeoModel
A MapGeoModel is an arbitrary 3D map object drawn using geocoordinate vertices. You can create a MapGeoModel by calling its constructor and setting a list of geocoordinates, a list indicating the vertex order, a list of UV coordinates, and a texture Image. For example:
List<GeoCoordinate> myLocations = Arrays.asList(
new GeoCoordinate(37.783409, -122.439473),
new GeoCoordinate(37.785444, -122.424667),
new GeoCoordinate(37.774149, -122.429345));
// vertices must be specified in a counter-clockwise manner
IntBuffer vertIndicieBuffer = IntBuffer.allocate(3);
vertIndicieBuffer.put(0);
vertIndicieBuffer.put(2);
vertIndicieBuffer.put(1);
FloatBuffer textCoordBuffer = FloatBuffer.allocate(6);
textCoordBuffer.put(0.5f);
textCoordBuffer.put(0.5f);
textCoordBuffer.put(0.5f);
textCoordBuffer.put(0.5f);
textCoordBuffer.put(0.5f);
textCoordBuffer.put(0.5f);
GeoMesh meshy = new GeoMesh();
meshy.setVertices(myLocations);
meshy.setVertexIndices(vertIndicieBuffer);
meshy.setTextureCoordinates(textCoordBuffer);
MapGeoModel myGeoModel = new MapGeoModel();
myGeoModel.setMesh(meshy);
myGeoModel.setTexture(myTexture);
As with MapLocalModel, you can also set the lighting and color properties for a MapGeoModel using the addLight(DirectionalLight) and setMaterial(PhongMaterial) methods.
## MapCartoMarker
Points of interest are represented by instances of the MapCartoMarker proxy object class.
In the above screenshot there are four points of interest: two shops, one restaurant, and one car dealership. Each of these points of interest may be selected by tapping on the map.
The following is an example of how to retrieve point of interest information from a MapCartoMarker:
switch (proxyObj.getType()) {
case MAP_CARTO_MARKER:
MapCartoMarker mapCartoMarker =
(MapCartoMarker) proxyObj;
Location location = mapCartoMarker.getLocation();
String placeName =
location.getInfo().getField(Field.PLACE_NAME);
String placeCategory =
location.getInfo().getField(Field.PLACE_CATEGORY);
String placePhone =
location.getInfo().getField(Field.PLACE_PHONE_NUMBER);
//...
break;
//...
default:
Log.d(TAG, "ProxyObject.getType() unknown");
}
You can extract further Point of Interest (POI) information from a cartographic marker by using the Places feature in the HERE SDK, since cartographic markers contain identification data that can be passed to a Place search request. For example:
if (mapCartoMarker.getLocation() != null &&
mapCartoMarker.getLocation().getInfo() != null)
{
LocationInfo info = mapCartoMarker.getLocation().getInfo();
String foreignSource = info.getField(Field.FOREIGN_ID_SOURCE);
String foreignId = info.getField(Field.FOREIGN_ID);
PlaceRequest request = new PlaceRequest(foreignSource, foreignId);
request.execute(new ResultListener<Place>() {
@Override
public void onCompleted(Place data, ErrorCode error) {
if (error == ErrorCode.NONE) {
// extract Place data
}
}
});
}
## User Interactions with MapObject
This section provides an example of handling MapObject tap events. In the following code:
• addMapObject() adds the object on the Map.
• List<ViewObject> holds the objects that have been selected in this tap event. By looping through this list of objects your code can find the MapObject that should respond to this tap event.
Note: The onMapObjectsSelected(List) callback is triggered after the onTapEvent(PointF) callback. For more information on this refer to Map Gestures.
// Create a custom marker image
com.here.android.mpa.common.Image myImage =
new com.here.android.mpa.common.Image();
try {
myImage.setImageResource(R.drawable.my_png);
} catch (IOException e) {
finish();
}
// Create the MapMarker
MapMarker myMapMarker =
new MapMarker(new GeoCoordinate(LAT, LNG), myImage);
...
// Create a gesture listener and add it to the AndroidXMapFragment
MapGesture.OnGestureListener listener =
        new MapGesture.OnGestureListener.OnGestureListenerAdapter() {
@Override
public boolean onMapObjectsSelected(List<ViewObject> objects) {
for (ViewObject viewObj : objects) {
if (viewObj.getBaseType() == ViewObject.Type.USER_OBJECT) {
if (((MapObject)viewObj).getType() == MapObject.Type.MARKER) {
// At this point we have the originally added
// map marker, so we can do something with it
// (like change the visibility, or more
// marker-specific actions)
((MapObject)viewObj).setVisible(false);
}
}
}
// return false to allow the map to handle this callback also
return false;
}
...
};
## The MapOverlay Class
The MapOverlay class represents a special type of map object that does not inherit from the MapObject base class. Instead, it provides a way for any Android View to be displayed at a fixed geographical location on the map.
You can add content to a map overlay by using the MapOverlay(View, GeoCoordinate) constructor. If complex view contents are required, such as a view with subviews of its own, the content should be fully initialized before adding it to the map overlay.
Due to the extra performance cost of Android views it is recommended that the MapOverlay only be used in situations where the additional functionality provided by a View, such as a button, is needed. If the map object only needs to display a static image, use MapMarker.
Note: MapOverlay does not inherit from MapObject but overlays are returned as a MapMarker from a tap gesture callback by default. To avoid this behavior and these substitute markers, the appropriate gesture handling must be implemented either in a MapOverlay subclass, or in a custom view added as a subview to a standard MapOverlay.
The following code shows how to use a simple button in a MapOverlay.
private Button button;
private void onMapFragmentInitializationCompleted() {
// retrieve a reference of the map from the map fragment
map = mapFragment.getMap();
// Set the map center coordinate to the Vancouver region (no animation)
map.setCenter(new GeoCoordinate(49.196261, -123.004773, 0.0),
Map.Animation.NONE);
// Set the map zoom level to the average between min and max (no
// animation)
map.setZoomLevel((map.getMaxZoomLevel() + map.getMinZoomLevel()) / 2);
// create the button
button = new Button(this);
button.setText("TEST");
// create overlay and add it to the map
map.addMapOverlay(new MapOverlay(button,
new GeoCoordinate(37.77493, -122.419416, 0.0)));
}
## Handling MapProxyObject objects
The following code demonstrates how to handle tap events on a MapProxyObject:
• The onMapObjectsSelected callback of the OnGestureListener is invoked when objects are selected. For more information on OnGestureListener, refer to Map Gestures.
• If the selected object is a PROXY_OBJECT, then you can safely cast the ViewObject into a MapProxyObject.
• If the selected object is a USER_OBJECT, then you need to find the object using the hash map; refer to the preceding example.
private MapGesture.OnGestureListener listener =
        new MapGesture.OnGestureListener.OnGestureListenerAdapter() {
...
@Override
public boolean onMapObjectsSelected(List<ViewObject> objects) {
for (ViewObject obj : objects) {
switch (obj.getBaseType()) {
case PROXY_OBJECT:
MapProxyObject proxyObj = (MapProxyObject) obj;
switch (proxyObj.getType()) {
case TRANSIT_ACCESS:
TransitAccessObject transitAccessObj =
(TransitAccessObject) proxyObj;
Log.d(TAG, "Found a TransitAccessObject");
break;
case TRANSIT_LINE:
TransitLineObject transitLineObj =
(TransitLineObject) proxyObj;
Log.d(TAG, "Found a TransitLineObject");
break;
case TRANSIT_STOP:
TransitStopObject transitStopObj =
(TransitStopObject) proxyObj;
Log.d(TAG, "Found a TransitStopObject");
break;
default:
Log.d(TAG, "ProxyObject.getType() unknown");
}
break;
// User objects are more likely to be handled
// as in the previous example
case USER_OBJECT:
default:
Log.d(TAG,
"ViewObject.getBaseType() is USER_OBJECT or unknown");
break;
}
}
return true;
}
...
};
|
The (classical) Möbius function ${\mu: {\bf N} \rightarrow {\bf Z}}$ is the unique function that obeys the classical Möbius inversion formula:
Proposition 1 (Classical Möbius inversion) Let ${f,g: {\bf N} \rightarrow A}$ be functions from the natural numbers to an additive group ${A}$. Then the following two claims are equivalent:
• (i) ${f(n) = \sum_{d|n} g(d)}$ for all ${n \in {\bf N}}$.
• (ii) ${g(n) = \sum_{d|n} \mu(n/d) f(d)}$ for all ${n \in {\bf N}}$.
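As a quick numerical illustration (my own minimal Python sketch, not part of the post): starting from an arbitrary integer-valued ${g}$, one can form ${f}$ via (i) and then recover ${g}$ via (ii).

```python
def mobius(n):
    """Classical Möbius function: (-1)^k if n is a product of k distinct primes, else 0."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:          # p^2 divided the original n
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

N = 300
g = {n: n * n - 3 * n + 7 for n in range(1, N + 1)}     # an arbitrary integer-valued g

# (i): f(n) = sum_{d | n} g(d)
f = {n: sum(g[d] for d in range(1, n + 1) if n % d == 0) for n in range(1, N + 1)}

# (ii): g(n) = sum_{d | n} mu(n/d) f(d) recovers g
g_recovered = {n: sum(mobius(n // d) * f[d] for d in range(1, n + 1) if n % d == 0)
               for n in range(1, N + 1)}

assert g_recovered == g
```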
There is a generalisation of this formula to (finite) posets, due to Hall, in which one sums over chains ${n_0 > \dots > n_k}$ in the poset:
Proposition 2 (Poset Möbius inversion) Let ${{\mathcal N}}$ be a finite poset, and let ${f,g: {\mathcal N} \rightarrow A}$ be functions from that poset to an additive group ${A}$. Then the following two claims are equivalent:
• (i) ${f(n) = \sum_{d \leq n} g(d)}$ for all ${n \in {\mathcal N}}$, where ${d}$ is understood to range in ${{\mathcal N}}$.
• (ii) ${g(n) = \sum_{k=0}^\infty (-1)^k \sum_{n = n_0 > n_1 > \dots > n_k} f(n_k)}$ for all ${n \in {\mathcal N}}$, where in the inner sum ${n_0,\dots,n_k}$ are understood to range in ${{\mathcal N}}$ with the indicated ordering.
(Note from the finite nature of ${{\mathcal N}}$ that the inner sum in (ii) is vacuous for all but finitely many ${k}$.)
Comparing Proposition 2 with Proposition 1, it is natural to refer to the function ${\mu(d,n) := \sum_{k=0}^\infty (-1)^k \sum_{n = n_0 > n_1 > \dots > n_k = d} 1}$ as the Möbius function of the poset; the condition (ii) can then be written as
$\displaystyle g(n) = \sum_{d \leq n} \mu(d,n) f(d).$
Proof: If (i) holds, then we have
$\displaystyle g(n) = f(n) - \sum_{d < n} g(d) \ \ \ \ \ (1)$
for any ${n \in {\mathcal N}}$. Iterating this we obtain (ii). Conversely, from (ii) and separating out the ${k=0}$ term, and grouping all the other terms based on the value of ${d:=n_1}$, we obtain (1), and hence (i). $\Box$
In fact it is not completely necessary that the poset ${{\mathcal N}}$ be finite; an inspection of the proof shows that it suffices that every element ${n}$ of the poset has only finitely many predecessors ${\{ d \in {\mathcal N}: d < n \}}$.
It is not difficult to see that Proposition 2 includes Proposition 1 as a special case, after verifying the combinatorial fact that the quantity
$\displaystyle \sum_{k=0}^\infty (-1)^k \sum_{d=n_k | n_{k-1} | \dots | n_1 | n_0 = n} 1$
is equal to ${\mu(n/d)}$ when ${d}$ divides ${n}$, and vanishes otherwise.
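This fact is easy to check numerically; the following self-contained Python sketch (my own check, not from the post) computes the alternating count of divisor chains and compares it with the classical Möbius function.

```python
def classical_mobius(n):
    """(-1)^k if n is squarefree with k prime factors, 0 otherwise."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def chain_sum(d, n):
    """sum_k (-1)^k #{chains n = n_0 > n_1 > ... > n_k = d in the divisor poset},
    computed by conditioning on the first step n_1 = m."""
    if d == n:
        return 1                               # only the trivial chain, k = 0
    return -sum(chain_sum(d, m)
                for m in range(d, n)
                if n % m == 0 and m % d == 0)  # d <= m < n with d | m and m | n

for n in range(1, 100):
    for d in range(1, n + 1):
        if n % d == 0:
            assert chain_sum(d, n) == classical_mobius(n // d), (d, n)
print("alternating chain count equals mu(n/d) for all d | n < 100")
```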
I recently discovered that Proposition 2 can also lead to a useful variant of the inclusion-exclusion principle. The classical version of this principle can be phrased in terms of indicator functions: if ${A_1,\dots,A_\ell}$ are subsets of some set ${X}$, then
$\displaystyle \prod_{j=1}^\ell (1-1_{A_j}) = \sum_{k=0}^\ell (-1)^k \sum_{1 \leq j_1 < \dots < j_k \leq \ell} 1_{A_{j_1} \cap \dots \cap A_{j_k}}.$
In particular, if there is a finite measure ${\nu}$ on ${X}$ for which ${A_1,\dots,A_\ell}$ are all measurable, we have
$\displaystyle \nu(X \backslash \bigcup_{j=1}^\ell A_j) = \sum_{k=0}^\ell (-1)^k \sum_{1 \leq j_1 < \dots < j_k \leq \ell} \nu( A_{j_1} \cap \dots \cap A_{j_k} ).$
One drawback of this formula is that there are exponentially many terms on the right-hand side: ${2^\ell}$ of them, in fact. However, in many cases of interest there are “collisions” between the intersections ${A_{j_1} \cap \dots \cap A_{j_k}}$ (for instance, perhaps many of the pairwise intersections ${A_i \cap A_j}$ agree), in which case there is an opportunity to collect terms and hopefully achieve some cancellation. It turns out that it is possible to use Proposition 2 to do this, in which one only needs to sum over chains in the resulting poset of intersections:
Proposition 3 (Hall-type inclusion-exclusion principle) Let ${A_1,\dots,A_\ell}$ be subsets of some set ${X}$, and let ${{\mathcal N}}$ be the finite poset formed by intersections of some of the ${A_i}$ (with the convention that ${X}$ is the empty intersection), ordered by set inclusion. Then for any ${E \in {\mathcal N}}$, one has
$\displaystyle 1_E \prod_{F \subsetneq E} (1 - 1_F) = \sum_{k=0}^\ell (-1)^k \sum_{E = E_0 \supsetneq E_1 \supsetneq \dots \supsetneq E_k} 1_{E_k} \ \ \ \ \ (2)$
where ${F, E_0,\dots,E_k}$ are understood to range in ${{\mathcal N}}$. In particular (setting ${E}$ to be the empty intersection) if the ${A_j}$ are all proper subsets of ${X}$ then we have
$\displaystyle \prod_{j=1}^\ell (1-1_{A_j}) = \sum_{k=0}^\ell (-1)^k \sum_{X = E_0 \supsetneq E_1 \supsetneq \dots \supsetneq E_k} 1_{E_k}. \ \ \ \ \ (3)$
In particular, if there is a finite measure ${\nu}$ on ${X}$ for which ${A_1,\dots,A_\ell}$ are all measurable, we have
$\displaystyle \nu(X \backslash \bigcup_{j=1}^\ell A_j) = \sum_{k=0}^\ell (-1)^k \sum_{X = E_0 \supsetneq E_1 \supsetneq \dots \supsetneq E_k} \nu(E_k).$
Using the Möbius function ${\mu}$ on the poset ${{\mathcal N}}$, one can write these formulae as
$\displaystyle 1_E \prod_{F \subsetneq E} (1 - 1_F) = \sum_{F \subseteq E} \mu(F,E) 1_F,$
$\displaystyle \prod_{j=1}^\ell (1-1_{A_j}) = \sum_F \mu(F,X) 1_F$
and
$\displaystyle \nu(X \backslash \bigcup_{j=1}^\ell A_j) = \sum_F \mu(F,X) \nu(F).$
Proof: It suffices to establish (2) (to derive (3) from (2) observe that all the ${F \subsetneq X}$ are contained in one of the ${A_j}$, so the effect of ${1-1_F}$ may be absorbed into ${1 - 1_{A_j}}$). Applying Proposition 2, this is equivalent to the assertion that
$\displaystyle 1_E = \sum_{F \subseteq E} 1_F \prod_{G \subsetneq F} (1 - 1_G)$
for all ${E \in {\mathcal N}}$. But this amounts to the assertion that for each ${x \in E}$, there is precisely one ${F \subseteq E}$ in ${{\mathcal N}}$ with the property that ${x \in F}$ and ${x \not \in G}$ for any ${G \subsetneq F}$ in ${{\mathcal N}}$, namely one can take ${F}$ to be the intersection of all ${G \subseteq E}$ in ${{\mathcal N}}$ such that ${G}$ contains ${x}$. $\Box$
Example 4 If ${A_1,A_2,A_3 \subsetneq X}$ with ${A_1 \cap A_2 = A_1 \cap A_3 = A_2 \cap A_3 = A_*}$, and ${A_1,A_2,A_3,A_*}$ are all distinct, then we have for any finite measure ${\nu}$ on ${X}$ that makes ${A_1,A_2,A_3}$ measurable that
$\displaystyle \nu(X \backslash (A_1 \cup A_2 \cup A_3)) = \nu(X) - \nu(A_1) - \nu(A_2) \ \ \ \ \ (4)$
$\displaystyle - \nu(A_3) - \nu(A_*) + 3 \nu(A_*)$
due to the four chains ${X \supsetneq A_1}$, ${X \supsetneq A_2}$, ${X \supsetneq A_3}$, ${X \supsetneq A_*}$ of length one, and the three chains ${X \supsetneq A_1 \supsetneq A_*}$, ${X \supsetneq A_2 \supsetneq A_*}$, ${X \supsetneq A_3 \supsetneq A_*}$ of length two. Note that this expansion just has six terms in it, as opposed to the ${2^3=8}$ given by the usual inclusion-exclusion formula, though of course one can reduce the number of terms by combining the ${\nu(A_*)}$ factors. This may not seem particularly impressive, especially if one views the term ${3 \nu(A_*)}$ as really being three terms instead of one, but if we add a fourth set ${A_4 \subsetneq X}$ with ${A_i \cap A_j = A_*}$ for all ${1 \leq i < j \leq 4}$, the formula now becomes
$\displaystyle \nu(X \backslash (A_1 \cup A_2 \cup A_3 \cup A_4)) = \nu(X) - \nu(A_1) - \nu(A_2) \ \ \ \ \ (5)$
$\displaystyle - \nu(A_3) - \nu(A_4) - \nu(A_*) + 4 \nu(A_*)$
and we begin to see more cancellation as we now have just seven terms (or ten if we count ${4 \nu(A_*)}$ as four terms) instead of ${2^4 = 16}$ terms.
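Here is a small Python sketch (my own toy check; the universe and the sets are arbitrary choices satisfying the hypotheses of Example 4) that implements the chain expansion of Proposition 3 for counting measure and compares it with a direct computation:

```python
from itertools import combinations

def intersection_poset(X, sets):
    """All distinct intersections of subfamilies of `sets`, with X as the empty intersection."""
    elements = {frozenset(X)}
    for r in range(1, len(sets) + 1):
        for family in combinations(sets, r):
            inter = frozenset(X)
            for A in family:
                inter &= A
            elements.add(inter)
    return elements

def chain_expansion(X, sets, nu):
    """Sum over chains X = E_0 ⊋ E_1 ⊋ ... ⊋ E_k in the intersection poset of (-1)^k nu(E_k)."""
    poset = intersection_poset(X, sets)

    def alternating_sum(E):
        # nu(E) for the trivial chain, minus the contributions of chains passing through each F ⊊ E
        return nu(E) - sum(alternating_sum(F) for F in poset if F < E)

    return alternating_sum(frozenset(X))

# A toy instance of Example 4: pairwise intersections all equal A_* = {0}.
X = set(range(10))
A1, A2, A3 = frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})
nu = len                                     # counting measure

direct = nu(X - (A1 | A2 | A3))              # elements of X in none of the A_j
via_chains = chain_expansion(X, [A1, A2, A3], nu)
assert direct == via_chains == 6
```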
Example 5 (Variant of Legendre sieve) If ${q_1,\dots,q_\ell > 1}$ are natural numbers, and ${a_1,a_2,\dots}$ is some sequence of complex numbers with only finitely many terms non-zero, then by applying the above proposition to the sets ${A_j := q_j {\bf N}}$ and with ${\nu}$ equal to counting measure weighted by the ${a_n}$ we obtain a variant of the Legendre sieve
$\displaystyle \sum_{n: (n,q_1 \dots q_\ell) = 1} a_n = \sum_{k=0}^\ell (-1)^k \sum_{1 |' d_1 |' \dots |' d_k} \sum_{n: d_k |n} a_n$
where ${d_1,\dots,d_k}$ range over the set ${{\mathcal N}}$ formed by taking least common multiples of the ${q_j}$ (with the understanding that the empty least common multiple is ${1}$), and ${d |' n}$ denotes the assertion that ${d}$ divides ${n}$ but is strictly less than ${n}$. I am curious to know if this version of the Legendre sieve already appears in the literature (and similarly for the other applications of Proposition 2 given here).
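Here is a quick numerical check of this identity (my own sketch, taking the ${q_j}$ to be distinct primes, so that ${(n,q_1 \dots q_\ell)=1}$ is the same as ${n}$ not lying in any ${A_j = q_j {\bf N}}$, and taking a random finitely supported integer sequence ${a_n}$):

```python
import random
from math import gcd
from itertools import combinations

q = [2, 3, 5, 7]                                         # distinct primes q_1, ..., q_l
N = 1000
a = [0] + [random.randint(-9, 9) for _ in range(N)]      # a_1, ..., a_N; a_n = 0 for n > N

# The poset: least common multiples of subsets of q, with the empty lcm equal to 1.
poset = {1}
for r in range(1, len(q) + 1):
    for subset in combinations(q, r):
        m = 1
        for x in subset:
            m = m * x // gcd(m, x)
        poset.add(m)

def chain_side(d):
    """sum_k (-1)^k over chains d |' d_1 |' ... |' d_k in the poset of sum_{d_k | n} a_n."""
    total = sum(a[n] for n in range(1, N + 1) if n % d == 0)   # the k = 0 (trivial chain) term
    for m in poset:
        if m != d and m % d == 0:                              # d |' m: first step of a longer chain
            total -= chain_side(m)
    return total

Q = 1
for x in q:
    Q *= x
left = sum(a[n] for n in range(1, N + 1) if gcd(n, Q) == 1)
assert left == chain_side(1)
```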
If the poset ${{\mathcal N}}$ has bounded depth then the number of terms in Proposition 3 can end up being just polynomially large in ${\ell}$ rather than exponentially large. Indeed, if all chains ${X \supsetneq E_1 \supsetneq \dots \supsetneq E_k}$ in ${{\mathcal N}}$ have length ${k}$ at most ${k_0}$ then the number of terms here is at most ${1 + \ell + \dots + \ell^{k_0}}$. (The examples (4), (5) are ones in which the depth is equal to two.) I hope to report in a later post on how this version of inclusion-exclusion with polynomially many terms can be useful in an application.
Actually in our application we need an abstraction of the above formula, in which the indicator functions are replaced by more abstract idempotents:
Proposition 6 (Hall-type inclusion-exclusion principle for idempotents) Let ${A_1,\dots,A_\ell}$ be pairwise commuting elements of some ring ${R}$ with identity, which are all idempotent (thus ${A_j A_j = A_j}$ for ${j=1,\dots,\ell}$). Let ${{\mathcal N}}$ be the finite poset formed by products of the ${A_i}$ (with the convention that ${1}$ is the empty product), ordered by declaring ${E \leq F}$ when ${EF = E}$ (note that all the elements of ${{\mathcal N}}$ are idempotent so this is a partial ordering). Then for any ${E \in {\mathcal N}}$, one has
$\displaystyle E \prod_{F < E} (1-F) = \sum_{k=0}^\ell (-1)^k \sum_{E = E_0 > E_1 > \dots > E_k} E_k. \ \ \ \ \ (6)$
where ${F, E_0,\dots,E_k}$ are understood to range in ${{\mathcal N}}$. In particular (setting ${E=1}$) if all the ${A_j}$ are not equal to ${1}$ then we have
$\displaystyle \prod_{j=1}^\ell (1-A_j) = \sum_{k=0}^\ell (-1)^k \sum_{1 = E_0 > E_1 > \dots > E_k} E_k.$
Morally speaking this proposition is equivalent to the previous one after applying a “spectral theorem” to simultaneously diagonalise all of the ${A_j}$, but it is quicker to just adapt the previous proof to establish this proposition directly. Using the Möbius function ${\mu}$ for ${{\mathcal N}}$, we can rewrite these formulae as
$\displaystyle E \prod_{F < E} (1-F) = \sum_{F \leq E} \mu(F,E) F$
and
$\displaystyle \prod_{j=1}^\ell (1-A_j) = \sum_F \mu(F,1) F.$
Proof: Again it suffices to verify (6). Using Proposition 2 as before, it suffices to show that
$\displaystyle E = \sum_{F \leq E} F \prod_{G < F} (1 - G) \ \ \ \ \ (7)$
for all ${E \in {\mathcal N}}$ (all sums and products are understood to range in ${{\mathcal N}}$). We can expand
$\displaystyle E = E \prod_{G < E} (G + (1-G)) = \sum_{{\mathcal A}} (\prod_{G \in {\mathcal A}} G) (\prod_{G < E: G \not \in {\mathcal A}} (1-G)) \ \ \ \ \ (8)$
where ${{\mathcal A}}$ ranges over all subsets of ${\{ G \in {\mathcal N}: G \leq E \}}$ that contain ${E}$. For such an ${{\mathcal A}}$, if we write ${F := \prod_{G \in {\mathcal A}} G}$, then ${F}$ is the greatest lower bound of ${{\mathcal A}}$, and we observe that ${F (\prod_{G < E: G \not \in {\mathcal A}} (1-G))}$ vanishes whenever ${{\mathcal A}}$ fails to contain some ${G \in {\mathcal N}}$ with ${F \leq G \leq E}$. Thus the only ${{\mathcal A}}$ that give non-zero contributions to (8) are the intervals of the form ${\{ G \in {\mathcal N}: F \leq G \leq E\}}$ for some ${F \leq E}$ (which then forms the greatest lower bound for that interval), and the claim (7) follows (after noting that ${F (1-G) = F (1-FG)}$ for any ${F,G \in {\mathcal N}}$). $\Box$
Kaisa Matomäki, Maksym Radziwiłł, and I have just uploaded to the arXiv our paper “Sign patterns of the Liouville and Möbius functions“. This paper is somewhat similar to our previous paper in that it is using the recent breakthrough of Matomäki and Radziwiłł on mean values of multiplicative functions to obtain partial results towards the Chowla conjecture. This conjecture can be phrased, roughly speaking, as follows: if ${k}$ is a fixed natural number and ${n}$ is selected at random from a large interval ${[1,x]}$, then the sign pattern ${(\lambda(n), \lambda(n+1),\dots,\lambda(n+k-1)) \in \{-1,+1\}^k}$ becomes asymptotically equidistributed in ${\{-1,+1\}^k}$ in the limit ${x \rightarrow \infty}$. This remains open for ${k \geq 2}$. In fact even the significantly weaker statement that each of the sign patterns in ${\{-1,+1\}^k}$ is attained infinitely often is open for ${k \geq 4}$. However, in 1986, Hildebrand showed that for ${k \leq 3}$ all sign patterns are indeed attained infinitely often. Our first result is a strengthening of Hildebrand’s, moving a little bit closer to Chowla’s conjecture:
Theorem 1 Let ${k \leq 3}$. Then each of the sign patterns in ${\{-1,+1\}^k}$ is attained by the Liouville function for a set of natural numbers ${n}$ of positive lower density.
Thus for instance one has ${\lambda(n)=\lambda(n+1)=\lambda(n+2)}$ for a set of ${n}$ of positive lower density. The ${k \leq 2}$ case of this theorem already appears in the original paper of Matomäki and Radziwiłł (and the significantly simpler case of the sign patterns ${++}$ and ${--}$ was treated previously by Harman, Pintz, and Wolke).
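As an empirical illustration of Theorem 1 (my own quick computation, not from the paper), one can sieve the Liouville function up to ${10^6}$ and tabulate the frequencies of the eight length-three sign patterns; all of them occur, with comparable frequencies:

```python
from collections import Counter

def liouville_up_to(N):
    """lambda(n) = (-1)^Omega(n) for 1 <= n <= N, computed via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))                    # smallest prime factor of each n
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:                         # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (N + 1)
    lam[1] = 1
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]              # lambda(p * m) = -lambda(m)
    return lam

N = 10**6
lam = liouville_up_to(N)
patterns = Counter((lam[n], lam[n + 1], lam[n + 2]) for n in range(1, N - 1))
for pattern in sorted(patterns):
    print(pattern, round(patterns[pattern] / (N - 2), 4))
```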
The basic strategy in all of these arguments is to assume for sake of contradiction that a certain sign pattern occurs extremely rarely, and then exploit the complete multiplicativity of ${\lambda}$ (which implies in particular that ${\lambda(2n) = -\lambda(n)}$, ${\lambda(3n) = -\lambda(n)}$, and ${\lambda(5n) = -\lambda(n)}$ for all ${n}$) together with some combinatorial arguments (vaguely analogous to solving a Sudoku puzzle!) to establish more complex sign patterns for the Liouville function, that are either inconsistent with each other, or with results such as the Matomäki-Radziwiłł result. To illustrate this, let us give some ${k=2}$ examples, arguing a little informally to emphasise the combinatorial aspects of the argument. First suppose that the sign pattern ${(\lambda(n),\lambda(n+1)) = (+1,+1)}$ almost never occurs. The prime number theorem tells us that ${\lambda(n)}$ and ${\lambda(n+1)}$ are each equal to ${+1}$ about half of the time, which by inclusion-exclusion implies that the sign pattern ${(\lambda(n),\lambda(n+1))=(-1,-1)}$ almost never occurs. In other words, we have ${\lambda(n+1) = -\lambda(n)}$ for almost all ${n}$. But from the multiplicativity property ${\lambda(2n)=-\lambda(n)}$ this implies that one should have
$\displaystyle \lambda(2n+2) = -\lambda(2n)$
$\displaystyle \lambda(2n+1) = -\lambda(2n)$
and
$\displaystyle \lambda(2n+2) = -\lambda(2n+1)$
for almost all ${n}$. But the above three statements are contradictory, and the claim follows.
Similarly, if we assume that the sign pattern ${(\lambda(n),\lambda(n+1)) = (+1,-1)}$ almost never occurs, then a similar argument to the above shows that for any fixed ${h}$, one has ${\lambda(n)=\lambda(n+1)=\dots=\lambda(n+h)}$ for almost all ${n}$. But this means that the mean ${\frac{1}{h} \sum_{j=1}^h \lambda(n+j)}$ is abnormally large for most ${n}$, which (for ${h}$ large enough) contradicts the results of Matomäki and Radziwiłł. Here we see that the “enemy” to defeat is the scenario in which ${\lambda}$ only changes sign very rarely, in which case one rarely sees the pattern ${(+1,-1)}$.
It turns out that similar (but more combinatorially intricate) arguments work for sign patterns of length three (but are unlikely to work for most sign patterns of length four or greater). We give here one fragment of such an argument (due to Hildebrand) which hopefully conveys the Sudoku-type flavour of the combinatorics. Suppose for instance that the sign pattern ${(\lambda(n),\lambda(n+1),\lambda(n+2)) = (+1,+1,+1)}$ almost never occurs. Now suppose ${n}$ is a typical number with ${\lambda(15n-1)=\lambda(15n+1)=+1}$. Since we almost never have the sign pattern ${(+1,+1,+1)}$, we must (almost always) then have ${\lambda(15n) = -1}$. By multiplicativity this implies that
$\displaystyle (\lambda(60n-4), \lambda(60n), \lambda(60n+4)) = (+1,-1,+1).$
We claim that this (almost always) forces ${\lambda(60n+5)=-1}$. For if ${\lambda(60n+5)=+1}$, then by the lack of the sign pattern ${(+1,+1,+1)}$, this (almost always) forces ${\lambda(60n+3)=\lambda(60n+6)=-1}$, which by multiplicativity forces ${\lambda(20n+1)=\lambda(20n+2)=+1}$, which by lack of ${(+1,+1,+1)}$ (almost always) forces ${\lambda(20n)=-1}$, which by multiplicativity contradicts ${\lambda(60n)=-1}$. Thus we have ${\lambda(60n+5)=-1}$; a similar argument gives ${\lambda(60n-5)=-1}$ almost always, which by multiplicativity gives ${\lambda(12n-1)=\lambda(12n)=\lambda(12n+1)=+1}$, a contradiction. Thus we almost never have ${\lambda(15n-1)=\lambda(15n+1)=+1}$, which by the inclusion-exclusion argument mentioned previously shows that ${\lambda(15n+1) = - \lambda(15n-1)}$ for almost all ${n}$.
One can continue these Sudoku-type arguments and conclude eventually that ${\lambda(3n-1)=-\lambda(3n+1)=\lambda(3n+2)}$ for almost all ${n}$. To put it another way, if ${\chi_3}$ denotes the non-principal Dirichlet character of modulus ${3}$, then ${\lambda \chi_3}$ is almost always constant away from the multiples of ${3}$. (Conversely, if ${\lambda \chi_3}$ changed sign very rarely outside of the multiples of three, then the sign pattern ${(+1,+1,+1)}$ would never occur.) Fortunately, the main result of Matomäki and Radziwiłł shows that this scenario cannot occur, which establishes that the sign pattern ${(+1,+1,+1)}$ must occur rather frequently. The other sign patterns are handled by variants of these arguments.
Excluding a sign pattern of length three leads to useful implications like "if ${\lambda(n-1)=\lambda(n)=+1}$, then ${\lambda(n+1)=-1}$", which turn out to be just barely strong enough to quite rigidly constrain the Liouville function using Sudoku-like arguments. In contrast, excluding a sign pattern of length four only gives rise to implications like "if ${\lambda(n-2)=\lambda(n-1)=\lambda(n)=+1}$, then ${\lambda(n+1)=-1}$", and these seem to be much weaker for this purpose (the hypothesis in these implications just isn't satisfied nearly often enough). So a different idea seems to be needed if one wishes to extend the above theorem to larger values of ${k}$.
Our second theorem gives an analogous result for the Möbius function ${\mu}$ (which takes values in ${\{-1,0,+1\}}$ rather than ${\{-1,1\}}$), but the analysis turns out to be remarkably difficult and we are only able to get up to ${k=2}$:
Theorem 2 Let ${k \leq 2}$. Then each of the sign patterns in ${\{-1,0,+1\}^k}$ is attained by the Möbius function for a set ${n}$ of positive lower density.
It turns out that the prime number theorem and elementary sieve theory can be used to handle the ${k=1}$ case and all the ${k=2}$ cases that involve at least one ${0}$, leaving only the four sign patterns ${(\pm 1, \pm 1)}$ to handle. It is here that the zeroes of the Möbius function cause a significant new obstacle. Suppose for instance that the sign pattern ${(+1, -1)}$ almost never occurs for the Möbius function. The same arguments that were used in the Liouville case then show that ${\mu(n)}$ will be almost always equal to ${\mu(n+1)}$, provided that ${n,n+1}$ are both square-free. One can try to chain this together as before to create a long string ${\mu(n)=\dots=\mu(n+h) \in \{-1,+1\}}$ where the Möbius function is constant, but this cannot work for any ${h}$ larger than three, because the Möbius function vanishes at every multiple of four.
The constraints we assume on the Möbius function can be depicted using a graph on the squarefree natural numbers, in which any two adjacent squarefree natural numbers are connected by an edge. The main difficulty is then that this graph is highly disconnected due to the multiples of four not being squarefree.
To get around this, we need to enlarge the graph. Note from multiplicativity that if ${\mu(n)}$ is almost always equal to ${\mu(n+1)}$ when ${n,n+1}$ are squarefree, then ${\mu(n)}$ is almost always equal to ${\mu(n+p)}$ when ${n,n+p}$ are squarefree and ${n}$ is divisible by ${p}$. We can then form a graph on the squarefree natural numbers by connecting ${n}$ to ${n+p}$ whenever ${n,n+p}$ are squarefree and ${n}$ is divisible by ${p}$. If this graph is “locally connected” in some sense, then ${\mu}$ will be constant on almost all of the squarefree numbers in a large interval, which turns out to be incompatible with the results of Matomäki and Radziwiłł. Because of this, matters are reduced to establishing the connectedness of a certain graph. More precisely, it turns out to be sufficient to establish the following claim:
Theorem 3 For each prime ${p}$, let ${a_p \hbox{ mod } p^2}$ be a residue class chosen uniformly at random. Let ${G}$ be the random graph whose vertices ${V}$ consist of those integers ${n}$ not equal to ${a_p \hbox{ mod } p^2}$ for any ${p}$, and whose edges consist of pairs ${n,n+p}$ in ${V}$ with ${n = a_p \hbox{ mod } p}$. Then with probability ${1}$, the graph ${G}$ is connected.
We were able to show the connectedness of this graph, though it turned out to be remarkably tricky to do so. Roughly speaking (and suppressing a number of technicalities), the main steps in the argument were as follows.
• (Early stage) Pick a large number ${X}$ (in our paper we take ${X}$ to be odd, but I’ll ignore this technicality here). Using a moment method to explore neighbourhoods of a single point in ${V}$, one can show that a vertex ${v}$ in ${V}$ is almost always connected to at least ${\log^{10} X}$ numbers in ${[v,v+X^{1/100}]}$, using relatively short paths of short diameter. (This is the most computationally intensive portion of the argument.)
• (Middle stage) Let ${X'}$ be a typical number in ${[X/40,X/20]}$, and let ${R}$ be a scale somewhere between ${X^{1/40}}$ and ${X'}$. By using paths ${n, n+p_1, n+p_1-p_2, n+p_1-p_2+p_3}$ involving three primes, and using a variant of Vinogradov’s theorem and some routine second moment computations, one can show that with quite high probability, any “good” vertex in ${[v+X'-R, v+X'-0.99R]}$ is connected to a “good” vertex in ${[v+X'-0.01R, v+X'-0.0099 R]}$ by paths of length three, where the definition of “good” is somewhat technical but encompasses almost all of the vertices in ${V}$.
• (Late stage) Combining the two previous results together, we can show that most vertices ${v}$ will be connected to a vertex in ${[v+X'-X^{1/40}, v+X']}$ for any ${X'}$ in ${[X/40,X/20]}$. In particular, ${v}$ will be connected to a set of ${\gg X^{9/10}}$ vertices in ${[v,v+X/20]}$. By tracking everything carefully, one can control the length and diameter of the paths used to connect ${v}$ to this set, and one can also control the parity of the elements in this set.
• (Final stage) Now suppose we have two vertices ${v, w}$ at a distance ${X}$ apart. By the previous item, one can connect ${v}$ to a large set ${A}$ of vertices in ${[v,v+X/20]}$, and one can similarly connect ${w}$ to a large set ${B}$ of vertices in ${[w,w+X/20]}$. Now, by using a Vinogradov-type theorem and second moment calculations again (and ensuring that the elements of ${A}$ and ${B}$ have opposite parity), one can connect many of the vertices in ${A}$ to many of the vertices in ${B}$ by paths of length three, which then connects ${v}$ to ${w}$, and gives the claim.
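None of the above can be verified by a computer, of course, but it is amusing to play with a finite truncation of the graph ${G}$ from Theorem 3: keep only the primes up to a modest bound, restrict the vertices to a window of integers, and count connected components with a union-find structure. With such an aggressive truncation one should expect many isolated vertices (a given ${n}$ lies in the class ${a_p \hbox{ mod } p}$ for only a few of the retained primes), so the finite graph will be far from connected; the full strength of Theorem 3 genuinely uses all primes. The window size and prime bound in the sketch below are arbitrary choices.

```python
import random

N = 100_000        # window of integers considered as potential vertices
PRIME_BOUND = 500  # only primes up to this bound are used (a severe truncation)

def primes_upto(m):
    sieve = bytearray([1]) * (m + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i:: i] = bytearray(len(sieve[i * i:: i]))
    return [i for i in range(2, m + 1) if sieve[i]]

primes = primes_upto(PRIME_BOUND)
a = {p: random.randrange(p * p) for p in primes}   # random residue a_p mod p^2

# vertices: n in [1, N] avoiding the class a_p mod p^2 for every retained prime p
vertex = [n for n in range(1, N + 1) if all(n % (p * p) != a[p] for p in primes)]
index = {n: i for i, n in enumerate(vertex)}

parent = list(range(len(vertex)))          # union-find over the vertices
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

# edges: n -- n+p whenever both endpoints are vertices and n = a_p mod p
for p in primes:
    for n in range(a[p] % p, N + 1 - p, p):
        if n in index and n + p in index:
            parent[find(index[n])] = find(index[n + p])

components = len({find(i) for i in range(len(vertex))})
print(f"{len(vertex)} vertices, {components} connected components in this truncation")
```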
It seems of interest to understand random graphs like ${G}$ further. In particular, the graph ${G'}$ on the integers formed by connecting ${n}$ to ${n+p}$ for all ${n}$ in a randomly selected residue class mod ${p}$ for each prime ${p}$ is particularly interesting (it is to the Liouville function as ${G}$ is to the Möbius function); if one could show some “local expander” properties of this graph ${G'}$, then one would have a chance of modifying the above methods to attack the first unsolved case of the Chowla conjecture, namely that ${\lambda(n)\lambda(n+1)}$ has asymptotic density zero (perhaps working with logarithmic density instead of natural density to avoids some technicalities).
We now move away from the world of multiplicative prime number theory covered in Notes 1 and Notes 2, and enter the wider, and complementary, world of non-multiplicative prime number theory, in which one studies statistics related to non-multiplicative patterns, such as twins ${n,n+2}$. This creates a major jump in difficulty; for instance, even the most basic multiplicative result about the primes, namely Euclid’s theorem that there are infinitely many of them, remains unproven for twin primes. Of course, the situation is even worse for stronger results, such as Euler’s theorem, Dirichlet’s theorem, or the prime number theorem. Finally, even many multiplicative questions about the primes remain open. The most famous of these is the Riemann hypothesis, which gives the asymptotic ${\sum_{n \leq x} \Lambda(n) = x + O( \sqrt{x} \log^2 x )}$ (see Proposition 24 from Notes 2). But even if one assumes the Riemann hypothesis, the precise distribution of the error term ${O( \sqrt{x} \log^2 x )}$ in the above asymptotic (or in related asymptotics, such as for the sum ${\sum_{x \leq n < x+y} \Lambda(n)}$ that measures the distribution of primes in short intervals) is not entirely clear.
Despite this, we do have a number of extremely convincing and well supported models for the primes (and related objects) that let us predict what the answer to many prime number theory questions (both multiplicative and non-multiplicative) should be, particularly in asymptotic regimes where one can work with aggregate statistics about the primes, rather than with a small number of individual primes. These models are based on taking some statistical distribution related to the primes (e.g. the primality properties of a randomly selected ${k}$-tuple), and replacing that distribution by a model distribution that is easy to compute with (e.g. a distribution with strong joint independence properties). One can then predict the asymptotic value of various (normalised) statistics about the primes by replacing the relevant statistical distributions of the primes with their simplified models. In this non-rigorous setting, many difficult conjectures on the primes reduce to relatively simple calculations; for instance, all four of the (still unsolved) Landau problems may now be justified in the affirmative by one or more of these models. Indeed, the models are so effective at this task that analytic number theory is in the curious position of being able to confidently predict the answer to a large proportion of the open problems in the subject, whilst not possessing a clear way forward to rigorously confirm these answers!
As it turns out, the models for primes that have turned out to be the most accurate in practice are random models, which involve (either explicitly or implicitly) one or more random variables. This is despite the prime numbers being obviously deterministic in nature; no coins are flipped or dice rolled to create the set of primes. The point is that while the primes have a lot of obvious multiplicative structure (for instance, the product of two primes is never another prime), they do not appear to exhibit much discernible non-multiplicative structure asymptotically, in the sense that they rarely exhibit statistical anomalies in the asymptotic limit that cannot be easily explained in terms of the multiplicative properties of the primes. As such, when considering non-multiplicative statistics of the primes, the primes appear to behave pseudorandomly, and can thus be modeled with reasonable accuracy by a random model. And even for multiplicative problems, which are in principle controlled by the zeroes of the Riemann zeta function, one can obtain good predictions by positing various pseudorandomness properties of these zeroes, so that the distribution of these zeroes can be modeled by a random model.
Of course, one cannot expect perfect accuracy when replicating a deterministic set such as the primes by a probabilistic model of that set, and each of the heuristic models we discuss below has some limitations to the range of statistics about the primes that it can expect to track with reasonable accuracy. For instance, many of the models about the primes do not fully take into account the multiplicative structure of primes, such as the connection with a zeta function with a meromorphic continuation to the entire complex plane; at the opposite extreme, we have the GUE hypothesis which appears to accurately model the zeta function, but does not capture such basic properties of the primes as the fact that the primes are all natural numbers. Nevertheless, each of the models described below, when deployed within its sphere of reasonable application, has (possibly after some fine-tuning) given predictions that are in remarkable agreement with numerical computation and with known rigorous theoretical results, as well as with other models in overlapping spheres of application; they are also broadly compatible with the general heuristic (discussed in this previous post) that in the absence of any exploitable structure, asymptotic statistics should default to the most “uniform”, “pseudorandom”, or “independent” distribution allowable.
As hinted at above, we do not have a single unified model for the prime numbers (other than the primes themselves, of course), but instead have an overlapping family of useful models that each appear to accurately describe some, but not all, aspects of the prime numbers. In this set of notes, we will discuss four such models:
1. The Cramér random model and its refinements, which model the set ${{\mathcal P}}$ of prime numbers by a random set (a toy simulation of this model is sketched just after this list).
2. The Möbius pseudorandomness principle, which predicts that the Möbius function ${\mu}$ does not correlate with any genuinely different arithmetic sequence of reasonable “complexity”.
3. The equidistribution of residues principle, which predicts that the residue classes of a large number ${n}$ modulo a small or medium-sized prime ${p}$ behave as if they are independently and uniformly distributed as ${p}$ varies.
4. The GUE hypothesis, which asserts that the zeroes of the Riemann zeta function are distributed (at microscopic and mesoscopic scales) like the zeroes of a GUE random matrix, and which generalises the pair correlation conjecture regarding pairs of such zeroes.
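As a toy illustration of the first model (and nothing more), one can draw a random set in which each ${n \geq 3}$ is included independently with probability ${1/\log n}$, and compare the number of “twins” ${n, n+2}$ in the random set with the number of actual twin primes up to the same threshold. The naive model gets the order of magnitude right but typically undercounts by roughly the arithmetic correction factor ${2\Pi_2 \approx 1.32}$ (the twin prime constant together with the contribution of the prime ${2}$); this is precisely the sort of fine-tuning alluded to above. A rough sketch:

```python
import math
import random

N = 10**6

# actual primes up to N via a sieve of Eratosthenes
sieve = bytearray([1]) * (N + 1)
sieve[0:2] = b"\x00\x00"
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i:: i] = bytearray(len(sieve[i * i:: i]))
actual_twins = sum(1 for n in range(3, N - 1) if sieve[n] and sieve[n + 2])

# one draw of the naive Cramer model: n is "prime" with probability 1/log n
model = bytearray(N + 1)
for n in range(3, N + 1):
    if random.random() < 1.0 / math.log(n):
        model[n] = 1
model_twins = sum(1 for n in range(3, N - 1) if model[n] and model[n + 2])

print(f"actual twin primes up to {N}:         {actual_twins}")
print(f"twins in one draw of the naive model: {model_twins}")
print(f"ratio: {actual_twins / model_twins:.2f}   (compare 2*Pi_2 = 1.3203...)")
```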
This is not an exhaustive list of models for the primes and related objects; for instance, there is also the model in which the major arc contribution in the Hardy-Littlewood circle method is predicted to always dominate, and with regards to various finite groups of number-theoretic importance, such as the class groups discussed in Supplement 1, there are also heuristics of Cohen-Lenstra type. Historically, the first heuristic discussion of the primes along these lines was by Sylvester, who worked informally with a model somewhat related to the equidistribution of residues principle. However, we will not discuss any of these models here.
A word of warning: the discussion of the above four models will inevitably be largely informal, and “fuzzy” in nature. While one can certainly make precise formalisations of at least some aspects of these models, one should not be inflexibly wedded to a specific such formalisation as being “the” correct way to pin down the model rigorously. (To quote the statistician George Box: “all models are wrong, but some are useful”.) Indeed, we will see some examples below the fold in which some finer structure in the prime numbers leads to a correction term being added to a “naive” implementation of one of the above models to make it more accurate, and it is perfectly conceivable that some further such fine-tuning will be applied to one or more of these models in the future. These sorts of mathematical models are in some ways closer in nature to the scientific theories used to model the physical world, than they are to the axiomatic theories one is accustomed to in rigorous mathematics, and one should approach the discussion below accordingly. In particular, and in contrast to the other notes in this course, the material here is not directly used for proving further theorems, which is why we have marked it as “optional” material. Nevertheless, the heuristics and models here are still used indirectly for such purposes, for instance by
• giving a clearer indication of what results one expects to be true, thus guiding one to fruitful conjectures;
• providing a quick way to scan for possible errors in a mathematical claim (e.g. by finding that the main term is off from what a model predicts, or an error term is too small);
• gauging the relative strength of various assertions (e.g. classifying some results as “unsurprising”, others as “potential breakthroughs” or “powerful new estimates”, others as “unexpected new phenomena”, and yet others as “way too good to be true”); or
• setting up heuristic barriers (such as the parity barrier) that one has to resolve before resolving certain key problems (e.g. the twin prime conjecture).
See also my previous essay on the distinction between “rigorous” and “post-rigorous” mathematics, or Thurston’s essay discussing, among other things, the “definition-theorem-proof” model of mathematics and its limitations.
Remark 1 The material in this set of notes presumes some prior exposure to probability theory. See for instance this previous post for a quick review of the relevant concepts.
One of the basic general problems in analytic number theory is to understand as much as possible the fluctuations of the Möbius function ${\mu(n)}$, defined as ${(-1)^k}$ when ${n}$ is the product of ${k}$ distinct primes, and zero otherwise. For instance, as ${\mu}$ takes values in ${\{-1,0,1\}}$, we have the trivial bound
$\displaystyle |\sum_{n \leq x} \mu(n)| \leq x$
and the seemingly slight improvement
$\displaystyle \sum_{n \leq x} \mu(n) = o(x) \ \ \ \ \ (1)$
is already equivalent to the prime number theorem, as observed by Landau (see e.g. this previous blog post for a proof), while the much stronger (and still open) improvement
$\displaystyle \sum_{n \leq x} \mu(n) = O(x^{1/2+o(1)})$
is equivalent to the notorious Riemann hypothesis.
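It is instructive (though of course not probative) to compute the partial sums ${M(x) = \sum_{n \leq x} \mu(n)}$ numerically and compare them with the trivial bound ${x}$ and with the square-root scale suggested by the Riemann hypothesis. A minimal sketch, assuming SymPy:

```python
from math import isqrt
from sympy import factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

X = 10**5
checkpoints = {10**k for k in range(2, 6)}
M = 0
for n in range(1, X + 1):
    M += mobius(n)
    if n in checkpoints:
        print(f"x = {n:>6}:  M(x) = {M:>5},  M(x)/x = {M / n:+.5f},  M(x)/sqrt(x) = {M / isqrt(n):+.3f}")
```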
There is a general Möbius pseudorandomness heuristic that suggests that the sign pattern of ${\mu}$ behaves so randomly (or pseudorandomly) that one should expect a substantial amount of cancellation in sums that involve the sign fluctuation of the Möbius function in a nontrivial fashion, with the amount of cancellation present comparable to the amount that an analogous random sum would provide; cf. the probabilistic heuristic discussed in this recent blog post. There are a number of ways to make this heuristic precise. As already mentioned, the Riemann hypothesis can be considered one such manifestation of the heuristic. Another manifestation is the following old conjecture of Chowla:
Conjecture 1 (Chowla’s conjecture) For any fixed integer ${m}$ and exponents ${a_1,a_2,\ldots,a_m \geq 0}$, with at least one of the ${a_i}$ odd (so as not to completely destroy the sign cancellation), we have
$\displaystyle \sum_{n \leq x} \mu(n+1)^{a_1} \ldots \mu(n+m)^{a_m} = o_{x \rightarrow \infty;m}(x).$
Note that as ${\mu^a = \mu^{a+2}}$ for any ${a \geq 1}$, we can reduce to the case when the ${a_i}$ take values in ${0,1,2}$ here. When only one of the ${a_i}$ is odd, this is essentially the prime number theorem in arithmetic progressions (after some elementary sieving), but when two or more of the ${a_i}$ are odd, the problem becomes completely open. For instance, the estimate
$\displaystyle \sum_{n \leq x} \mu(n) \mu(n+2) = o(x)$
is morally very close to the conjectured asymptotic
$\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) = 2\Pi_2 x + o(x)$
for the von Mangoldt function ${\Lambda}$, where ${\Pi_2 := \prod_{p > 2} (1 - \frac{1}{(p-1)^2}) = 0.66016\ldots}$ is the twin prime constant; this asymptotic in turn implies the twin prime conjecture. (To formally deduce estimates for von Mangoldt from estimates for Möbius, though, typically requires some better control on the error terms than ${o()}$, in particular gains of some power of ${\log x}$ are usually needed. See this previous blog post for more discussion.)
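One can also compute the normalised correlation ${\frac{1}{x} \sum_{n \leq x} \mu(n) \mu(n+2)}$ for moderate ${x}$ and watch it hover near zero; needless to say this is consistent with, but very far from proving, the conjecture. A small sketch, assuming SymPy:

```python
from sympy import factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

X = 10**5
mu = [0] + [mobius(n) for n in range(1, X + 3)]   # mu[n] is the Mobius function of n

corr = sum(mu[n] * mu[n + 2] for n in range(1, X + 1)) / X
print(f"(1/x) sum_(n<=x) mu(n) mu(n+2) at x = {X}: {corr:+.5f}")
```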
Remark 2 The Chowla conjecture resembles an assertion that, for ${n}$ chosen randomly and uniformly from ${1}$ to ${x}$, the random variables ${\mu(n+1),\ldots,\mu(n+k)}$ become asymptotically independent of each other (in the probabilistic sense) as ${x \rightarrow \infty}$. However, this is not quite accurate, because some moments (namely those with all exponents ${a_i}$ even) have the “wrong” asymptotic value, leading to some unwanted correlation between the two variables. For instance, the events ${\mu(n)=0}$ and ${\mu(n+4)=0}$ have a strong correlation with each other, basically because they are both strongly correlated with the event of ${n}$ being divisible by ${4}$. A more accurate interpretation of the Chowla conjecture is that the random variables ${\mu(n+1),\ldots,\mu(n+k)}$ are asymptotically conditionally independent of each other, after conditioning on the zero pattern ${\mu(n+1)^2,\ldots,\mu(n+k)^2}$; thus, it is the sign of the Möbius function that fluctuates like random noise, rather than the zero pattern. (The situation is a bit cleaner if one works instead with the Liouville function ${\lambda}$ instead of the Möbius function ${\mu}$, as this function never vanishes, but we will stick to the traditional Möbius function formalism here.)
A more recent formulation of the Möbius randomness heuristic is the following conjecture of Sarnak. Given a bounded sequence ${f: {\bf N} \rightarrow {\bf C}}$, define the topological entropy of the sequence to be the least exponent ${\sigma}$ with the property that for any fixed ${\varepsilon > 0}$, the set ${\{ (f(n+1),\ldots,f(n+m)): n \in {\bf N} \} \subset {\bf C}^m}$ of ${m}$-blocks of ${f}$ can be covered by ${O( \exp( \sigma m + o(m) ) )}$ balls of radius ${\varepsilon}$ (in the ${\ell^\infty}$ metric) as ${m}$ goes to infinity. (If ${f}$ arises from a minimal topological dynamical system ${(X,T)}$ by ${f(n) := F(T^n x)}$ for some continuous function ${F}$ on ${X}$, the above notion is equivalent to the usual notion of the topological entropy of a dynamical system.) For instance, if the sequence is a bit sequence (i.e. it takes values in ${\{0,1\}}$), then there are only ${\exp(\sigma m + o(m))}$ ${m}$-bit patterns that can appear as blocks of ${m}$ consecutive bits in this sequence. As a special case, a Turing machine with bounded memory that had access to a random number generator at the rate of one random bit produced every ${T}$ units of time, but otherwise evolved deterministically, would have an output sequence with topological entropy at most ${\frac{1}{T} \log 2}$. A bounded sequence is said to be deterministic if its topological entropy is zero. A typical example is a polynomial phase sequence such as ${f(n) := e^{2\pi i \alpha n^2}}$ for some fixed ${\alpha}$; the ${m}$-blocks of such a polynomial sequence have covering numbers that only grow polynomially in ${m}$, rather than exponentially, thus yielding zero entropy. Unipotent flows, such as the horocycle flow on a compact hyperbolic surface, are another good source of deterministic sequences.
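To get a concrete feel for the definition, one can simply count how many distinct ${m}$-blocks occur in a sample of a given ${\{0,1\}}$-valued sequence: for a deterministic sequence such as the Thue-Morse sequence the count grows only polynomially in ${m}$, whereas for a genuinely random bit sequence it is close to ${2^m}$ (until the sample runs out). A rough sketch:

```python
import random

N = 200_000   # length of the sample of each sequence

# Thue-Morse: t(n) = parity of the number of 1-bits of n (a deterministic sequence)
thue_morse = [bin(n).count("1") % 2 for n in range(N)]
random_bits = [random.randrange(2) for _ in range(N)]

def block_count(seq, m):
    """Number of distinct length-m blocks occurring in seq."""
    return len({tuple(seq[i:i + m]) for i in range(len(seq) - m + 1)})

for m in (4, 8, 12, 16):
    print(f"m = {m:2d}:  Thue-Morse blocks = {block_count(thue_morse, m):6d},  "
          f"random blocks = {block_count(random_bits, m):6d},  2^m = {2 ** m}")
```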
Conjecture 3 (Sarnak’s conjecture) Let ${f: {\bf N} \rightarrow {\bf C}}$ be a deterministic bounded sequence. Then
$\displaystyle \sum_{n \leq x} \mu(n) f(n) = o_{x \rightarrow \infty;f}(x).$
This conjecture in general is still quite far from being solved. However, special cases are known:
• For constant sequences, this is essentially the prime number theorem (1).
• For periodic sequences, this is essentially the prime number theorem in arithmetic progressions.
• For quasiperiodic sequences such as ${f(n) = F(\alpha n \hbox{ mod } 1)}$ for some continuous ${F}$, this follows from the work of Davenport.
• For nilsequences, this is a result of Ben Green and myself.
• For horocycle flows, this is a result of Bourgain, Sarnak, and Ziegler.
• For the Thue-Morse sequence, this is a result of Dartyge-Tenenbaum (with a stronger error term obtained by Mauduit-Rivat). A subsequent result of Bourgain handles all bounded rank one sequences (though the Thue-Morse sequence is actually of rank two), and a related result of Green establishes asymptotic orthogonality of the Möbius function to bounded depth circuits, although such functions are not necessarily deterministic in nature.
• For the Rudin-Shapiro sequence, I sketched out an argument at this MathOverflow post.
• The Möbius function is known to itself be non-deterministic, because its square ${\mu^2(n)}$ (i.e. the indicator of the square-free functions) is known to be non-deterministic (indeed, its topological entropy is ${\frac{6}{\pi^2}\log 2}$). (The corresponding question for the Liouville function ${\lambda(n)}$, however, remains open, as the square ${\lambda^2(n)=1}$ has zero entropy.)
• In the converse direction, it is easy to construct sequences of arbitrarily small positive entropy that correlate with the Möbius function (a rather silly example is ${\mu(n) 1_{k|n}}$ for some fixed large (squarefree) ${k}$, which has topological entropy at most ${\log 2/k}$ but clearly correlates with ${\mu}$).
See this survey of Sarnak for further discussion of this and related topics.
In this post I wanted to give a very nice argument of Sarnak that links the above two conjectures:
Proposition 4 The Chowla conjecture implies the Sarnak conjecture.
The argument does not use any number-theoretic properties of the Möbius function; one could replace ${\mu}$ in both conjectures by any other function from the natural numbers to ${\{-1,0,+1\}}$ and obtain the same implication. The argument consists of the following ingredients:
1. To show that ${\sum_{n \leq x} \mu(n) f(n) = o(x)}$, it suffices to show that the expectation of the random variable ${\frac{1}{m} (\mu(n+1)f(n+1)+\ldots+\mu(n+m)f(n+m))}$, where ${n}$ is drawn uniformly at random from ${1}$ to ${x}$, can be made arbitrarily small by making ${m}$ large (and ${x}$ even larger).
2. By the union bound and the zero topological entropy of ${f}$, it suffices to show that for any bounded deterministic coefficients ${c_1,\ldots,c_m}$, the random variable ${\frac{1}{m}(c_1 \mu(n+1) + \ldots + c_m \mu(n+m))}$ concentrates with exponentially high probability.
3. Finally, this exponentially high concentration can be achieved by the moment method, using a slight variant of the moment method proof of the large deviation estimates such as the Chernoff inequality or Hoeffding inequality (as discussed in this blog post).
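As a quick sanity check on the third ingredient, one can simulate the concentration with genuinely independent random signs standing in for the (conjecturally independent, after conditioning) values ${\mu(n+1),\ldots,\mu(n+m)}$: for fixed bounded coefficients, the averages ${\frac{1}{m}(c_1 \epsilon_1 + \ldots + c_m \epsilon_m)}$ have tails decaying exponentially in ${m}$, as the Chernoff/Hoeffding bounds predict. The random signs are of course only a stand-in, not the Möbius function itself; a minimal sketch:

```python
import math
import random

def tail_probability(m, t, trials=10_000):
    """Empirical P(|(1/m) sum_i c_i eps_i| > t) for random signs eps_i and fixed c_i."""
    c = [math.cos(i) for i in range(m)]     # some fixed coefficients with |c_i| <= 1
    bad = 0
    for _ in range(trials):
        s = sum(ci * random.choice((-1, 1)) for ci in c)
        if abs(s) / m > t:
            bad += 1
    return bad / trials

t = 0.2
for m in (20, 40, 80, 160):
    emp = tail_probability(m, t)
    hoeffding = 2 * math.exp(-m * t * t / 2)   # valid Hoeffding bound when |c_i| <= 1
    print(f"m = {m:3d}:  empirical tail = {emp:.4f},  Hoeffding bound = {hoeffding:.4f}")
```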
As is often the case, though, while the “top-down” order of steps presented above is perhaps the clearest way to think conceptually about the argument, in order to present the argument formally it is more convenient to present the arguments in the reverse (or “bottom-up”) order. This is the approach taken below the fold.
One of the basic problems in analytic number theory is to estimate sums of the form
$\displaystyle \sum_{p \leq x} f(p)$
as ${x \rightarrow \infty}$, where ${p}$ ranges over primes and ${f}$ is some explicit function of interest (e.g. a linear phase function ${f(p) = e^{2\pi i \alpha p}}$ for some real number ${\alpha}$). This is essentially the same task as obtaining estimates on the sum
$\displaystyle \sum_{n \leq x} \Lambda(n) f(n)$
where ${\Lambda}$ is the von Mangoldt function. If ${f}$ is bounded, ${f(n)=O(1)}$, then from the prime number theorem one has the trivial bound
$\displaystyle \sum_{n \leq x} \Lambda(n) f(n) = O(x),$
but often (when ${f}$ is somehow “oscillatory” in nature) one is seeking the refinement
$\displaystyle \sum_{n \leq x} \Lambda(n) f(n) = o(x) \ \ \ \ \ (1)$
or equivalently
$\displaystyle \sum_{p \leq x} f(p) = o\left( \frac{x}{\log x} \right). \ \ \ \ \ (2)$
Thanks to identities such as
$\displaystyle \Lambda(n) = \sum_{d|n} \mu(d) \log(\frac{n}{d}), \ \ \ \ \ (3)$
where ${\mu}$ is the Möbius function, refinements such as (1) are similar in spirit to estimates of the form
$\displaystyle \sum_{n \leq x} \mu(n) f(n) = o(x). \ \ \ \ \ (4)$
Unfortunately, the connection between (1) and (4) is not particularly tight; roughly speaking, one needs to improve the bounds in (4) (and variants thereof) by about two factors of ${\log x}$ before one can use identities such as (3) to recover (1). Still, one generally thinks of (1) and (4) as being “morally” equivalent, even if they are not formally equivalent.
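The divisor identity (3) itself is easy to check numerically, which is a useful sanity test when juggling these conventions. A minimal sketch, assuming SymPy for the factorisations:

```python
from math import isclose, log
from sympy import divisors, factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def von_mangoldt(n):
    f = factorint(n)
    return log(next(iter(f))) if len(f) == 1 else 0.0   # log p if n is a prime power p^k

for n in range(2, 2000):
    lhs = von_mangoldt(n)
    rhs = sum(mobius(d) * log(n / d) for d in divisors(n))
    assert isclose(lhs, rhs, abs_tol=1e-9), n
print("Lambda(n) = sum_(d|n) mu(d) log(n/d) verified for 2 <= n < 2000")
```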
When ${f}$ is oscillating in a sufficiently “irrational” way, then one standard way to proceed is the method of Type I and Type II sums, which uses truncated versions of divisor identities such as (3) to expand out either (1) or (4) into linear (Type I) or bilinear sums (Type II) with which one can exploit the oscillation of ${f}$. For instance, Vaughan’s identity lets one rewrite the sum in (1) as the sum of the Type I sum
$\displaystyle \sum_{d \leq U} \mu(d) (\sum_{V/d \leq r \leq x/d} (\log r) f(rd)),$
the Type I sum
$\displaystyle -\sum_{d \leq UV} a(d) \sum_{V/d \leq r \leq x/d} f(rd),$
the Type II sum
$\displaystyle -\sum_{V \leq d \leq x/U} \sum_{U < m \leq x/V} \Lambda(d) b(m) f(dm),$
and the error term ${\sum_{n \leq V} \Lambda(n) f(n)}$, whenever ${1 \leq U, V \leq x}$ are parameters, and ${a, b}$ are the sequences
$\displaystyle a(d) := \sum_{e \leq U, f \leq V: ef = d} \Lambda(f) \mu(e)$
and
$\displaystyle b(m) := \sum_{d|m: d \leq U} \mu(d).$
Similarly one can express (4) as the Type I sum
$\displaystyle -\sum_{d \leq UV} c(d) \sum_{UV/d \leq r \leq x/d} f(rd),$
the Type II sum
$\displaystyle - \sum_{V < d \leq x/U} \sum_{U < m \leq x/d} \mu(m) b(d) f(dm)$
and the error term ${\sum_{n \leq UV} \mu(n) f(n)}$, whenever ${1 \leq U,V \leq x}$ with ${UV \leq x}$, and ${c}$ is the sequence
$\displaystyle c(d) := \sum_{e \leq U, f \leq V: ef = d} \mu(e) \mu(f).$
After eliminating troublesome sequences such as ${a(), b(), c()}$ via Cauchy-Schwarz or the triangle inequality, one is then faced with the task of estimating Type I sums such as
$\displaystyle \sum_{r \leq y} f(rd)$
or Type II sums such as
$\displaystyle \sum_{r \leq y} f(rd) \overline{f(rd')}$
for various ${y, d, d' \geq 1}$. Here, the trivial bound is ${O(y)}$, but due to a number of logarithmic inefficiencies in the above method, one has to obtain bounds that are more like ${O( \frac{y}{\log^C y})}$ for some constant ${C}$ (e.g. ${C=5}$) in order to end up with an asymptotic such as (1) or (4).
However, in a recent paper of Bourgain, Sarnak, and Ziegler, it was observed that as long as one is only seeking the Möbius orthogonality (4) rather than the von Mangoldt orthogonality (1), one can avoid losing any logarithmic factors, and rely purely on qualitative equidistribution properties of ${f}$. A special case of their orthogonality criterion (which actually dates back to an earlier paper of Kátai, as was pointed out to me by Nikos Frantzikinakis) is as follows:
Proposition 1 (Orthogonality criterion) Let ${f: {\bf N} \rightarrow {\bf C}}$ be a bounded function such that
$\displaystyle \sum_{n \leq x} f(pn) \overline{f(qn)} = o(x) \ \ \ \ \ (5)$
for any distinct primes ${p, q}$ (where the decay rate of the error term ${o(x)}$ may depend on ${p}$ and ${q}$). Then
$\displaystyle \sum_{n \leq x} \mu(n) f(n) =o(x). \ \ \ \ \ (6)$
Actually, the Bourgain-Sarnak-Ziegler paper establishes a more quantitative version of this proposition, in which ${\mu}$ can be replaced by an arbitrary bounded multiplicative function, but we will content ourselves with the above weaker special case. (See also these notes of Harper, which use the Kátai argument to give a slightly weaker quantitative bound in the same spirit.) This criterion can be viewed as a multiplicative variant of the classical van der Corput lemma, which in our notation asserts that ${\sum_{n \leq x} f(n) = o(x)}$ if one has ${\sum_{n \leq x} f(n+h) \overline{f(n)} = o(x)}$ for each fixed non-zero ${h}$.
As a sample application, Proposition 1 easily gives a proof of the asymptotic
$\displaystyle \sum_{n \leq x} \mu(n) e^{2\pi i \alpha n} = o(x)$
for any irrational ${\alpha}$. (For rational ${\alpha}$, this is a little trickier, as it is basically equivalent to the prime number theorem in arithmetic progressions.) The paper of Bourgain, Sarnak, and Ziegler also applies this criterion to nilsequences (obtaining a quick proof of a qualitative version of a result of Ben Green and myself, see these notes of Ziegler for details) and to horocycle flows (for which no Möbius orthogonality result was previously known).
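Both the hypothesis (5) and the conclusion (6) are easy to probe numerically for a concrete choice such as ${f(n) = e^{2\pi i \sqrt{2} n}}$. The sketch below (assuming SymPy) computes the normalised left-hand sides for a moderate ${x}$; both are visibly small, as one expects:

```python
import cmath
import math
from sympy import factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

alpha = math.sqrt(2)

def f(n):
    return cmath.exp(2j * math.pi * alpha * n)

X = 10**5

# conclusion (6): (1/x) sum_(n<=x) mu(n) f(n)
s6 = sum(mobius(n) * f(n) for n in range(1, X + 1)) / X
print(f"|(1/x) sum mu(n) f(n)|          = {abs(s6):.5f}")

# hypothesis (5) for a few pairs of distinct primes p, q
for p, q in ((2, 3), (3, 5), (5, 7)):
    s5 = sum(f(p * n) * f(q * n).conjugate() for n in range(1, X + 1)) / X
    print(f"|(1/x) sum f({p}n) conj(f({q}n))| = {abs(s5):.5f}")
```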
Informally, the connection between (5) and (6) comes from the multiplicative nature of the Möbius function. If (6) failed, then ${\mu(n)}$ exhibits strong correlation with ${f(n)}$; by change of variables, we then expect ${\mu(pn)}$ to correlate with ${f(pn)}$ and ${\mu(qn)}$ to correlate with ${f(qn)}$, for “typical” ${p,q}$ at least. On the other hand, since ${\mu}$ is multiplicative, ${\mu(pn)}$ exhibits strong correlation with ${\mu(qn)}$. Putting all this together (and pretending correlation is transitive), this would give the claim (in the contrapositive). Of course, correlation is not quite transitive, but it turns out that one can use the Cauchy-Schwarz inequality as a substitute for transitivity of correlation in this case.
I will give a proof of Proposition 1 below the fold (which is not quite based on the argument in the above mentioned paper, but on a variant of that argument communicated to me by Tamar Ziegler, and also independently discovered by Adam Harper). The main idea is to exploit the following observation: if ${P}$ is a “large” but finite set of primes (in the sense that the sum ${A := \sum_{p \in P} \frac{1}{p}}$ is large), then for a typical large number ${n}$ (much larger than the elements of ${P}$), the number of primes in ${P}$ that divide ${n}$ is pretty close to ${A = \sum_{p \in P} \frac{1}{p}}$:
$\displaystyle \sum_{p \in P: p|n} 1 \approx A. \ \ \ \ \ (7)$
A more precise formalisation of this heuristic is provided by the Turan-Kubilius inequality, which is proven by a simple application of the second moment method.
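The heuristic (7) is also easy to see numerically: take ${P}$ to be, say, the primes up to ${1000}$, so that ${A = \sum_{p \in P} 1/p \approx 2.2}$, and look at how ${\#\{p \in P: p | n\}}$ is distributed for ${n}$ of size around ${10^8}$. A small sketch:

```python
import random

def primes_upto(m):
    sieve = bytearray([1]) * (m + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i:: i] = bytearray(len(sieve[i * i:: i]))
    return [i for i in range(2, m + 1) if sieve[i]]

P = primes_upto(1000)
A = sum(1.0 / p for p in P)

samples = [random.randrange(10**8, 2 * 10**8) for _ in range(20_000)]
counts = [sum(1 for p in P if n % p == 0) for n in samples]

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(f"A = sum_p 1/p              = {A:.3f}")
print(f"mean of #(p in P : p | n)  = {mean:.3f}")
print(f"variance                   = {var:.3f}   (Turan-Kubilius says this is O(A))")
```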
In particular, one can sum (7) against ${\mu(n) f(n)}$ and obtain an approximation
$\displaystyle \sum_{n \leq x} \mu(n) f(n) \approx \frac{1}{A} \sum_{p \in P} \sum_{n \leq x: p|n} \mu(n) f(n)$
that approximates a sum of ${\mu(n) f(n)}$ by a bunch of sparser sums of ${\mu(n) f(n)}$. Since
$\displaystyle x = \frac{1}{A} \sum_{p \in P} \frac{x}{p},$
we see (heuristically, at least) that in order to establish (4), it would suffice to establish the sparser estimates
$\displaystyle \sum_{n \leq x: p|n} \mu(n) f(n) = o(\frac{x}{p})$
for all ${p \in P}$ (or at least for “most” ${p \in P}$).
Now we make the change of variables ${n = pm}$. As the Möbius function is multiplicative, we usually have ${\mu(n) = \mu(p) \mu(m) = - \mu(m)}$. (There is an exception when ${n}$ is divisible by ${p^2}$, but this will be a rare event and we will be able to ignore it.) So it should suffice to show that
$\displaystyle \sum_{m \leq x/p} \mu(m) f(pm) = o( x/p )$
for most ${p \in P}$. However, by the hypothesis (5), the sequences ${m \mapsto f(pm)}$ are asymptotically orthogonal as ${p}$ varies, and this claim will then follow from a Cauchy-Schwarz argument.
I’ve just uploaded to the arXiv the paper A remark on partial sums involving the Möbius function, submitted to Bull. Aust. Math. Soc.
The Möbius function ${\mu(n)}$ is defined to equal ${(-1)^k}$ when ${n}$ is the product of ${k}$ distinct primes, and equal to zero otherwise; it is closely connected to the distribution of the primes. In 1906, Landau observed that one could show using purely elementary means that the prime number theorem
$\displaystyle \sum_{p \leq x} 1 = (1+o(1)) \frac{x}{\log x} \ \ \ \ \ (1)$
(where ${o(1)}$ denotes a quantity that goes to zero as ${x \rightarrow \infty}$) was logically equivalent to the partial sum estimates
$\displaystyle \sum_{n \leq x} \mu(n) = o(x) \ \ \ \ \ (2)$
and
$\displaystyle \sum_{n \leq x} \frac{\mu(n)}{n} = o(1); \ \ \ \ \ (3)$
we give a sketch of the proof of these equivalences below the fold.
On the other hand, these three estimates are all easy to prove if the ${o()}$ terms are replaced by their ${O()}$ counterparts. For instance, by observing that the binomial coefficient ${\binom{2n}{n}}$ is bounded by ${4^n}$ on the one hand (by Pascal’s triangle or the binomial theorem), and is divisible by every prime between ${n}$ and ${2n}$ on the other hand, we conclude that
$\displaystyle \sum_{n < p \leq 2n} \log p \leq n \log 4$
from which it is not difficult to show that
$\displaystyle \sum_{p \leq x} 1 = O( \frac{x}{\log x} ). \ \ \ \ \ (4)$
Also, since ${|\mu(n)| \leq 1}$, we clearly have
$\displaystyle |\sum_{n \leq x} \mu(n)| \leq x.$
Finally, one can also show that
$\displaystyle |\sum_{n \leq x} \frac{\mu(n)}{n}| \leq 1. \ \ \ \ \ (5)$
Indeed, assuming without loss of generality that ${x}$ is a positive integer, and summing the inversion formula ${1_{n=1} = \sum_{d|n} \mu(d)}$ over all ${n \leq x}$ one sees that
$\displaystyle 1 = \sum_{d \leq x} \mu(d) \left\lfloor \frac{x}{d}\right \rfloor = \sum_{d \leq x} \mu(d) \frac{x}{d} - \sum_{d \leq x} \mu(d) \left \{ \frac{x}{d} \right \}$
and the claim follows by bounding ${|\mu(d) \left \{ \frac{x}{d} \right\}|}$ by ${1}$.
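The bound (5) is also pleasant to verify numerically: apart from the trivial value ${1}$ at ${x=1}$, the partial sums ${\sum_{n \leq x} \mu(n)/n}$ stay well inside ${[-1,1]}$. A small sketch, assuming SymPy (ordinary floating point is plenty accurate at this scale):

```python
from sympy import factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

X = 50_000
s = 0.0
worst = 0.0
for n in range(1, X + 1):
    s += mobius(n) / n
    if n > 1:               # at x = 1 the sum is exactly 1, so the bound is sharp there
        worst = max(worst, abs(s))
print(f"max over 2 <= x <= {X} of |sum_(n<=x) mu(n)/n| = {worst:.6f}  (elementary bound: 1)")
```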
In this paper I extend these observations to more general multiplicative subsemigroups of the natural numbers. More precisely, if ${P}$ is any set of primes (finite or infinite), I show that
$\displaystyle |\sum_{n \in \langle P \rangle: n \leq x} \frac{\mu(n)}{n}| \leq 1 \ \ \ \ \ (7)$
and that
$\displaystyle \sum_{n \in \langle P \rangle: n \leq x} \frac{\mu(n)}{n} = \prod_{p \in P} (1-\frac{1}{p}) + o(1), \ \ \ \ \ (8)$
where ${\langle P \rangle}$ is the multiplicative semigroup generated by ${P}$, i.e. the set of natural numbers whose prime factors lie in ${P}$.
Actually the methods are completely elementary (the paper is just six pages long), and I can give the proof of (7) in full here. Again we may take ${x}$ to be a positive integer. Clearly we may assume that
$\displaystyle \sum_{n \in \langle P \rangle: n \leq x} \frac{1}{n} > 1, \ \ \ \ \ (9)$
as the claim is trivial otherwise.
If ${P'}$ denotes the primes that are not in ${P}$, then Möbius inversion gives us
$\displaystyle 1_{n \in \langle P' \rangle} = \sum_{d|n; d \in \langle P \rangle} \mu(d).$
Summing this for ${1 \leq n \leq x}$ gives
$\displaystyle \sum_{n \in \langle P' \rangle: n \leq x} 1 = \sum_{d \in \langle P \rangle: d \leq x} \mu(d) \frac{x}{d} - \sum_{d \in \langle P \rangle: d \leq x} \mu(d) \left \{ \frac{x}{d} \right \}.$
We can bound ${|\mu(d) \left \{ \frac{x}{d} \right \}| \leq 1 - \frac{1}{d}}$ and so
$\displaystyle |\sum_{d \in \langle P \rangle: d \leq x} \mu(d) \frac{x}{d}| \leq \sum_{n \in \langle P' \rangle: n \leq x} 1 + \sum_{n \in \langle P \rangle: n \leq x} 1 - \sum_{n \in \langle P \rangle: n \leq x} \frac{1}{n}.$
The claim now follows from (9), since ${\langle P \rangle}$ and ${\langle P' \rangle}$ overlap only at ${1}$.
As special cases of (7) we see that
$\displaystyle |\sum_{d \leq x: d|m} \frac{\mu(d)}{d}| \leq 1$
and
$\displaystyle |\sum_{n \leq x: (m,n)=1} \frac{\mu(n)}{n}| \leq 1$
for all ${m,x}$. Since ${\mu(mn) = \mu(m) \mu(n) 1_{(m,n)=1}}$, we also have
$\displaystyle |\sum_{n \leq x} \frac{\mu(mn)}{n}| \leq 1.$
One might hope that these inequalities (which gain a factor of ${\log x}$ over the trivial bound) might be useful when performing effective sieve theory, or effective estimates on various sums involving the primes or arithmetic functions.
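These special cases are easy to test numerically as well; the sketch below (assuming SymPy) checks the coprimality variant for a few moduli ${m}$, and the partial sums indeed stay within ${[-1,1]}$:

```python
from math import gcd
from sympy import factorint

def mobius(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

X = 20_000
mu = [0] + [mobius(n) for n in range(1, X + 1)]

for m in (1, 2, 6, 30, 210):
    s = 0.0
    worst = 0.0
    for n in range(1, X + 1):
        if gcd(m, n) == 1:
            s += mu[n] / n
        worst = max(worst, abs(s))
    print(f"m = {m:3d}:  max over x <= {X} of |sum_(n<=x, (m,n)=1) mu(n)/n| = {worst:.4f}")
```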
This inequality (7) is so simple to state and prove that I must think that it was known to, say, Landau or Chebyshev, but I can’t find any reference to it in the literature. [Update, Sep 4: I have learned that similar results have been obtained in a paper by Granville and Soundararajan, and have updated the paper appropriately.] The proof of (8) is a simple variant of that used to prove (7) but I will not detail it here.
Curiously, this is one place in number theory where the elementary methods seem superior to the analytic ones; there is a zeta function ${\zeta_P(s) = \sum_{n \in \langle P \rangle} \frac{1}{n^s} = \prod_{p \in P} (1-\frac{1}{p^s})^{-1}}$ associated to this problem, but it need not have a meromorphic continuation beyond the region ${\{ \hbox{Re}(s) > 1 \}}$, and it turns out to be remarkably difficult to use this function to establish the above results. (I do have a proof of this form, which I in fact found before I stumbled on the elementary proof, but it is far longer and messier.)
Ben Green and I have just uploaded to the arXiv our paper, “The Möbius function is asymptotically orthogonal to nilsequences“, which is a sequel to our earlier paper “The quantitative behaviour of polynomial orbits on nilmanifolds“, which I talked about in this post. In this paper, we apply our previous results on quantitative equidistribution polynomial orbits in nilmanifolds to settle the Möbius and nilsequences conjecture from our earlier paper, as part of our program to detect and count solutions to linear equations in primes. (The other major plank of that program, namely the inverse conjecture for the Gowers norm, remains partially unresolved at present.) Roughly speaking, this conjecture asserts the asymptotic orthogonality
$\displaystyle |\frac{1}{N} \sum_{n=1}^N \mu(n) f(n)| \ll_A \log^{-A} N$ (1)
between the Möbius function $\mu(n)$ and any Lipschitz nilsequence f(n), by which we mean a sequence of the form $f(n) = F(g^n x)$ for some orbit $g^n x$ in a nilmanifold $G/\Gamma$, and some Lipschitz function $F: G/\Gamma \to {\Bbb C}$ on that nilmanifold. (The implied constant can depend on the nilmanifold and on the Lipschitz constant of F, but it is important that it be independent of the generator g of the orbit or the base point x.) The case when f is constant is essentially the prime number theorem; the case when f is periodic is essentially the prime number theorem in arithmetic progressions. The case when f is almost periodic (e.g. $f(n) = e^{2\pi i \alpha n}$ for some irrational $\alpha$) was established by Davenport, using the method of Vinogradov. The case when f was a 2-step nilsequence (such as the quadratic phase $f(n) = e^{2\pi i \alpha n^2}$; bracket quadratic phases such as $f(n) = e^{2\pi i \lfloor \alpha n \rfloor \beta n}$ can also be covered by an approximation argument, though the logarithmic decay in (1) is weakened as a consequence) was done by Ben and myself a few years ago, by a rather ad hoc adaptation of Vinogradov’s method. By using the equidistribution theory of nilmanifolds, we were able to apply Vinogradov’s method more systematically, and in fact the proof is relatively short (20 pages), although it relies on the 64-page predecessor paper on equidistribution. I’ll talk a little bit more about the proof after the fold.
There is an amusing way to interpret the conjecture (using the close relationship between nilsequences and bracket polynomials) as an assertion of the pseudorandomness of the Liouville function from a computational complexity perspective. Suppose you possess a calculator with the wonderful property of being infinite precision: it can accept arbitrarily large real numbers as input, manipulate them precisely, and also store them in memory. However, this calculator has two limitations. Firstly, the only operations available are addition, subtraction, multiplication, integer part $x \mapsto [x]$, fractional part $x \mapsto \{x\}$, memory store (into one of O(1) registers), and memory recall (from one of these O(1) registers). In particular, there is no ability to perform division. Secondly, the calculator only has a finite display screen, and when it shows a real number, it only shows O(1) digits before and after the decimal point. (Thus, for instance, the real number 1234.56789 might be displayed only as $\ldots 34.56\ldots$.)
Now suppose you play the following game with an opponent.
1. The opponent specifies a large integer d.
2. You get to enter in O(1) real constants of your choice into your calculator. These can be absolute constants such as $\sqrt{2}$ and $\pi$, or they can depend on d (e.g. you can enter in $10^{-d}$).
3. The opponent randomly selects a d-digit integer n, and enters n into one of the registers of your calculator.
4. You are allowed to perform O(1) operations on your calculator and record what is displayed on the calculator’s viewscreen.
5. After this, you have to guess whether the opponent’s number n had an odd or even number of prime factors (i.e. you guess $\lambda(n)$.)
6. If you guess correctly, you win $1; otherwise, you lose $1.
For instance, using your calculator you can work out the first few digits of $\{ \sqrt{2}n \lfloor \sqrt{3} n \rfloor \}$, provided of course that you entered the constants $\sqrt{2}$ and $\sqrt{3}$ in advance. You can also work out the leading digits of n by storing $10^{-d}$ in advance, and computing the first few digits of $10^{-d} n$.
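Both of these computations can be mimicked with exact decimal arithmetic standing in for the infinite-precision calculator; here is a minimal sketch using Python's decimal module (the choices of d and of the working precision are arbitrary):

```python
import random
from decimal import Decimal, getcontext, ROUND_FLOOR

d = 40                     # number of digits of the opponent's integer
getcontext().prec = 3 * d  # generous working precision for these expressions

n = random.randrange(10 ** (d - 1), 10 ** d)   # the opponent's d-digit integer

sqrt2 = Decimal(2).sqrt()
sqrt3 = Decimal(3).sqrt()

# fractional part { sqrt(2) * n * floor(sqrt(3) * n) }, displayed to a few digits
inner = (sqrt3 * n).to_integral_value(rounding=ROUND_FLOOR)
value = sqrt2 * n * inner
frac = value - value.to_integral_value(rounding=ROUND_FLOOR)
print("first digits of { sqrt(2) n floor(sqrt(3) n) } :", str(frac)[:8])

# leading digits of n, obtained by multiplying by the stored constant 10^(-d)
print("leading digits of n                            :", str(Decimal(n) * Decimal(10) ** (-d))[:8])
```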
Our theorem is equivalent to the assertion that as d goes to infinity (keeping the O(1) constants fixed), your probability of winning this game converges to 1/2; in other words, your calculator becomes asymptotically useless to you for the purposes of guessing whether n has an odd or even number of prime factors, and you may as well just guess randomly.
[I should mention a recent result in a similar spirit by Mauduit and Rivat; in this language, their result asserts that knowing the last few digits of the digit-sum of n does not increase your odds of guessing $\lambda(n)$ correctly.]
# Tag Info
13
$Encrypt(m|H(m))$ is not an operating mode providing authentication; forgeries are possible in some very real scenarios. Depending on the encryption used, that can be assuming only known plaintext. Here is a simple example with $Encrypt$ a stream cipher, including any block cipher in CTR or OFB mode. Mallory wants to sign some message $m$ of his choice. ...
12
Actually, that wikipedia article you mention in your question already answers your question: It is moderately common for companies and sometimes even standards bodies as in the case of the CSS encryption on DVDs – to keep the inner workings of a system secret. Some argue this "security by obscurity" makes the product safer and less vulnerable to attack. ...
11
I'll comment only on the statement referring to an AES-256 replacement with a 4096-bit key: “According to our engineers, this will take 23840 times longer to crack than aes256”. Bob, writing that, is not able to correctly transcribe even the numbers that engineer Alice allegedly spelled: most likely, $23840$ is intended to be $2^{3840}$, which is the ratio ...
8
This is actually to a great extent a question of terminology, and ultimately of which security claims you are prepared to make, more than it is a practical question. In short: You may draw the line between the key space and the algorithm any way you want, but the way you draw that line will have implications regarding which security claims you are able to ...
7
ElGamal appears to be used instead of Diffie-Hellman (or IES) in OpenPGP mostly because when that format was put together, there were some unresolved intellectual property issues surrounding both RSA and Diffie-Hellman, while ElGamal was unproblematic. This trend for ElGamal seems to stick around, mostly by force of habit, e.g. when switching to ...
7
As @D.W. guessed, the branching program for a circuit essentially reveals the original circuit. It's not clear what you mean by "apply the whole obfuscation process to the circuit-revealing branching program," but the prospects for that do not seem good: evaluating the branching program is highly sequential (polynomial depth), and you would need to ...
6
Yes, this is a perfectly secure solution. The principal drawback is that the precision available is limited by the round-trip time to the trusted time server. If the trusted time server is 50ms away (i.e., 100ms round-trip time), which is a plausible situation that might arise in real life, then you cannot synchronize the client's time to within a ...
6
We simply have to trust this party because this scheme requires a trusted dealer (a party that distributes the shares of the secret to the participants - this can be you or some other party - but if it's you, you should trust yourself). We can use verifiable secret sharing, which allows the parties to check whether the shares they have obtained are consistent, ...
6
Some brief thoughts: Shared secret Generation: $$s=E_a(B)=E_b(A)$$ The shared secret is generated by encrypting the other users public key with your private key. This is effectively an ECDH step, which is very reasonable, and one of the key aims of C25519$^{[1]}$. Key Generation: $$s_0=\mathrm{SHA256}(s); s_i=\mathrm{SHA256}(s_{i-1})$$ First, using the ...
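A minimal sketch of just the hash-chain key schedule described above (illustration only; the surrounding ECDH step is not shown, the 32-byte input is a hypothetical stand-in for the shared secret, and this is not a vetted KDF):

```python
import hashlib

def key_chain(shared_secret, count):
    """s_0 = SHA256(s), s_i = SHA256(s_(i-1)), as in the scheme described above."""
    keys = []
    k = hashlib.sha256(shared_secret).digest()
    for _ in range(count):
        keys.append(k)
        k = hashlib.sha256(k).digest()
    return keys

s = bytes(range(32))   # hypothetical shared secret standing in for the ECDH output
for i, k in enumerate(key_chain(s, 3)):
    print(f"s_{i} = {k.hex()}")
```

Note that anyone who learns some $s_i$ can compute every later key in the chain (though not the earlier ones), which is one reason modern designs tend to prefer a keyed derivation such as HKDF for this step.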
5
When publishing the algorithm, other people can review your algorithm and design. With this they can find flaws in your design and therefore improve it. The principle behind this is simply that many eyes find more errors than two eyes. Why do you think NIST reviews new candidates for cipher suites (AES)? This is to improve the security by having ...
5
I think it is still possible to use UC in this case. Recall the setup for the UC framework. We have an ideal world and a real world. There are parties $P_1,\dots,P_n$ in each world and an environment $\mathcal{Z}$ in each. In the real world we have the adversary $\mathcal{A}$ while in the ideal world, we have an ideal functionality $\mathcal{F}$ and a ...
5
TLS 1.0 uses initialization vector (IV) to refer to two different processes. TLS 1.1 introduces a new type of IV that causes an entire block to be discarded and isn't directly comparable to the old series of IVs based on CBC residue. By simply changing an operation at the beginning of a record, the hope was apparently to make implementations easy to patch ...
5
You should encrypt the data using a well-vetted standard, like TLS (for data in motion) or GPG (for data at rest). Designing your own is more likely to lead to sadness. The format of the data that you protect in this way is up to you and can be broken down into structs and chunks and headers etc. to your heart's delight.
5
This is pretty much the schoolbook implementation of a shared random number generation (generate, commit, publish). So yeah, it's secure. But this only works for large random numbers, here's a small adaption that allows for arbitrary size integers: If you need an $n$-bit random number everyone should generate $n$-bit random numbers - this is independent of ...
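A minimal sketch of the generate/commit/publish pattern described above, using SHA-256 hash commitments with random nonces (illustration only: a real protocol additionally needs authenticated channels, a rule for handling aborts, and so on):

```python
import hashlib
import secrets
from functools import reduce

N_BYTES = 16   # a 128-bit shared random value

def commit(value):
    """Hash commitment to a value; returns (commitment, opening nonce)."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(commitment, value, nonce):
    return hashlib.sha256(nonce + value).digest() == commitment

# each party generates its own contribution and a commitment to it
parties = []
for _ in range(3):
    contribution = secrets.token_bytes(N_BYTES)
    c, nonce = commit(contribution)
    parties.append((contribution, c, nonce))

# phase 1: commitments are exchanged; phase 2: everyone opens and checks the openings
assert all(verify(c, v, nonce) for v, c, nonce in parties)

# the shared random value is the XOR of all opened contributions
xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
shared = reduce(xor, (v for v, _, _ in parties))
print("shared 128-bit value:", shared.hex())
```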
5
Given: The attacker can call PRP() and the inverse function prp() on any message of his choosing. PRP is a pseudorandom permutation indistinguishable to the attacker from a random permutation. Assuming R and K are "sufficiently large", perfectly random, and never leaked to the attacker -- in particular, during a chosen-ciphertext attack, the decryptor only ...
5
QKD aims at exchanging key material to be used with encryption based on OTPs between two parties and thus to achieve perfect secrecy for transmitted messages. There are, however, several drawbacks for practical use in a wired setting of QKD (required hardware and their vulnerability to hacks, limited distance which does not support end-to-end ...
5
This is a classical example. Here is the proof system… Bob gives two gloves to Alice so that she is holding one in each hand. Bob can see the gloves at this point, but Bob doesn't tell Alice which is which. Alice then puts both hands behind her back. Next, she either switches the gloves between her hands, or leaves them be, with probability $1/2$ each. ...
5
RFC 6176 lists four reasons why SSL 2.0 must not be used, in its section 2: Message authentication uses MD5 [MD5]. Most security-aware users have already moved away from any use of MD5 [RFC6151]. Handshake messages are not protected. This permits a man-in-the- middle to trick the client into picking a weaker cipher suite than it would ...
5
No, this protocol does not provide perfect forward secrecy. Record the initial key transport message (shared via RSA-OAEP). If the attacker later gets access to the corresponding RSA private key, and decrypts the original key transport message, the entire symmetric key evolution sequence for that session will trivially unfold.
4
Private Set Intersection How about a private set intersection protocol? The bank's input is a set of all of their account numbers, the user's input is their account number (a single member set). The output could be given to the user, or the bank, or both, depending on your needs. You would need a way to protect against guessing account numbers. For ...
4
Annex E.1 of RFC 5246 contains the following text which is a nice summary of the situation: Note: some server implementations are known to implement version negotiation incorrectly. For example, there are buggy TLS 1.0 servers that simply close the connection when the client offers a version newer than TLS 1.0. Also, it is known that some servers will ...
4
Short answer: Because the browser developers have long thought interoperability to be more important than security and standard compliance. Slightly longer answer: Some SSL/TLS server implementations do not negotiate the protocol version correctly, but terminate the connection with a fatal alert if the client attempts to negotiate a protocol version that ...
4
I am assuming that the vault shall store arbitrary-length messages and associate with each message a token consisting of six decimal digits. Otherwise, as has been noted (see below), the problem is probably either impossible or trivial. I interpret your requirements to mean that the detokenization algorithm is also available to an attacker that has gotten ...
4
The claims made are pretty much all nonsense or do not represent an accurate understanding of the state of the art. I'm not going to go into a point-by-point response; suffice it to say that I would not trust any advice or representations they may make about what is or isn't secure. Their system might be fine, or it might not be, but their public ...
4
The current specification says that tracker GET requests specify the following variables: uploaded=... (bytes) downloaded=... (bytes) left=... (bytes) This is great for public trackers but is poorly designed for private trackers. The problem is that the numbers don't always add up as they should and this can be for several reasons. For example, you might ...
4
Use the exponential variant of ElGamal, where the plaintext is encoded in the exponent. Elliptic curve ElGamal is fine. In fact, any public key cryptosystem which allows raising ciphertexts to a power such that this operation corresponds homomorphically to multiplication for the plaintext. Your commitments are $c_x = \mathsf{E}(x)$; $c_y = \mathsf{E}(y)$; ...
4
There are two things going on that together may make plain-hash-then-encrypt insecure. First, the distinction between secure MACs and hashes, which is that a hash function may allow you to derive $H(m')$ from $H(m)$ even if you only know how $m'$ and $m$ differ. Length extension attacks on SHA-1 and SHA-2 are a practical way that can happen, but there could be others ...
4
I would say there are three general areas of necessary expertise for most crypto-related jobs: Knowledge of primitives and their use cases. Knowledge of protocols and understanding how to reason about their security. Deep and abiding understanding of how incredibly stupid people are, including oneself. The most that knowing the math is going to do for ...
3
There was a post on security.stackexchange last week about this. SSL/TLS with Certificate Authorities for all intents and purposes is now completely insecure from governments and any organisation who has a CA pre-trusted inside the standard web browsers. DNSSEC will also fall under the same scenario because at the top level you have a particular government ...
3
In contrast to asymmetric schemes (notably RSA and El Gamal) which require some sort of computation to generate the key, the only constraint one has when selecting a key for DES or AES (or 3DES) is to make it look indistinguishible from a random stream. That said both El Gamal and RSA require some randomness in key generation, but that phase does not depend ...
ISSN 1006-2157 CN 11-3574/R
### Quality characteristics and correlation analysis of medicinal preparations from Xiongshao Fang*
CHI Lei1, SUN Ya-shu1, YANG Shu-juan1, PAN Ting1, WEN Jing2, HUANG Ya-ting1, LI Huan-juan1, ZHANG Lu1, PENG Ping1, JIANG Yan-yan1,3, SHI Ren-bing1,3*
1 Key Unit of Exploring Effective Substances of Classical and Famous Formulas of State Administration of Traditional Chinese Medicine, School of Chinese Materia Medica, Beijing University of Chinese Medicine, Beijing 100102;
2 School of Traditional Chinese Medicine, Capital University of Medical Sciences;
3 Quality Control Technology and Engineering Center of Chinese Medicine, Beijing Municipal Commission of Education
• Received:2014-10-25 Online:2015-03-28 Published:2015-03-28
Abstract: Objective To determine the key factors affecting the quality of Xiongshao Fang and the best preparation form of the medicinals in Xiongshao Fang, laying the groundwork for an anti-migraine drug developed from Xiongshao Fang. Methods Based on the drug system, the contents of the effective index components in the preparations of Xiongshao Fang were determined by HPLC-PDA. The quality of the medicinal preparations from Xiongshao Fang was characterized in terms of content, relative quantity, relative content ratio, yield rate and anti-migraine effect, and the characterization results were subjected to correlation analysis. Results The anti-migraine effect of the three medicinal preparations from Xiongshao Fang decreased in the following order: enrichment, ethanol extract, water extract. The medicine-efficacy correlation analysis showed that the enrichment was the best preparation form of the medicinals in Xiongshao Fang, with the highest content of effective index components, the lowest extraction rate, the lowest dosage and the best anti-migraine effect. Conclusion The method for determining the effective index components of Xiongshao Fang is easy to perform, accurate and highly reproducible. Based on the basic composition and extraction rate of the medicinal system, the key to increasing medicinal efficacy is raising the content and optimizing the proportions of the relevant effective components.
CLC Number:
• R284.1
# Approximating a step function with polynomials
The Weierstrass approximation theorem says any continuous function $f(x): [0,1] \to \mathbb{R}$ can be approximated uniformly by polynomials. Given any $\epsilon$, we can find $p(x) = x^n + \dots$ such that:
$$|f(x) - p(x)| < \epsilon$$
is always true. In practice how do we find $p$? Let's say $f(x)$ was the step function:
$$f(x) = \left\{ \begin{array}{cc} 0 & x \leq 0 \\ 1 & x > 0\end{array} \right.$$
How do we find the polynomial with $\deg p = 100$ which minimizes the tolerance $\epsilon$ ?
$$\epsilon = \min_{\deg p = 10^2}\left[\max_{x \in [0,1]} | f(x) - p(x) | \right]$$
Hopefully I have defined a tractable problem... For example, does Lagrange Interpolation necessarily minimize $\epsilon$ ?
• You chose a poor example -- the theorem states that this is possible for continuous functions, but you chose a discontinuous function. Dec 28 '15 at 20:06
This problem is generally called the Minimax Problem. Unfortunately the step function is not continuous and therefore the Weierstrass approximation theorem does not apply. Any continuous approximation will have $\epsilon \ge 0.5$ since there is a jump of size 1 so the best you can do at that point is split the difference. In fact, $y = 1/2$ is as good as you can do for this problem due to the discontinuity although there are many higher order polynomials that fit better in other norms and just as well in the infinity norm that defines the minimax problem.
For continuous functions, Chebyshev polynomials give a good approximation to the minimax solution. There is also a commonly used iterative algorithm called the Remez algorithm which approximately solves the minimax problem. The Remez algorithm leverages the equioscillation theorem, which (to paraphrase) says that among all polynomials of degree less than or equal to $n$, the one whose error equioscillates, i.e. attains the extreme values $f(x) \pm C$ alternately at $n+2$ points of the desired interval, is the one which solves the minimax problem. The Remez algorithm iteratively adjusts the coefficients of a polynomial to (approximately) achieve an equioscillating approximation function, which is the solution to the minimax problem.
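To make this concrete, here is a rough numerical sketch using NumPy's Chebyshev routines. It fits Chebyshev least-squares approximations of increasing degree to the step function (with the jump moved to $x = 1/2$ so the samples straddle the discontinuity; the conclusion that the sup-norm error can never beat $1/2$ is unchanged) and to a smooth surrogate; the error for the step stalls, while for the continuous function it decays:

```python
import numpy as np

def sup_error(f, deg, xs, sample):
    """Sup-norm error on the grid xs of a degree-deg Chebyshev least-squares fit to f."""
    t_sample = 2 * sample - 1                     # map [0, 1] to [-1, 1] for the basis
    coeffs = np.polynomial.chebyshev.chebfit(t_sample, f(sample), deg)
    approx = np.polynomial.chebyshev.chebval(2 * xs - 1, coeffs)
    return np.max(np.abs(f(xs) - approx))

def step(x):
    return (x > 0.5).astype(float)                # jump of size 1 at x = 1/2

def smooth(x):
    return 1.0 / (1.0 + np.exp(-20 * (x - 0.5)))  # a continuous surrogate for the step

xs = np.linspace(0, 1, 20001)      # dense grid for measuring the sup-norm error
sample = np.linspace(0, 1, 2001)   # sample points for the least-squares fit

for deg in (5, 10, 20, 40):
    print(f"deg = {deg:2d}:  step error = {sup_error(step, deg, xs, sample):.3f},  "
          f"smooth error = {sup_error(smooth, deg, xs, sample):.2e}")
```

For the true minimax polynomial of a continuous function one would run the Remez iteration itself; the same equioscillation idea (in the Parks-McClellan form) is what `scipy.signal.remez` implements for FIR filter design.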
# If a 3 kg object moving at 9 m/s slows down to a halt after moving 27 m, what is the coefficient of kinetic friction of the surface that the object was moving over?
Apr 1, 2017
$\mu = 0.15$
#### Explanation:
The work being done on the object is Work due to friction so the following equation is going to be used:
Equation (a): ${W}_{f} = \Delta K E$
We can rewrite Equation (a), breaking down both sides step-by-step, to get Equation (b):
$\left(\mu \cdot m g\right) \cdot d \cdot \cos \theta = \left(\frac{1}{2} m {v}_{f}^{2} - \frac{1}{2} m {v}_{i}^{2}\right)$
$\mu = \text{coefficient of kinetic friction}$
$m = \text{mass (kg)}$
$g = \text{acceleration due to gravity} \left(\frac{m}{s} ^ 2\right)$
$d = \text{displacement} \left(m\right)$
$\theta = \text{angle between friction and displacement}$
${v}_{f} = \text{velocity final}$
${v}_{i} = \text{velocity initial}$
Since our object stopped, its final velocity becomes $0$ and therefore $\text{final KE}$ becomes $0$. Friction and displacement are opposite one another so $\cos \left({180}^{\circ}\right) = - 1$. Mass cancels on both sides. Rearrange, plug in, and solve.
$- \mu \cdot g \cdot d = - \frac{1}{2} {v}_{i}^{2}$
$\mu = \frac{0.5 \cdot {9}^{2}}{9.8 \cdot 27} = \frac{40.5}{264.6} = 0.15$
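A quick numerical check of the rearranged formula $\mu = {v}_{i}^{2} / \left(2 g d\right)$, using the same numbers as above:

```python
# kinetic friction coefficient from  -mu * g * d = -(1/2) v_i^2   =>   mu = v_i^2 / (2 g d)
v_i = 9.0   # initial speed, m/s
d = 27.0    # stopping distance, m
g = 9.8     # gravitational acceleration, m/s^2

mu = v_i ** 2 / (2 * g * d)
print(f"mu = {mu:.3f}")   # about 0.153, i.e. 0.15 to two significant figures
```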
# AWA TV model W571 - blue screen with white stripes
#### stoney
##### Newbie level 4
Hi, my AWA TV model W571 has a blue screen with white stripes. It's been to the repair shop 3 times in the last 10 days and it's still not working. It was suggested to me by someone else that placing a wire wrap around the flyback might solve the problem. When I suggested this to the technician he said no. What else could be wrong?
#### banjo
A bright blue screen with widely spaced white lines means the blue gun in the picture tube is running full on. This is either a defect on the video driver board or a picture tube defect. As the set ages, oxides can form in the guns of the picture tube. If the oxide forms a bridge across the elements, the picture tube will display these symptoms.
The repair shop should have easily been able to troubleshoot the cause. Does the set work again when you get it back? Does the set work for a short time and then go blue?
My wild guess is that they are trying to burn out the oxide with a picture tube rejuvenator and it has not been successful. Sometimes this works, and sometimes it does not.
--- Steve
#### stoney
##### Newbie level 4
hi
yes, on the repair warranty it made reference to a blue gun. Yes, when I've got it back it works OK for a while then goes again.
He says it's an intermittent problem. Not much good to me!
I read something about a rejuvenator somewhere else and they said that usually doesn't work. They also mentioned something about a wire wrap around the flyback. When I mentioned that to the technician when he picked up the TV for the 3rd time!!! he said his testing didn't indicate any form of short to the cathode.
Thanks for replying to my original post!
#### banjo
The wire around the flyback is probably useful only for a heater to cathode short. Probably the technician has found that the short is between the control grids and does not involve the cathode directly.
Rejuvenating to remove a short works some of the time. It is worth a try. However, it should be done with the tube resting on its face. That way any dislodged particles are carried by gravity down into the bell of the tube where they cannot do any harm.
I have also been able to tap out some shorts with a plastic brush handle. This is somewhat dangerous, so do not attempt it yourself or if you do wear safety glasses and a heavy jacket. The idea is to again place the set on its face and operate it until the screen goes blue. Once the oxides are causing the short, the neck of the tube is tapped with the plastic brush handle to shake the tube elements and dislodge the particle. The danger is two fold. First, the tube is operating so lots of high voltage is present. Secondly, the tube is under high vacuum, if you tap too hard, the tube could implode and throw off glass fragments. When successfully tapped, the screen will immediately return to normal.
I would question the repairman as to what methods they tried so far. If they have already tried the above, then assuming that the warranty on the picture tube has expired, I would start looking for a new TV. Unfortunately, the replacement cost of the tube is usually more than another set.
---- Steve
#### stoney
##### Newbie level 4
Hi, thanks again for replying. Am beginning to think you're right re a new set; this one's only 14 months old and yeah, replacing the tube is expensive and not economical.
Here's what was written on the original repair warranty (which is now 12 days old of its 30 day lifespan!):-
Disassembled unit, trace failures to picture tube and tube socket
Dislodge internal foreign material creating heater to cathode short circuit on blue gun
repair tube socket on neck ''plater'' circuit board.
replace heater dropping resistor and check plater circuit.
** note: I could be wrong re ''plater'', it's a bit hard to read the handwriting.
Thanks again for replying - appreciated.
I get the impression he doesn't like to be told or have suggestions made about what to do re repairs, and I figure if I get him on the wrong side he'll just put my job to the end of the queue, so I feel as though I am in a no-win situation.
It is frustrating that it works for a short while each time he's brought it back to me, then goes again, but as soon as he takes it back to the shop it seems to go OK again. I wonder if he's telling me the truth?? How would I know. From people I've spoken to he does seem to have a fairly good reputation around town.
Thing is, how long do I wait? What happens when his warranty runs out? Like I said, it's only for 30 days and it's 12 days old already. Is that what he's aiming for, that it will run out so I have to pay him again? Sounds cynical I know, but hell, it's been almost 2 weeks now and out of that time I've had use of the TV for only 4 days! Just not good enough. If I take it to another techo then I have to pay all over again.
Can't even report this guy to the Office of Fair Trading because he is at least trying (I presume!) to fix the problem. He says it's frustrating for him as well as me.
Today he informed me he has faxed the manufacturer for info. Now I don't know if that's a good sign or not?! Is he really trying to get this fixed, or is it a case of he doesn't know what to do and everything he's told me so far is just a case of stringing me along?!
You'd think a brand new TV would last more than 14 months ffs!?! Right?
Thanks for replying, you can send a direct reply if you like to [email protected]
cheers
john
#### banjo
John,
If the set is only 14 months old, then there is a good chance the picture tube is still under parts warranty. I would take the set back to the repair shop and ask who to contact about the warranty on the picture tube. Tube warranties have been dropping every year, but they are still typically at least 2 years. Ask for the phone number of the manufacturer's area representative. Call this person and politely tell them that you are extremely disappointed in the performance of their product.
If the manufacturer had a bad run of tubes, they will usually replace them for the customer's that complain. Sometimes they will cover parts and labor to make the customer happy. Other times only the parts. Typically, the bench charge for replacing a tube is about 2 hours since numerous adjustments are required. You would have to pay this labor charge, but it should be much cheaper than a new set.
Make sure the repair shop understands that if they can help you get the tube under warranty, then you will pay the install labor. Otherwise, they really do not have a big incentive to help you.
The reason for taking the set back immediately is twofold: first, it's not really usable to you. Secondly, when it is taking up shelf space at the repair shop as a return under their 30 day warranty period, they are more likely to give the matter a higher priority.
---- Steve
#### stoney
##### Newbie level 4
I contacted the technician again today and he said the problem I was having occurred briefly again yesterday (at least while it was in the repair shop this time), so he did a test and his equipment indicated there's no short to the cathode, so clearly the original repair was not the problem - right? (I am confused.) He now says it's definitely not a short to the blue gun but something outside the picture tube. He replaced a couple of resistors at the bottom (whatever that means) and it's running fine, but he wants to continue testing it. It seems to me it's guesswork on his part and I still have no TV. I said at this rate his warranty would be running out - it's now 13 days into the warranty - but he said not to worry about that as all his work is guaranteed. Not exactly sure what that means.
He also said yesterday he'd contacted the manufacturer via fax but as of yesterday had not received a reply. When I asked when I was gonna get it back he said as soon as he had a definite answer - doesn't exactly help me much.
But now he says it's definitely not related to the tube, so it would seem that it doesn't need replacing even if, as you suggest, the manufacturer would do the right thing by me seeing as the set is so new. I will keep you updated. At least on here there's no smart ass comments like on the Google electronics groups.
Thanks
john
ps if you need a scan of the original repair warranty I can scan and attach it next time, but I sort of covered it all in my previous post. Very disappointed by all this.
#### banjo
John,
Given the symptoms, the problem is a DC bias issue. It could be internal to the tube or external. Externally, this signal is handled as a dc-coupled signal typically from the video output IC to the picture tube. If we assume that the video output IC is OK since the other two colors are fine, then this leaves the tube and the driver transistors and components in the blue gun circuit. The transistors can be eliminated by either replacement or substitution. The identical pairs of transistors are in the red and green circuits. If the technician swaps the transistors and the problem moves to another color, then the transistors are intermittent and the repair is straightforward.
If he is offering a 30 day warranty, that time period should start after the repair is finally complete, not from the initial request for service. However, it is up to lawyers to interpret the real legal meaning of his warranty statements.
My first question now is: is this repair shop authorized to do warranty repairs on this brand? Check with the manufacturer; usually they have a listing on their web site about which shops are warranty stations in your area. Perhaps he is not authorized to replace the tube under warranty and is therefore not willing to consider it the culprit. I would still contact the manufacturer about your problem, explain the symptoms and if they are having picture tube problems, they should be pretty helpful. If the current shop is not the authorized warranty repair station, call the repair shop that is authorized and get their opinion. The authorized shop will see the biggest volume of these sets. Therefore, they will know if tubes are going out. Also, authorized shops would receive any service bulletins notifying them of any field upgrades or fixes for recurring problems.
You may have to take your losses, pull the set out of this shop, and take it to another one. Changing a picture tube is a fairly big job for a TV shop. With the prices of TVs constantly dropping, the new shops probably don't change as many tubes as we used to do in the old days. (I am an engineer now, having left the TV business 15 years ago.) Perhaps he is not willing to tackle this size of job.
Regardless, call the manufacturer. If their full warranty was 1 year, then you are only just a couple of months past that. It is unreasonable that a TV less than 2 years old is unrepairable! Keep asking the manufacturer who can repair it and who else in their company you can talk to about the problem.
--- Steve
#### stoney
##### Newbie level 4
Hi,
According to their Yellow Pages ad they do repairs to all makes and models, and when I first contacted them they did ask what brand, so I am now assuming from your post that they are ''allowed'' to do repairs on this brand.
Yes, I know some brands/companies have certain outlets for their models; there's only 1 listed for AWA TVs in my area but it's not easily accessible, which is why I contacted who I did.
When I first contacted AWA by phone and email via their website they just informed me ''we don't do repairs on TVs'' and to contact the place I bought it from and they'd tell me - how useless is that?! A lot of different brands don't even have ''official'' repairers in my area (and I don't live out in the sticks!) - and I'm in the capital city for god sakes!
Well, that's good on the warranty thing, that it starts from when the job is finally finished. Just wish it was; it's been 2 weeks now. I think the worst case scenario is that when I get it back it goes again, and I will either have to take it elsewhere or buy a new one!
Techo said it's been working fine in the shop (though it got the recurring fault again the other day - see my last post - so he replaced a couple of resistors) and he's had it running virtually all the time the store's been open. I wonder if that's a good thing? But then surely if I wanted to I could have the TV running several hours a day every day and it shouldn't be a problem.
Obviously the fixes he did initially weren't the cause of the problem; just hope that now he's replaced a couple more resistors I'm not gonna be charged more $$. I think he's holding on to it for so long to make sure nothing else goes wrong causing the problem while he's got it? Thing is though, as they say, how long's a piece of string?!
Techo did say he contacted the manufacturer a couple of days ago by fax for more info, but at that time he'd not heard back from them, and he didn't mention it when I called yesterday, so either he still hasn't had a reply or I'm getting the run around. Must admit the techo does seem honest though. Unless his constant ''frustration'' comment is a facade. Do you have access to a manual for this particular model? I can't find one listed online anywhere. As you say, a new TV should last more than 14 months.
I get the impression the techo doesn't like me to make suggestions as to the problem/cause and what he should try. Sort of like, I'm the techo and I'm having problems, so how would you know, type of thing. I just want it fixed and returned. Maybe the longer he has it is a good thing, so he can cover all options?! Hard to judge. Strange there's not much info on AWA TVs online, yet it's a brand that's been around for years (I think mostly imported now though) and I've always liked AWA, never had any problems, so it's why I've stuck with them. Thanks for the reply, I will keep you updated - it seems AWA don't care; like most big companies, all they're interested in is the $$ profit!
John
#### stoney
##### Newbie level 4
Re: AWA TV model W571 blue screen with white stripes UPDATE
Hi,
an update - cause FINALLY found!
Techo rang this arvo and said the cause had finally been found. I'll try to explain it as best I can remember.
There is a pin inside a socket attached to the blue gun which was becoming ''grounded'' and thus swamping all the other colours with blue, hence my problem.
Now, this supposedly isn't as common as most think (typical, never anything easy!) and requires a spare part which has to be ordered from interstate - typical again! So in the meantime he's washed this socket with some kind of solution to prevent the pin from grounding, and once the part arrives from the manufacturer he will replace it. In the meantime I am getting the set back, and the techo said he would call me when the part arrives and will pick up the set again and replace the part. So we're on track now - AT LEAST!
Status
Not open for further replies.
|
# Problem #2158
An $8$-foot by $10$-foot floor is tiled with square tiles of size $1$ foot by $1$ foot. Each tile has a pattern consisting of four white quarter circles of radius $1/2$ foot centered at each corner of the tile. The remaining portion of the tile is shaded. How many square feet of the floor are shaded?
$[asy]unitsize(2cm); defaultpen(linewidth(.8pt)); fill(unitsquare,gray); filldraw(Arc((0,0),.5,0,90)--(0,0)--cycle,white,black); filldraw(Arc((1,0),.5,90,180)--(1,0)--cycle,white,black); filldraw(Arc((1,1),.5,180,270)--(1,1)--cycle,white,black); filldraw(Arc((0,1),.5,270,360)--(0,1)--cycle,white,black);[/asy]$
$\textrm{(A)}\ 80-20\pi \qquad \textrm{(B)}\ 60-10\pi \qquad \textrm{(C)}\ 80-10\pi \qquad \textrm{(D)}\ 60+10\pi \qquad \textrm{(E)}\ 80+10\pi$
This problem is copyrighted by the American Mathematics Competitions.
|
# Levy process:2:$E\left[\exp\left(\theta^*X_\tau -\Psi(\theta^*)\tau\right)1_{\{\tau < \infty\}}\right] = e^{\theta^*x}P\left( \tau < \infty \right)$
This is a sequel to this exercise, whose statement is repeated below.
My question follows.
Exercise
Let $X_t$ be the compound Poisson process $$X_t = t - \sum_{i=1}^{N_t} \xi_i, \tag{1}$$ where $N$ is a Poisson process with rate $\lambda$ and the $\xi_i$ are i.i.d., positive, with common distribution $F$.
The characteristic exponent of $X_t$ is $$\Psi(\theta) = \theta - \lambda \int_{(0, \infty)} \left( 1 - e^{-\theta x}\right)F(dx). \tag{2}$$ Assume $\Psi$ has a root $\theta^* \ne 0$. Define the stopping time $$\tau = \inf\left\{ t > 0 : X_t > x \right\}, \quad x > 0. \tag{3}$$
Show $$E\left[\exp\left(\theta^*X_\tau -\Psi(\theta^*)\tau\right)1_{\{\tau < \infty\}}\right] = e^{\theta^*x}P\left( \tau < \infty \right). \tag{4}$$
This is exercise 1.9 in Kyprianou, Fluctuations of Lévy Processes with Applications.
The solution is given in the book.
Question
One of the steps to get (4) is to use the fact that $$M_t = e^{\theta^*X_{t \land \tau} -\Psi(\theta^*)(t \land \tau)}, \tag{A}$$ is a martingale, to get $$E\left( M_t \right) = M_0 = 1. \tag{B}$$ As $\Psi(\theta^*)=0$, taking the limit as $t$ goes to $\infty$ in (B) gives $$E\left[ e^{\theta^*X_\tau}\right]= 1. \tag{C}$$ Now, infer that $$E\left[e^{\theta^*X_\tau }1_{\{\tau < \infty\}}\right] = 1 . \tag{D}$$
I don't see how to go from (C) to (D).
I know $$E\left[ e^{\theta^*X_\tau}\right]=E\left[e^{\theta^*X_\tau }1_{\{\tau < \infty\}}\right] + E\left[e^{\theta^*X_\tau }1_{\{\tau = \infty\}}\right], \tag{E}$$ but why would the second term be zero?
|
# Matthew van Eerde's web log
• #### Enumerating mixer devices, mixer lines, and mixer controls
The WinMM multimedia APIs include an API for enumerating and controlling all the paths through the audio device; things like bass boost, treble control, pass-through audio from your CD player to your headphones, etc. This is called the "mixer" API and is the forerunner of the IDeviceTopology API.
I wrote a quick app to enumerate all the mixer devices on the system; for each mixer device, enumerate each mixer line (that is, each source and destination); for each mixer line, enumerate all the controls (volume, mute, equalization, etc.); and for each control, query the associated text (if any) and the current value.
Source and binaries attached.
Pseudocode:
mixerGetNumDevs()
for each mixer device
mixerGetDevCaps(dev)
for each destination (line) on the device
mixerGetLineInfo(dest)
mixerGetLineControls(dest)
for each control on the line
if the control supports per-item description
mixerGetControlDetails(control, MIXER_GETCONTROLDETAILSF_LISTTEXT)
log the per-item description
mixerGetControlDetails(control, MIXER_GETCONTROLDETAILSF_VALUE)
log the value(s)
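If you just want the skeleton of the enumeration without downloading the attachment, a minimal sketch might look like the following. This is my own cut-down illustration, not the attached mixerenum.exe; it stops at the destination lines and skips the controls.

#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

#pragma comment(lib, "winmm.lib")

int wmain() {
    UINT devices = mixerGetNumDevs();
    wprintf(L"Mixer devices: %u\n", devices);

    for (UINT dev = 0; dev < devices; dev++) {
        // capabilities of this mixer device
        MIXERCAPSW caps = {};
        if (MMSYSERR_NOERROR != mixerGetDevCapsW(dev, &caps, sizeof(caps))) { continue; }
        wprintf(L"-- %u: %s (%u destinations) --\n", dev, caps.szPname, caps.cDestinations);

        // one MIXERLINE per destination (speakers, wave in, etc.)
        for (DWORD dest = 0; dest < caps.cDestinations; dest++) {
            MIXERLINEW line = {};
            line.cbStruct = sizeof(line);
            line.dwDestination = dest;
            if (MMSYSERR_NOERROR != mixerGetLineInfoW(
                (HMIXEROBJ)(UINT_PTR)dev, &line,
                MIXER_OBJECTF_MIXER | MIXER_GETLINEINFOF_DESTINATION)) { continue; }
            wprintf(L"   Destination %u: %s (%u controls)\n",
                dest, line.szName, line.cControls);
        }
    }
    return 0;
}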
Usage:
>mixerenum.exe
Mixer devices: 5
Device ID: 0
Manufacturer identifier: 1
Product identifier: 104
Driver version: 6.2
Support: 0x0
Destinations: 1
-- Destination 0: Master Volume --
Destination: 0
Source: -1
Line ID: 0xffff0000
Status: MIXERLINE_LINEF_ACTIVE (1)
User: 0x00000000
Channels: 1
Connections: 2
Controls: 2
Short name: Volume
Long name: Master Volume
-- Target: --
Type: MIXERLINE_TARGETTYPE_UNDEFINED (0)
Device ID: 0
Manufacturer identifier: 65535
Product identifier: 65535
Driver version: 0.0
Product name:
-- Control 1: Mute --
Type: MIXERCONTROL_CONTROLTYPE_MUTE (0x20010002)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Mute
Long name: Mute
-- Values --
FALSE
-- Control 2: Volume --
Type: MIXERCONTROL_CONTROLTYPE_VOLUME (0x50030001)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Volume
Long name: Volume
-- Values --
0xffff on a scale of 0x0 to 0xffff
-- 1: HDMI Audio (Contoso --
Device ID: 1
Manufacturer identifier: 1
Product identifier: 104
Driver version: 6.2
Product name: HDMI Audio (Contoso
Support: 0x0
Destinations: 1
-- Destination 0: Master Volume --
Destination: 0
Source: -1
Line ID: 0xffff0000
Status: MIXERLINE_LINEF_ACTIVE (1)
User: 0x00000000
Component Type: MIXERLINE_COMPONENTTYPE_DST_DIGITAL (1)
Channels: 1
Connections: 2
Controls: 2
Short name: Volume
Long name: Master Volume
-- Target: --
Type: MIXERLINE_TARGETTYPE_UNDEFINED (0)
Device ID: 0
Manufacturer identifier: 65535
Product identifier: 65535
Driver version: 0.0
Product name:
-- Control 1: Mute --
Type: MIXERCONTROL_CONTROLTYPE_MUTE (0x20010002)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Mute
Long name: Mute
-- Values --
FALSE
-- Control 2: Volume --
Type: MIXERCONTROL_CONTROLTYPE_VOLUME (0x50030001)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Volume
Long name: Volume
-- Values --
0xffff on a scale of 0x0 to 0xffff
-- 2: Speakers (Contoso --
Device ID: 2
Manufacturer identifier: 1
Product identifier: 104
Driver version: 6.2
Product name: Speakers (Contoso
Support: 0x0
Destinations: 1
-- Destination 0: Master Volume --
Destination: 0
Source: -1
Line ID: 0xffff0000
Status: MIXERLINE_LINEF_ACTIVE (1)
User: 0x00000000
Component Type: MIXERLINE_COMPONENTTYPE_DST_SPEAKERS (4)
Channels: 1
Connections: 2
Controls: 2
Short name: Volume
Long name: Master Volume
-- Target: --
Type: MIXERLINE_TARGETTYPE_UNDEFINED (0)
Device ID: 0
Manufacturer identifier: 65535
Product identifier: 65535
Driver version: 0.0
Product name:
-- Control 1: Mute --
Type: MIXERCONTROL_CONTROLTYPE_MUTE (0x20010002)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Mute
Long name: Mute
-- Values --
FALSE
-- Control 2: Volume --
Type: MIXERCONTROL_CONTROLTYPE_VOLUME (0x50030001)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Volume
Long name: Volume
-- Values --
0xffff on a scale of 0x0 to 0xffff
Device ID: 3
Manufacturer identifier: 1
Product identifier: 104
Driver version: 6.2
Support: 0x0
Destinations: 1
-- Destination 0: Master Volume --
Destination: 0
Source: -1
Line ID: 0xffff0000
Status: MIXERLINE_LINEF_ACTIVE (1)
User: 0x00000000
Component Type: MIXERLINE_COMPONENTTYPE_DST_WAVEIN (7)
Channels: 1
Connections: 1
Controls: 2
Short name: Volume
Long name: Master Volume
Type: MIXERLINE_TARGETTYPE_WAVEIN (2)
Device ID: 0
Manufacturer identifier: 1
Product identifier: 101
Driver version: 6.2
-- Control 1: Mute --
Type: MIXERCONTROL_CONTROLTYPE_MUTE (0x20010002)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Mute
Long name: Mute
-- Values --
FALSE
-- Control 2: Volume --
Type: MIXERCONTROL_CONTROLTYPE_VOLUME (0x50030001)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Volume
Long name: Volume
-- Values --
0xf332 on a scale of 0x0 to 0xffff
-- 4: Microphone (Contoso --
Device ID: 4
Manufacturer identifier: 1
Product identifier: 104
Driver version: 6.2
Product name: Microphone (Contoso
Support: 0x0
Destinations: 1
-- Destination 0: Master Volume --
Destination: 0
Source: -1
Line ID: 0xffff0000
Status: MIXERLINE_LINEF_ACTIVE (1)
User: 0x00000000
Component Type: MIXERLINE_COMPONENTTYPE_DST_WAVEIN (7)
Channels: 1
Connections: 1
Controls: 2
Short name: Volume
Long name: Master Volume
-- Target: Microphone (Contoso --
Type: MIXERLINE_TARGETTYPE_WAVEIN (2)
Device ID: 1
Manufacturer identifier: 1
Product identifier: 101
Driver version: 6.2
Product name: Microphone (Contoso
-- Control 1: Mute --
Type: MIXERCONTROL_CONTROLTYPE_MUTE (0x20010002)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Mute
Long name: Mute
-- Values --
FALSE
-- Control 2: Volume --
Type: MIXERCONTROL_CONTROLTYPE_VOLUME (0x50030001)
Status: MIXERCONTROL_CONTROLF_UNIFORM (0x1)
Item count: 0
Short name: Volume
Long name: Volume
-- Values --
0xf332 on a scale of 0x0 to 0xffff
• #### Sample app for RECT functions
Riffing on Raymond Chen's post today about SubtractRect I threw together a toy app which demonstrates three rectangle functions: UnionRect, IntersectRect, and SubtractRect.
Usage:
>rects.exe
rects.exe
union (left1 top1 right1 bottom1) (left2 top2 right2 bottom2) |
intersect (left1 top1 right1 bottom1) (left2 top2 right2 bottom2) |
subtract (left1 top1 right1 bottom1) (left2 top2 right2 bottom2)
Sample output:
>rects.exe union (2 2 6 6) (4 4 8 8)
(left = 2; top = 2; right = 6; bottom = 6)
union (left = 4; top = 4; right = 8; bottom = 8)
= (left = 2; top = 2; right = 8; bottom = 8)
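For reference, a minimal sketch of the three calls the app exercises might look like this (my own illustration using the same inputs as the sample output above, not the attached source):

#include <windows.h>
#include <stdio.h>

#pragma comment(lib, "user32.lib")

void PrintRect(LPCWSTR label, const RECT &r) {
    wprintf(L"%s: (left = %ld; top = %ld; right = %ld; bottom = %ld)\n",
        label, r.left, r.top, r.right, r.bottom);
}

int wmain() {
    RECT a = { 2, 2, 6, 6 };
    RECT b = { 4, 4, 8, 8 };
    RECT u = {}, i = {}, s = {};

    UnionRect(&u, &a, &b);      // smallest rectangle containing both
    IntersectRect(&i, &a, &b);  // overlapping region (empty if they don't overlap)
    SubtractRect(&s, &a, &b);   // a minus b, but only when the result is still a rectangle

    PrintRect(L"union", u);
    PrintRect(L"intersection", i);
    PrintRect(L"subtraction", s);
    return 0;
}

For these particular inputs SubtractRect leaves a unchanged, since removing b from a would not leave a single rectangle.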
Source and binaries (amd64 and x86) attached.
Still no pictures though.
Exercise: implement BOOL SymmetricDifferenceRect(_Out_ RECT *C, const RECT *A, const RECT *B).
• #### An attempt to explain the twin prime conjecture to a five-year-old
Back in April, Zhang Yitang came up with a result that is a major step toward proving the twin prime conjecture that there are infinitely many primes p for which p + 2 is also prime.
In a reddit.com/r/math thread on the subject, I made the following comment as an attempt to explain the twin prime conjecture to a five-year-old:
ELI5 attempt at the twin prime conjecture
Suppose you have 100 cookies, and you want to throw a party where you share the cookies so that everyone at the party (including you) gets the same number of cookies and none are left over. You could have the party:
• by yourself (you get all the cookies)
• with two people (each person gets 50 cookies)
• with four people (each person gets 25 cookies)
• with five people (each person gets 20 cookies)
• with ten people (each person gets ten cookies)
• with 20 people (each person gets five cookies)
• with 25 people (each person gets four cookies)
• with 50 people (each person gets two cookies)
• with 100 people (each person gets one cookie)
If you're the only person at your party, it's a sad party.
If everyone at the party gets only one cookie, it's a sad party.
If someone gets more than someone else, it's a sad party.
You don't want your party to be sad, so you have to be careful to have the right number of people to share your cookies.
If you have two cookies, or three, or five, or seven, or eleven, then it's not possible to have a happy party. There's no "right number of people."
People used to wonder whether you could be sure to have a happy party if you just had enough cookies. A famous person named Euclid figured out that, no matter how many cookies you had, even if it was, like, more than a million, you might be unlucky and have a sad number of cookies.
If it's a birthday party, the birthday kid's mom might give the birthday kid an extra cookie. (Or they might get something else instead.) That would be OK.
If it's a birthday party, then, yes, you can be sure to have a happy party if you just had enough cookies. In fact, even three cookies would be enough; you could have the birthday kid, and one friend; they would each have one cookie, and the birthday kid would get the extra one.
But Sam and Jane have a problem. They're twins, and they always have the same birthday. One year they had 13 cookies, and it was a big problem. 13 is a sad number. Even if they both had an extra cookie, that would leave 11, and that is still a sad number.
(If you allow the birthday kid to have two extra cookies, that would leave nine; they could invite one more person, give everyone three cookies, and then Sam and Jane could each have two extras. But this is not a happy party because the guests will get upset that the birthday kids got two extra cookies. I mean, come on!)
Sam and Jane wondered whether they could be sure to have a happy party if they just had enough cookies.
So they asked their mom, who is, like, super smart.
But even she didn't know.
In fact, no-one knows. They don't think so. But they're not, like, super-sure.
• #### Buffer size alignment and the audio period
I got an email from someone today, paraphrased below:
Q: When I set the sampling frequency to 48 kHz, and ask Windows what the audio period is, I get exactly 10 milliseconds. When I set it to 44.1 kHz, I get very slightly over 10 milliseconds: 10.1587 milliseconds, to be precise. Why?
A: Alignment.
A while back I talked a bit about the WASAPI exclusive-mode alignment dance. Some audio drivers have a requirement that they deal in buffer sizes which are multiples of a certain byte size - for example, a common alignment restriction for HD Audio hardware is 128 bytes.
A more general audio requirement is that buffer sizes be a multiple of the size of a PCM audio frame.
For example, suppose the audio format of a stream is stereo 16-bit integer. A single PCM audio frame will be 2 * 2 = 4 bytes. The first two bytes will be the 16-bit signed integer with the sample value for the left channel; the last two bytes will be the right channel.
As another example, suppose the audio format of a stream is 5.1 32-bit floating point. A single PCM audio frame will be 6 * 4 = 24 bytes. Each of the six channels are a four-byte IEEE floating-point value; the channel order in Windows will be {Left, Right, Center, Low-Frequency Effects, Side Left, Side Right}.
The audio engine tries to run at as close to a 10 millisecond cadence as possible, subject to the two restrictions above. Given a "desired minimum interval" (in milliseconds), and a streaming format, and an "alignment requirement" in bytes, you can calculate the closest achievable interval (without going under the desired interval) as follows:
Note: this only works for uncompressed formats
aligned_buffer(desired_milliseconds, format, alignment_bytes)
desired_frames = nearest_integer(desired_milliseconds / 1000.0 * format.nSamplesPerSec)
alignment_frames = least_common_multiple(alignment_bytes, format.nBlockAlign) / format.nBlockAlign
actual_frames = ceiling(desired_frames / alignment_frames) * alignment_frames
actual_milliseconds = actual_frames / format.nSamplesPerSec * 1000.0
Here's a table of the actual buffer size (in frames and milliseconds), given a few typical inputs:
| Desired (milliseconds) | Format | Alignment (bytes) | Desired frames | Alignment (frames) | Actual (frames) | Actual (milliseconds) |
|---|---|---|---|---|---|---|
| 10 | 44.1 kHz stereo 16-bit integer | 128 | 441 | 32 | 448 | 10.16 |
| 10 | 48 kHz stereo 16-bit integer | 128 | 480 | 32 | 480 | 10 |
| 10 | 44.1 kHz 5.1 16-bit integer | 128 | 441 | 32 | 448 | 10.16 |
| 10 | 48 kHz 5.1 16-bit integer | 128 | 480 | 32 | 480 | 10 |
| 10 | 44.1 kHz 5.1 24-bit integer | 128 | 441 | 64 | 448 | 10.16 |
| 10 | 48 kHz 5.1 24-bit integer | 128 | 480 | 64 | 512 | 10.66 |
So to be really precise, the buffer size is actually 640/63 = 10.158730 milliseconds.
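If you want to play with the numbers yourself, here is a small C++ rendering of the pseudocode above. This is my own sketch; the function names are mine, not Windows APIs, and it only handles uncompressed formats.

#include <stdio.h>

// helper for the least-common-multiple step
unsigned GreatestCommonDivisor(unsigned a, unsigned b) {
    while (0 != b) { unsigned t = a % b; a = b; b = t; }
    return a;
}

unsigned AlignedBufferFrames(
    double desiredMilliseconds, // e.g. 10
    unsigned samplesPerSec,     // e.g. 44100
    unsigned blockAlign,        // bytes per PCM frame, e.g. 4 for 16-bit stereo
    unsigned alignmentBytes     // driver restriction, e.g. 128
) {
    unsigned desiredFrames =
        (unsigned)(desiredMilliseconds / 1000.0 * samplesPerSec + 0.5);
    // least_common_multiple(alignmentBytes, blockAlign) / blockAlign
    unsigned alignmentFrames =
        alignmentBytes / GreatestCommonDivisor(alignmentBytes, blockAlign);
    // round desiredFrames up to the next multiple of alignmentFrames
    return (desiredFrames + alignmentFrames - 1) / alignmentFrames * alignmentFrames;
}

int main() {
    unsigned frames = AlignedBufferFrames(10.0, 44100, 4, 128);
    printf("%u frames = %g milliseconds\n", frames, frames * 1000.0 / 44100);
    // prints "448 frames = 10.1587 milliseconds"
    return 0;
}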
• #### More on IAudioSessionControl and IAudioSessionControl2, plus: how to log a GUID
I decided to go back and push this a little further and see what information there was to dig out. Pseudocode:
CoCreate(IMMDeviceEnumerator)
MMDevice = IMMDeviceEnumerator::GetDefaultAudioEndpoint(...)
AudioSessionManager2 = MMDevice::Activate(...)
AudioSessionEnumerator = AudioSessionManager2::GetSessionEnumerator()
for each session in AudioSessionEnumerator {
AudioSessionControl = AudioSessionEnumerator::GetSession(...)
if (AudioSessionStateActive != AudioSessionControl::GetState()) { continue; }
AudioSessionControl::GetIconPath (usually blank)
AudioSessionControl::GetDisplayName (usually blank)
AudioSessionControl::GetGroupingParam
AudioSessionControl2 = AudioSessionControl::QueryInterface(...)
AudioSessionControl2::GetSessionIdentifier (treat this as an opaque string)
AudioSessionControl2::GetSessionInstanceIdentifier (treat this as an opaque string)
AudioSessionControl2::GetProcessId (some sessions span multiple processes)
AudioSessionControl2::IsSystemSoundsSession
AudioMeterInformation = AudioSessionControl::QueryInterface(...)
AudioMeterInformation::GetPeakValue
for each top level window in the process pointed to by AudioSessionControl2::GetProcessId {
Use WM_GETTEXTLENGTH and WM_GETTEXT to get the window text, if any
}
}
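For completeness, here is a heavily stripped-down C++ sketch of the enumeration. It is not the attached meters.exe; error handling, CoTaskMemFree, the meter reading, and the grouping/identifier queries are all omitted for brevity.

#include <windows.h>
#include <mmdeviceapi.h>
#include <audiopolicy.h>
#include <stdio.h>

#pragma comment(lib, "ole32.lib")

int wmain() {
    CoInitialize(nullptr);

    IMMDeviceEnumerator *enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&enumerator);

    IMMDevice *device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioSessionManager2 *manager = nullptr;
    device->Activate(__uuidof(IAudioSessionManager2), CLSCTX_ALL, nullptr,
                     (void **)&manager);

    IAudioSessionEnumerator *sessions = nullptr;
    manager->GetSessionEnumerator(&sessions);

    int count = 0;
    sessions->GetCount(&count);
    for (int i = 0; i < count; i++) {
        IAudioSessionControl *control = nullptr;
        sessions->GetSession(i, &control);

        AudioSessionState state = AudioSessionStateInactive;
        control->GetState(&state);
        if (AudioSessionStateActive != state) { control->Release(); continue; }

        IAudioSessionControl2 *control2 = nullptr;
        control->QueryInterface(__uuidof(IAudioSessionControl2), (void **)&control2);

        DWORD pid = 0;
        control2->GetProcessId(&pid);
        wprintf(L"Active session in process %u%s\n", pid,
            (S_OK == control2->IsSystemSoundsSession()) ? L" (system sounds)" : L"");

        control2->Release();
        control->Release();
    }

    CoUninitialize();
    return 0;
}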
Here's the output of the new version of meters.exe.
>meters.exe
-- Active session #1 --
Icon path:
Display name:
Process ID: 11812 (single-process)
System sounds session: no
Peak value: 0.2837
-- Active session #2 --
Icon path:
Display name:
Grouping parameter: {a2e2e0f5-81bb-407e-b701-f4f3695f9dac}
Process ID: 15148 (single-process)
Session identifier: {0.0.0.00000000}.{125eeed2-3cd2-48cf-aac9-8ae0157564ad}|\Device\HarddiskVolume1\Program Files (x86)\Internet Explorer\iexplore.exe%b{00000000-0000-0000-0000-000000000000}
Session instance identifier: {0.0.0.00000000}.{125eeed2-3cd2-48cf-aac9-8ae0157564ad}|\Device\HarddiskVolume1\Program Files (x86)\Internet Explorer\iexplore.exe%b{00000000-0000-0000-0000-000000000000}|1%b15148
System sounds session: no
Peak value: 0.428589
HWND: 0x0000000001330B12
HWND: 0x0000000000361CA2
HWND: 0x00000000019A07A8
HWND: 0x0000000001411BF2
HWND: 0x0000000000B60706
HWND: 0x000000000231165A
HWND: 0x0000000002631472
HWND: 0x0000000000441D94
-- Active session #3 --
Icon path:
Display name:
Grouping parameter: {e191c91d-dc24-468d-b542-0d5f12ce8c48}
Process ID: 2324 (multi-process)
System sounds session: no
Peak value: 0.294137
HWND: 0x0000000002900C86 Windows Media Player
-- Active session #4 --
Icon path: @%SystemRoot%\System32\AudioSrv.Dll,-203
Display name: @%SystemRoot%\System32\AudioSrv.Dll,-202
Grouping parameter: {e7d6e107-ca03-4660-a067-1a1f3dc1619c}
Process ID: 0 (multi-process)
System sounds session: yes
Peak value: 0.0502903
-- Active session #5 --
Icon path:
Display name:
Grouping parameter: {2a3e30fb-2ded-471e-9c2f-cbd8572b2af2}
Process ID: 15948 (single-process)
Session instance identifier: {0.0.0.00000000}.{125eeed2-3cd2-48cf-aac9-8ae0157564ad}|\Device\HarddiskVolume1\Program Files (x86)\VideoLAN\VLC\vlc.exe%b{00000000-0000-0000-0000-000000000000}|1%b15948
System sounds session: no
Peak value: 0.287567
HWND: 0x0000000000C8160C Opening Ceremony - VLC media player
Active sessions: 5
Part of this was logging the grouping parameter, which is a GUID. I've seen a lot of code that converts the GUID to a string and logs it using %s. Another way is to use a couple of macros:
#define GUID_FORMAT L"{%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x}"
#define GUID_VALUES(g) \
g.Data1, g.Data2, g.Data3, \
g.Data4[0], g.Data4[1], g.Data4[2], g.Data4[3], \
g.Data4[4], g.Data4[5], g.Data4[6], g.Data4[7]
...
GUID someGuid = ...;
LOG(L"The value of someGuid is " GUID_FORMAT L".", GUID_VALUES(someGuid));
Standard caveats about not using side effects inside a macro apply. For example, this would be a bug:
for (GUID *p = ...) {
LOG(L"p = " GUID_FORMAT L".", GUID_VALUES(*(p++)); // BUG!
}
Source, amd64 binaries, and x86 binaries attached.
• #### Getting the package full name of a Windows Store app, given the process ID
Last time I talked about enumerating audio sessions and showed an example which listed several Desktop apps and one Windows Store app.
It's possible to guess that this is a Windows Store app by the presence of the WWAHost.exe string in the session instance identifier. Don't rely on this, though; the session identifiers are opaque strings, and their formula can change at any time.
We were able to get some additional information on the Desktop apps by enumerating their top-level windows and reading the window text. But how do we get more information on the Windows Store app? And how do we even know it's a Windows Store app without cracking the session identifier?
By using the Application Model APIs - for example, GetPackageFullName.
Pseudocode:
... get a process ID...
OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
GetPackageFullName(...)
if APPMODEL_ERROR_NO_PACKAGE then the process has no associated package and is therefore not a Windows Store app.
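Here is a minimal C++ sketch of that pseudocode (my own, not the attached sample); GetPackageFullName lives in appmodel.h and requires Windows 8 or later.

#include <windows.h>
#include <appmodel.h>
#include <stdlib.h>
#include <stdio.h>

int wmain(int argc, wchar_t *argv[]) {
    if (2 != argc) {
        wprintf(L"expected a process ID\n");
        return -__LINE__;
    }
    DWORD pid = _wtoi(argv[1]);

    HANDLE process = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (nullptr == process) {
        wprintf(L"OpenProcess failed: GetLastError() = %u\n", GetLastError());
        return -__LINE__;
    }

    WCHAR name[512] = {};
    UINT32 length = ARRAYSIZE(name);
    LONG rc = GetPackageFullName(process, &length, name);

    if (APPMODEL_ERROR_NO_PACKAGE == rc) {
        wprintf(L"Process %u has no package; it is not a Windows Store app\n", pid);
    } else if (ERROR_SUCCESS == rc) {
        wprintf(L"Package full name: %s\n", name);
    } else {
        wprintf(L"GetPackageFullName failed: %ld\n", rc);
    }

    CloseHandle(process);
    return 0;
}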
Updated sample output:
-- Active session #4 --
Icon path:
Display name:
Grouping parameter: {8dbd87b0-9fce-4c27-b7ff-4b20b0dae1a3}
Process ID: 11644 (single-process)
System sounds session: no
Peak value: 0.395276
Package full name: Microsoft.ZuneMusic_2.0.132.0_x64__8wekyb3d8bbwe
Source and binaries attached.
• #### Even if someone's signaling right, they still have the right of way
I was driving to work this morning and I had an experience which vindicated my paranoia, and may even have passed it on to someone else.
I was heading East on NE 80th St approaching 140th Ave NE in Redmond. This is a two-way stop; drivers on 140th have the right of way and do not stop. Drivers on 80th (me) have to stop.
I came to a full stop and signaled right (I wanted to head South on 140th). A driver (let's call him Sam) pulled up behind me, also signaling right. There were three cars heading South on 140th, all of them signaling right (they wanted to head West on 80th).
At this point I had a conversation with myself that went something like this.
Well, Matt, you could turn right now. All those cars are turning right, so they won't hit you.
But wait, Matt. Those cars have the right of way. Sure they're signaling right. But that doesn't mean they'll actually turn right.
Yup, you're right, Matt. Better to wait to see what actually happens.
So I waited, and sure enough, all three cars actually turned right. So I suppose I could have gone. And more cars were feeding in to 140th from Redmond Way. And all of these cars were signaling right. And one was a school bus.
At this point Sam (remember Sam?) got impatient and honked his horn. This shocked me a little.
I imagine anyone who is from New York or Los Angeles is shaking their heads at me now. Not for waiting, but for being shocked. "He honked his horn? So what?"
(This is a cultural difference. In New York or Los Angeles, if you're waiting at a red light, you will get honked at as soon as the other guy's light turns yellow. But in Washington, the guy behind you will calmly wait through two full greens, then politely knock on your window and ask if everything is OK.)
I trust the school bus even less than the cars, so I let the school bus go.
The car behind the school bus is a minivan. He's signaling right, too. But I let him go as well... and he goes straight!
Behind the minivan, there's enough of a gap that I feel comfortable pulling out, so I do. And Sam pulls up to the line.
As I'm cruising down 140th, I glance in my rear-view mirror. I see a line of cars coming down 140th to the intersection I just left, all signaling right...
... and I see my friend Sam...
... patiently waiting.
May the Force be with you, Sam.
• #### Car Talk puzzler analysis - palindromic odometers
Last week's Car Talk question was again of mathematical interest - it dealt with palindromic numbers on a car's odometer. (This is not the first palindromic odometer question they've done.)
Read Car Talk's "Tommy's Drive to Work" puzzler question
Short version:
Tommy goes for a short drive (under an hour; possibly well under an hour) in his 19-year old new car. He notices that the odometer reading at the beginning and the end of the drive are both palindromes (we're given that his odometer shows six digits.) The question is, how many miles did he drive?
You're supposed to do a bunch of math, and then experience an epiphany when you realize that, against all odds, there's a single unique answer, even though the odometer readings themselves cannot be uniquely determined.
Alas, the problem is flawed. There are two perfectly good (and different) answers.
The first reading is given as a palindrome - the first three digits (let's call them a, b, c) are the same as the last three digits in reverse (c, b, a). So we can write the first number as abccba.
The second reading is also given as a palindrome, and we're given that it's a different palindrome (we infer that the odometer is not broken, and he drove at least a mile.) So we can write the second number as xyzzyx.
We want to find whether it's possible to get these two readings while driving a reasonably small distance. The distance driven is the difference between the odometer readings: xyzzyx − abccba.
xyzzyx − abccba
= (100001 x + 10010 y + 1100 z) − (100001 a + 10010 b + 1100 c)
= 100001 (x − a) + 10010 (y − b) + 1100 (z − c)
Because xyzzyx > abccba, we know that x ≥ a. Also, if x = a, we know that y ≥ b. Finally, if x = a and y = b, we know that z > c (z cannot equal c in this case.) Let's look at these three cases separately.
Case 1: x = a, y = b, z > c
xyzzyx − abccba
= 100001 (x − a) + 10010 (y − b) + 1100 (z − c)
= 100001 (0) + 10010 (0) + 1100 (z − c)
= 1100 (z − c)
z and c can range from 0 to 9, and z > c, so (z − c) can range from 1 to 9. So the smallest the distance can be is 1100 miles - for example, 250052 → 251152 (1100 miles.) This is a bit far for an hour's drive.
Case 2: x = a, y > b, no conditions imposed on z vs. c
xyzzyx − abccba
= 100001 (x − a) + 10010 (y − b) + 1100 (z − c)
= 100001 (0) + 10010 (y − b) + 1100 (z − c)
= 10010 (y − b) + 1100 (z − c)
y, b, z, and c can range from 0 to 9; y > b; and no conditions are imposed on z vs. c. (y − b) can range from 1 to 9, (z − c) can range from −9 to 9. So the smallest the distance can be is 10010 (1) + 1100 (−9) = 110 miles - for example, 379973 → 380083 (110 miles.) It's certainly possible to drive 110 miles in an hour but not without attracting the attention of the local police!
Case 3: x > a, no conditions imposed on y vs. b or z vs. c
xyzzyx − abccba
= 100001 (x − a) + 10010 (y − b) + 1100 (z − c)
x, a, y, b, z, and c can range from 0 to 9; x > a; and no conditions are imposed on y vs. b, or z vs. c. (x − a) can range from 1 to 9, (y − b) and (z − c) can range from −9 to 9. So the smallest the distance can be is 100001 (1) + 10010 (−9) + 1100 (−9) = 11 miles - for example, 499994 → 500005 (11 miles.) This is much more reasonable.
The intended answer, then, is 11 miles. There are 9 possible ways to get this distance with palindromic starting and ending readings:
099990 → 100001 (11 miles)
199991 → 200002 (11 miles)
299992 → 300003 (11 miles)
399993 → 400004 (11 miles)
499994 → 500005 (11 miles)
599995 → 600006 (11 miles)
699996 → 700007 (11 miles)
799997 → 800008 (11 miles)
899998 → 900009 (11 miles)
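If you don't trust the case analysis, a brute-force check is easy. This little C++ program (mine, not Car Talk's) generates every six-digit palindrome, sorts them, and prints each new smallest gap it finds:

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Generate all six-digit palindromes abccba (leading zeros allowed)
    std::vector<int> palindromes;
    for (int a = 0; a <= 9; a++)
        for (int b = 0; b <= 9; b++)
            for (int c = 0; c <= 9; c++)
                palindromes.push_back(100001 * a + 10010 * b + 1100 * c);
    std::sort(palindromes.begin(), palindromes.end());

    // Report each consecutive gap that is smaller than any gap seen so far
    int best = 1000000;
    for (size_t i = 1; i < palindromes.size(); i++) {
        int gap = palindromes[i] - palindromes[i - 1];
        if (gap < best) {
            best = gap;
            printf("%06d -> %06d (%d miles)\n",
                palindromes[i - 1], palindromes[i], gap);
        }
    }
    return 0;
}

It prints exactly three lines, with gaps of 1100, 110, and 11 miles, matching the three cases above; the last line is 099990 -> 100001 (11 miles).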
## Not a flaw
Do we consider numbers with unmatched leading zeros (like 001221) to be palindromes? I would say no. If we did this, it would open up a whole new field of answers such as 77 → 101 (24 miles.) The problem statement seems to imply that all six digits of the number, including leading zeros, need to participate in the palindrome. We can probably infer that Tommy's odometer is an analog odometer that actually shows leading zeros, rather than a digital odometer which hides them; this is consistent with the car being 19 years old.
## The flaw
All well and good. Alas, there's a flaw in this beautiful analysis - namely, odometers wrap. It's entirely possible for the assumption that xyzzyx > abccba to break down if Tommy started his drive with a number close to 999999, and his odometer wrapped around to 000000 during the drive.
999999 → 000000 (1 mile.)
There are other "wrapped" palindromes, but the next smallest are 998899 → 000000 (1101 miles) and 999999 → 001100 (1101 miles) which are even worse than case 1.
Could a 19-year-old car have a million miles on it?
That comes out to an average of 144.1 miles a day, which is high, but achievable (it's only 6 mph.) It's true that Tommy is unlikely to have put this many miles on a car if he lives only a mile from work, but remember that this is his new car (that is, it only recently came into his possession.)
• #### Linearity of Windows volume APIs - render session and stream volumes
We have talked about some of the volume APIs Windows exposes. We have also talked about what it means for a volume control to be linear in magnitude, linear in power, or linear in dB. We have also talked about how to read IAudioMeterInformation and how the limiter can attenuate full-scale signals.
The last post had a volume-linearity.exe which, when called with --signal, showed that IAudioMeterInformation is linear in amplitude.
Today we'll look at the --stream, --channel, and --session arguments, which explore the linearity of IAudioStreamVolume, IChannelAudioVolume, and ISimpleAudioVolume respectively. Each of these modes plays a half-scale square wave, then set the volume API to various levels, and reads the resulting IAudioMeterInformation. We use a half-scale square wave to avoid running afoul of the limiter; we expect a meter reading of 0.5 when the volume is set to 1. The graphs below have their meter readings doubled to account for the fact that we're using a half-scale square wave rather than a full-scale.
Here's what we get for IAudioStreamVolume, graph-inated for your convenience:
And IChannelAudioVolume:
And ISimpleAudioVolume:
We already know that IAudioMeterInformation is linear in amplitude. We now know that IAudioStreamVolume, IChannelAudioVolume, and ISimpleAudioVolume have a linear effect (with slope 1 and intercept 0) on IAudioMeterInformation. We infer that IAudioStreamVolume, IChannelAudioVolume, and ISimpleAudioVolume are linear in amplitude.
• #### Nitpicking Sam Loyd - a wheel within a wheel
In August 1878 Sam Loyd published this mate in two and dedicated it to a friend of his named Wheeler:
Mate in two; Black to move and mate in two; Selfmate in two; Black to move and selfmate in two
While the mates appear to stand up, the problem position is not legal. White has three a-pawns; this implies at least three Black pieces were captured by a White pawn. But Black has fifteen pieces on the board; only one is missing!
Looking at Black pawn captures - the b2-, c-, and d- pawns together account for three pawn captures. This seems OK at first glance since White has three pieces missing. But all the missing White pieces are pawns, and they are from the right half of the board... so they must have promoted. This implies more pawn captures to either get the Black pawns out of the way or to get the White pawns around them. (The promoted pieces could have been captured by the Black pawns, or the original pieces could have been captured in which case the promoted pieces are on the board now.)
Finally, the h-pawns on h5 and h6 could not have got into their present position without at least one pawn capture by White, or at least two pawn captures by Black.
• #### Command-line app to set the desktop wallpaper
Working on Windows, I find myself installing Windows a lot.
I find that I like to change a lot of the settings that Windows offers to non-default values. (That is, I'm picky.)
I have a script which automates some of these things, which I add to now and again. Some of the bits of the script are straightforward, but once in a while the tweak itself is of interest.
One of the things I love about my work setup is the many large monitors. So, one of the things I like to change is the desktop wallpaper image.
Changing the desktop wallpaper required some code, which makes it "of interest." Here's the code.
// main.cpp
#include <windows.h>
#include <winuser.h>
#include <stdio.h>
int _cdecl wmain(int argc, LPCWSTR argv[]) {
if (1 != argc - 1) {
wprintf(L"expected a single argument, not %d\n", argc);
return -__LINE__;
}
if (!SystemParametersInfo(
SPI_SETDESKWALLPAPER,
0,
const_cast<LPWSTR>(argv[1]),
SPIF_SENDCHANGE
)) {
DWORD dwErr = GetLastError();
wprintf(L"SystemParametersInfo(...) failed with error %d\n", dwErr);
return -__LINE__;
}
wprintf(L"Setting the desktop wallpaper to %s succeeded.\n", argv[1]);
return 0;
}
Binaries attached.
Warning: if you pass a relative path to this tool, it won't qualify it for you, and the SystemParametersInfo call won't either - so the wallpaper you want won't be set, though all the calls will succeed. Make sure to specify a fully-qualified path.
• #### How to create a shortcut from the command line
Working on Windows, I install Windows a lot. This means a lot of my customizations have to be re-applied every time I install. To save myself some time I created a script which applies some of them. Last time I showed how to set the desktop wallpaper from a command-line app.
This time, a script to create a shortcut. The example usage creates a shortcut to Notepad and puts that in the "SendTo" folder. I find this very useful because I often need to edit text files that have non-".txt" assocations. (There are also other shortcuts I create with it.)
Here's the script:
>create-shortcut.vbs
If WScript.Arguments.Count < 2 Or WScript.Arguments.Count > 3 Then
WScript.Echo "Expected two or three arguments; got " & WScript.Arguments.Count
WScript.Echo "First argument is the file to create"
WScript.Echo "Second is the command to link to"
WScript.Echo "Third, if present, is the arguments to pass"
WScript.Quit
End If
Set shell = WScript.CreateObject("WScript.Shell")
Set shortcut = shell.CreateShortcut(WScript.Arguments(0))
shortcut.TargetPath = WScript.Arguments(1)
If WScript.Arguments.Count = 3 Then
    shortcut.Arguments = WScript.Arguments(2)
End If
shortcut.Save
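For example, a hypothetical invocation that creates the Notepad shortcut mentioned above (the SendTo path shown is the usual per-user location; adjust as needed):
>create-shortcut.vbs "%APPDATA%\Microsoft\Windows\SendTo\notepad.lnk" notepad.exe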
• #### Generating primes using the Sieve of Eratosthenes plus a few optimizations
When solving Project Euler problems I frequently need to iterate over prime numbers less than a given n. A Sieve of Eratosthenes method quickly and easily finds the small prime numbers; there are more complicated methods that find larger prime numbers, but with a couple of tweaks the Sieve of Eratosthenes can get quite high.
A naive implementation for finding the set of primes below n will:
1. Allocate an array of n booleans, initialized to false.
2. Allocate an empty list
3. For each i in the range 2 to n:
1. If the boolean value at this index in the array is true, i is composite. Skip to the next value and check that.
2. If the boolean value at this index in the array is false, i is prime!
3. Add i to the list of primes
4. For each multiple of i in the range 2i to n, set the boolean value at that index in the array to true
There are a handful of simple optimizations that can be made to this naive implementation:
1. Step 3d) will have no effect until the multiple of i reaches i², so the range can be changed to "i² to n"
2. As a direct consequence of this, step 3d) can be skipped entirely once i² passes n.
3. Instead of allocating an array of n booleans, an array of n bits will suffice.
4. All the even-indexed bits are set to true on the first pass. Manually recognize that 2 is prime, and only allocate bits for odd-numbered values. Change the outer loop in 3) to "in the range 3 to n", incrementing by two each time. Change the loop 3d) to increment by 2i each time.
5. Storing the list of primes takes a lot of memory - more than the sieve. Don't bother creating a list of primes, just write an enumerator that travels the sieve directly.
With these optimizations I can enumerate primes from 2 up to 5 billion (5 * 109) in about seven minutes. Source and binaries attached.
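For illustration, here is a minimal C++ sketch of the odd-only bit sieve described above (my own toy version, not the attached source); bit i of the sieve stands for the odd number 2i + 3.

#include <stdint.h>
#include <stdio.h>
#include <vector>

// Bit i represents the odd number 2i + 3; true means "known composite".
std::vector<bool> SieveOdds(uint64_t n) {
    std::vector<bool> composite(n >= 3 ? (n - 1) / 2 : 0, false);
    for (uint64_t i = 0; i < composite.size(); i++) {
        uint64_t p = 2 * i + 3;
        if (p * p > n) { break; }               // optimization 2
        if (composite[i]) { continue; }
        // start at p * p (optimization 1) and step by 2p to stay odd (optimization 4)
        for (uint64_t m = p * p; m <= n; m += 2 * p) {
            composite[(m - 3) / 2] = true;
        }
    }
    return composite;                            // optimization 5: no separate list of primes
}

int main() {
    const uint64_t n = 100;
    std::vector<bool> composite = SieveOdds(n);  // std::vector<bool> is a bit array (optimization 3)
    printf("2");                                 // 2 is handled manually (optimization 4)
    for (uint64_t i = 0; i < composite.size(); i++) {
        if (!composite[i]) { printf(" %llu", (unsigned long long)(2 * i + 3)); }
    }
    printf("\n");
    return 0;
}

For n = 100 it prints the 25 primes up to 100.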
>primes 5000000000
Will enumerate primes <= 5000000000 = 5e+009
Memory for sieve: 298.023 MB
Initialization complete: 983 milliseconds since start
Sieving to 70711
Sieving complete: 4.70292 minutes since start
Picking up the rest to 5000000000
Pickup complete: 6.12252 minutes since start
Primes: 234954223
1: 2
23495423: 442876981
46990845: 920233121
70486267: 1410555607
93981689: 1909272503
117477111: 2414236949
140972533: 2924158169
164467955: 3438252577
187963377: 3955819157
211458799: 4476550979
234954221: 4999999883
Enumerating complete: 7.43683 minutes since start
Freeing CPrimes object
There are more complicated sieves like the Sieve of Atkin which perform better but at the cost of being much more complex. So far I haven't had to resort to any of those.
• #### Programmatically grabbing a screenshot of the primary display
It's sometimes difficult to explain to people what my job actually is. "I test Windows sound." "Cool. How does that work?"
A product like Windows has a lot of components that interact with each other. If everything works, the user doesn't know that most of these components even exist; everything is invisible and seamless.
Most testing involves the connection ("interface") between two components. "I test APIs." To the uninitiated, this is just a word. It sounds like "I test wakalixes." "You test what, now?"
There are two interfaces which are easier to explain. There's the software-to-hardware interface, where the driver talks to the hardware. "I test the HD Audio, USB Audio, and Bluetooth audio class drivers." "Huh?" "They make the speakers and the microphone work." "Oh, cool. So you sit around and use Skype all day?"
But the easiest of all to explain is the user interface. "I make sure that the Sound Recorder app, the volume slider, and the Sound control panel work." "Oh, that! I had this annoying problem once where..."
What does the test result for an invisible interface look like? A lot of logging. "I expected this call to succeed; it returned this HRESULT." "I poked the hardware like this and got a bluescreen." "There seems to be an infinite loop here." Lots of text.
Boring.
UI testing has logging too. But with UI testing you can also... TAKE PICTURES! A UI bug is a lot easier to understand (triage, and fix) if there's a screenshot attached (preferably with a big red rectangle highlighting the problem.)
It is therefore valuable to have an automatable utility that can take a screenshot and dump it to a file. Here's one I cribbed together from the "Capturing an Image" sample code on http://msdn.microsoft.com/en-us/library/dd183402(v=VS.85).aspx. Source and binaries attached.
This version only captures the main display, and not secondary monitors (if any.)
Pseudocode:
screen_dc = GetDC(nullptr);
memory_dc = CreateCompatibleDC(screen);
rect = GetClientRect(GetDesktopWindow());
hbmp = CreateCompatibleBitmap(screen_dc, rect);
SelectObject(memory_dc, hbmp);
BitBlt(memory_dc, rect, screen_dc);
bmp = GetObject(hbmp);
bytes = allocate enough memory
bytes = GetDIBits(screen_dc, bmp, hbmp)
file = CreateFile();
WriteFile(bytes);
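Fleshed out a little, the capture portion might look roughly like this. This is my own C++ paraphrase of the MSDN sample rather than the attached source; error handling and the step that writes the BITMAPFILEHEADER/BITMAPINFOHEADER plus pixel bits to a .bmp file are omitted.

#include <windows.h>

// Capture the primary display into a 32-bpp DIB.
// The caller frees *bits with delete[]; writing the .bmp headers and bits
// to a file follows the "Capturing an Image" sample and is omitted here.
bool CaptureScreen(BITMAPINFOHEADER &bih, BYTE **bits) {
    HDC screen_dc = GetDC(nullptr);                 // DC for the whole screen
    HDC memory_dc = CreateCompatibleDC(screen_dc);  // DC to render into

    RECT rect = {};
    GetClientRect(GetDesktopWindow(), &rect);       // primary display only
    int width = rect.right - rect.left;
    int height = rect.bottom - rect.top;

    HBITMAP hbmp = CreateCompatibleBitmap(screen_dc, width, height);
    HGDIOBJ old = SelectObject(memory_dc, hbmp);

    // copy the screen into the memory bitmap
    BitBlt(memory_dc, 0, 0, width, height, screen_dc, 0, 0, SRCCOPY);
    SelectObject(memory_dc, old); // deselect before calling GetDIBits

    // ask for the bits as a 32-bpp top-down DIB
    bih = {};
    bih.biSize = sizeof(bih);
    bih.biWidth = width;
    bih.biHeight = -height; // negative height = top-down rows
    bih.biPlanes = 1;
    bih.biBitCount = 32;
    bih.biCompression = BI_RGB;

    *bits = new BYTE[width * height * 4];
    BITMAPINFO bi = {};
    bi.bmiHeader = bih;
    int rows = GetDIBits(memory_dc, hbmp, 0, height, *bits, &bi, DIB_RGB_COLORS);

    DeleteObject(hbmp);
    DeleteDC(memory_dc);
    ReleaseDC(nullptr, screen_dc);
    return rows == height;
}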
• #### Mark your variadic logging function with __format_string to have PREfast catch format specifier errors
There are a handful of Problems (with a capital P) which occur over and over again in programming. One of them is Logging.
It is incredibly convenient to use the variadic printf function to log strings with values of common types embedded in them:
// spot the bug
LOG(L"Measurement shows %lg% deviation", 100.0 * abs(expected - actual) / expected);
However, printf is very error prone. It is very easy to use the wrong format specifier like %d instead of %Id, or to forget to escape a special character like % or \.
In particular, the above line contains a bug.
Static code analysis tools like PREfast are quite good at catching these kinds of errors. If my LOG macro was something like this, PREfast would catch the bug:
#define LOG(fmt, ...) wprintf(fmt L"\n", __VA_ARGS__)
This works because PREfast knows that the first argument to wprintf is a format string, and can match up the format specifiers with the trailing arguments and verify that they match.
If you implement your own variadic logger function, though, PREfast doesn't necessarily know that the last explicit argument is a format specifier - you have to tell it. For example, PREfast will NOT catch format specifier issues if the LOG macro is defined like this:
// PREfast doesn't know Format is a format string
interface IMyLogger { virtual void Log(LPCWSTR Format, ...) = 0; };
extern IMyLogger *g_Logger;
#define LOG(fmt, ...) g_Logger->Log(fmt, __VA_ARGS__)
How do you tell it? Well, let's look at the declaration of wprintf. It's in (SDK)\inc\crt\stdio.h:
_CRTIMP __checkReturn_opt int __cdecl wprintf(__in_z __format_string const wchar_t * _Format, ...);
The relevant part here is __format_string. So the fixed IMyLogger declaration looks like this:
// Now PREfast can catch format specifier issues
interface IMyLogger { virtual void Log(__format_string LPCWSTR Format, ...) = 0; };
extern IMyLogger *g_Logger;
#define LOG(fmt, ...) g_Logger->Log(fmt, __VA_ARGS__)
• #### Beep sample
A question came in today about the Beep(...) API¹ not being able to set the frequency of the beep that was generated. In order to confirm that it worked I whipped up a quick sample which would take the frequency (and duration) on the command line. Source and binaries attached.
For fun I added the ability to pass in the frequency using Scientific pitch notation. Note that A4 is about 431 Hz using this scale, rather than the more standard 440 Hz².
for (int i = 1; i + 1 < argc; i += 2) {
ULONG frequency;
HRESULT hr = HertzFromScientificPitchNotation(argv[i], &frequency);
if (FAILED(hr)) { return -__LINE__; }
ULONG duration;
hr = UlongFromString(argv[i + 1], &duration);
if (FAILED(hr)) { return -__LINE__; }
if (!Beep(frequency, duration)) {
LOG(L"Beep(%u, %u) failed: GetLastError() = %u", frequency, duration, GetLastError());
return -__LINE__;
}
}
So, for example, you can play a certain well-known tune via Beep() using this command:
>beep.exe C3 2000 G3 2000 C4 4000 E4 500 Eb4 3500 C3 500 G2 500 C3 500 G2 500 C3 500 G2 500 C3 2000
1 More on the Beep(...) API:
The official Beep(...) documentation
A couple of blog posts from Larry Osterman:
Beep Beep
What’s up with the Beep driver in Windows 7?
2 If you want the more standard pitch, change this line:
double freq = 256.0;
To this:
double freq = 440.0 * pow(semitoneRatio, -9.0); // C4 is 9 semitones below A4
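The HertzFromScientificPitchNotation helper isn't shown in the post; here is a rough guess at what such a conversion could look like, anchored at C4 = 256 Hz. The parsing rules are my assumption, not the attached source.

#include <windows.h>
#include <cmath>
#include <cstdlib>
#include <cwctype>

// Convert e.g. L"Eb4" or L"C3" to a frequency in Hz, scientific pitch (C4 = 256 Hz).
// This is a guess at the helper's behavior, not the original implementation.
HRESULT HertzFromScientificPitchNotation(LPCWSTR pitch, ULONG *pFrequency) {
    // semitone offsets of C D E F G A B relative to C
    static const int offsets[7] = { 0, 2, 4, 5, 7, 9, 11 };

    WCHAR letter = towupper(pitch[0]);
    if (letter < L'A' || letter > L'G') { return E_INVALIDARG; }
    int semitones = offsets[(letter - L'C' + 7) % 7];

    int i = 1;
    if (pitch[i] == L'#') { semitones++; i++; }      // sharp
    else if (pitch[i] == L'b') { semitones--; i++; } // flat

    if (!iswdigit(pitch[i])) { return E_INVALIDARG; }
    int octave = _wtoi(pitch + i);

    // C4 = 256 Hz; each octave doubles, each semitone is a factor of 2^(1/12)
    double freq = 256.0 * pow(2.0, (octave - 4) + semitones / 12.0);
    *pFrequency = (ULONG)(freq + 0.5);
    return S_OK;
}

With this scheme A4 works out to 256 · 2^(9/12) ≈ 431 Hz, as mentioned above.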
• #### Excruciating rhymes
I was watching How the Grinch Stole Christmas and I was struck by this rhyme (from the song "You're a Mean One, Mr. Grinch:")
You're a nauseous super naus!
You're a dirty crooked jockey, and you drive a crooked hoss
-- Dr. Seuss, How the Grinch Stole Christmas
I tried to see what other particularly excruciating rhymes I could remember. I came up with two:
You know, that little guy, he's got me feeling all contempt-ey:
He takes his boat out loaded up and brings it back in empty.
-- Phil Vischer, Lyle the Kindly Viking
And then of course:
In short, when I've a smattering of elemental strategy,
You'll say a better Major-General had never sat a-gee!
-- W. S. Gilbert, The Pirates of Penzance
• #### Teaching someone to fish and the AKS primality test
This morning my wife (whom I love and adore) woke me up at 3:00 AM with an urgent question.
"Hey!" she said, shaking me awake.
"Is 19 prime?"
...
Like a fool, I answered her. "Yes. Yes it is." Off she went.
This is a true response, but not a correct response. I realized shortly afterwards that a correct response would look more like:
I'm glad you asked me that, dear. Eratosthenes, the Greek mathematician, discovered a very efficient way to list primes in about 200 BC that is still in use today. You start by writing out all the numbers from 1 to 19: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19. 1 is a very special number (it's the multiplicative identity, or what algebraists would call a unit) so we put a square around it. The first number we didn't consider was 2, so we circle it - that means it's prime - and then cross out every multiple of 2 after that. Going back, the first number we didn't consider was 3... and so on until we get 1 2 3 X 5 X 7 X X X 11 X 13 X X X 17 X 19. A common optimization is to realize that after circling a prime p, the first number you cross out (that wasn't crossed out before) is always p², which means that after circling a number you can immediately jump to its square, and also means you can stop crossing out altogether once you hit p > √N...
This would allow her to fish rather than waking me up when she wanted a fish.
An even better response would have been:
It's funny you've asked me that. Number theorists and cryptanalysts have considered this question for thousands of years. Eratosthenes' method (see above) is a very simple way to find all the primes below a given number, but an efficient way to determine whether a given number is prime was found only very recently.
In practice, the test that is usually used is the randomized version of the Miller-Rabin test. Although this is nondeterministic, it is very fast indeed, and will tell you to a very high degree of certainty whether the given number is prime. This usually suffices.
There is a deterministic version of the Miller-Rabin test too, which is guaranteed to tell you with perfect certainty whether the given number is prime. But it only works if you believe in the generalized Riemann hypothesis. Most mathematicians nowadays believe the hypothesis, but no-one has (yet) been able to prove it.
Amazingly, in 2002 three mathematicians named Manindra Agrawal, Neeraj Kayal, and Nitin Saxena came up with a deterministic, proven, polynomial-time (specifically, polynomial in the number of digits in the input) method for telling whether a given number is prime. This is known as the AKS primality test. The most striking thing about this test is its simplicity - if something this straightforward can be found after thousands of years of looking, what else out there remains to be found?
Such a response would probably prevent her from waking me again for any mathematical problem at all. Boo-ya.
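(As an aside - and this is my own illustration, not part of the original post or the attached source - the randomized Miller-Rabin test mentioned above fits in a few dozen lines of C++ for 64-bit inputs; the 128-bit intermediate product assumes a GCC/Clang-style compiler.)

#include <cstdint>
#include <random>

// One Miller-Rabin witness round; returns false if "a" proves n composite.
static bool MillerRabinRound(uint64_t n, uint64_t a, uint64_t d, int s) {
    auto mulmod = [n](uint64_t x, uint64_t y) -> uint64_t {
        return (uint64_t)((unsigned __int128)x * y % n); // 128-bit product to avoid overflow
    };

    // compute a^d mod n by square-and-multiply
    uint64_t x = 1, base = a % n, e = d;
    while (e) {
        if (e & 1) { x = mulmod(x, base); }
        base = mulmod(base, base);
        e >>= 1;
    }

    if (x == 1 || x == n - 1) { return true; }
    for (int i = 1; i < s; i++) {
        x = mulmod(x, x);
        if (x == n - 1) { return true; }
    }
    return false; // "a" is a witness: n is definitely composite
}

// Randomized Miller-Rabin: "composite" is certain; "probably prime" has
// error probability at most 4^-rounds.
bool ProbablyPrime(uint64_t n, int rounds = 20) {
    if (n < 4) { return n == 2 || n == 3; }
    if (n % 2 == 0) { return false; }

    // write n - 1 = d * 2^s with d odd
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; s++; }

    std::mt19937_64 rng(std::random_device{}());
    std::uniform_int_distribution<uint64_t> pick(2, n - 2);
    for (int i = 0; i < rounds; i++) {
        if (!MillerRabinRound(n, pick(rng), d, s)) { return false; }
    }
    return true;
}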
My own Perl implementation follows:
use strict;
# use bignum;
print <<USAGE and exit 0 unless @ARGV;
$0 [-v] n
    Use the AKS primality test to check whether n is prime
    -v adds verbose log spew
USAGE

sub is_power($$);
sub ceil_log2($);
sub first_r($$);
sub check_gcds($$);
sub check_polynomials($$$);
sub gcd($$);
sub totient($);
sub polypow($$\@);
sub polymult($$\@\@);
sub polyeq(\@\@);

my $verbose = $ARGV[0] eq "-v";
shift @ARGV if $verbose;

die "Expected only one argument" unless 1 == @ARGV;
my $n = shift;

# step 0: restrict to integers >= 2
print "$n is not an integer and so is NEITHER PRIME NOR COMPOSITE\n" and exit 0 unless int($n) == $n;
print "$n < 2 and so is NEITHER PRIME OR COMPOSITE\n" and exit 0 unless $n >= 2;

# step 1: check if the number is a power of some lower number.
# this can be done quickly by iterating over the exponent (2, 3, ...)
# and doing a binary search on the base.
# we start at the top and work down for performance reasons;
# several subroutines need to know ceil(log2(n)) so we calculate it once and pass it around.
my $log2_n = ceil_log2($n);
is_power($n, $log2_n) and exit 0;
print "Not a power.\n";

# step 2: find the smallest r such that o_r(n) > (log2 n)^2
# where o_r(n) is the multiplicative order of n mod r
# that is, the smallest k such that n^k == 1 mod r
my $r = first_r($n, $log2_n);
print "r = $r\n";

# step 3: for all a between 2 and r inclusive, check whether gcd(a, n) > 1
check_gcds($n, $r) or exit 0;

# step 4: if r >= n, we're done
if ($r >= $n) {
    print "$r >= $n so $n is PRIME\n";
    exit 0;
}

# step 5: for all a between 1 and floor( sqrt(phi(r)) log2(n) )
# check whether (x + a)^n = x^n + a mod x^r - 1, n
check_polynomials($n, $r, $log2_n) or exit 0;

# step 6: if we got this far, n is prime
print "$n is PRIME\n";

sub is_power($$) {
    my $n = shift;
    my $log2_n = shift; # actually ceil(log2(n))

    print "Checking for power-ness...\n";

    # we consider numbers of the form b^i
    # we iterate over the exponent i
    # starting at i = ceil(log2(n)) and working down to i = 2
    #
    # for each exponent we do a binary search on the base
    # the lowest the base can be is 2
    # and the highest the base can be (initially) is 2
    #
    # we set up bounds on the base that are guaranteed to
    # surround the actual base
    my $b_low = 1;  # 1 ^ ceil(log2(n)) = 1 < n
    my $b_high = 3; # 3 ^ ceil(log2(n)) > 2 ^ log2(n) = n

    for (my $i = $log2_n; $i >= 2; $i--) {
        print "\tb^$i\n" if $verbose;

        # let's check that the bounds are really correct
        die "$b_low ^ $i is not < $n" unless $b_low ** $i < $n;
        die "$b_high ^ $i is not > $n" unless $b_high ** $i > $n;

        # do a binary search to find b such that b ^ i = n
        while ($b_high - $b_low > 1) {
            print "\t\tb^$i: b is between $b_low and $b_high\n" if $verbose;

            my $b = int(($b_low + $b_high)/2);
            my $t = $b ** $i;

            if ($t == $n) {
                print "$n = $b^$i; $n is COMPOSITE\n";
                return 1;
            }

            ($t > $n ? $b_high : $b_low) = $b;
        }

        # as we pass from the exponent (say, 5)
        # to the exponent below (say, 4)
        # we need to reconsider our bounds
        #
        # b_low can remain the same because b ^ (i - 1) is even less than b ^ i
        # OPEN ISSUE: can we even raise b_low?
        #
        # but we need to raise b_high since b ^ i > n does NOT imply b ^ (i - 1) > n
        #
        # we'll square b_high; b ^ i > n => (b ^ 2) ^ (i - 1) = b ^ (2 i - 2) > n
        # since i >= 2
        #
        # OPEN ISSUE: is there a better way to raise this higher bound? Does this help much?
        $b_high *= $b_high;
    }

    # nope, not a power
    return 0;
}

sub ceil_log2($) {
    my $n = shift;

    my $i = 0;
    my $t = 1;
    until ($t >= $n) {
        $i++;
        $t *= 2;
    }

    return $i;
}

sub first_r($$) {
    my $n = shift;
    my $log2_n = shift; # actually ceil(log2(n))

    my $s = $log2_n ** 2;
    print "Looking for the first r where o_r($n) > $s...\n";

    # for each r we want to find the smallest k such that
    # n^k == 1 mod r
    my $r;
    for ($r = 2; ; $r++) {
        # print "\tTrying $r...\n";

        # find the multiplicative order of n mod r
        my $k = 1;
        my $t = $n % $r;
        until (1 == $t or $k > $s) {
            $t = ($t * $n) % $r;
            $k++;
        }

        if ($k > $s) {
            # print "\to_$r($n) is at least $k\n";
            last;
        } else {
            # print "\to_$r($n) = $k\n";
        }
    }

    return $r;
}

sub check_gcds($$) {
    my ($n, $r) = @_;

    print "Checking GCD($n, a) for a = 2 to $r...\n";

    for (my $a = 2; $a <= $r; $a++) {
        my $g = gcd($n, $a);
        next if ($g == $n); # this is OK
        if (1 != $g) {
            print "gcd($n, $a) = $g; $n is COMPOSITE\n";
            return 0;
        }
    }

    print "All GCDs are 1 or $n\n";
    return 1;
}

sub gcd($$) {
    my ($x, $y) = @_;
    ($x, $y) = ($y, $x) unless $x > $y;
    while ($y) {
        ($x, $y) = ($y, $x % $y);
    }
    return $x;
}

sub check_polynomials($$$) {
    my $n = shift;
    my $r = shift;
    my $log2_n = shift; # actually ceil(log2(n))

    # iterate over a from 1 to floor( sqrt(phi(r)) log2(n) )
    # for each a, check whether the polynomial equality holds:
    # (x + a)^n = x^n + a mod (x^r - 1, n)
    # if it fails to hold, the number is composite
    #
    # first we need to evaluate phi(r) so we can determine the upper bound
    # OPEN ISSUE: this seems to be a potential weakness in the algorithm
    # because the usual way to evaluate phi(r) is to find the prime factorization of r
    # and then form the product r*PI(1 - 1/p) where the product ranges over all primes
    # which divide r
    my $phi = totient($r);

    # a < sqrt(phi(r)) * log2(n) => a^2 < phi(r) * (log2(n))^2
    my $a2_max = $phi * $log2_n * $log2_n;
    print "Checking polynomials up to roughly ", int sqrt($a2_max), "...\n";

    for (my $a = 1; $a * $a <= $a2_max; $a++) {
        print "\ta = $a...\n" if $verbose;

        # polynomials are of the form (c0, c1, c2, ..., ci, ...)
        # which corresponds to c0 + c1 x + c2 x^2 + ... + ci x^i + ...)
        my @x = (0, 1);
        my @x_plus_a = ($a % $n, 1);

        my @lhs = polypow($n, $r, @x_plus_a);

        # POTENTIAL OPTIMIZATION:
        # x^n + a mod (x^r - 1) is just x^(n % r) + a
        # and we know n % r != 0
        my @rhs = polypow($n, $r, @x); # x^n
        $rhs[0] = ($rhs[0] + $a) % $n; # + a

        next if polyeq(@lhs, @rhs);

        print "(x + $a)^$n is not equal to x^$n + $a mod(x^$r - 1, $n)\n";
        print "So $n is COMPOSITE\n";
        return 0;
    }

    return 1;
}

sub totient($) {
    my $r = shift;

    print "Finding the Euler totient of $r\n";

    # we'll do a trial division to find the totient
    # there are faster ways that use a sieve
    # but we don't know how big r is
    my $t = $r;

    # by construction p will always be prime when it is used
    # OPEN ISSUE: this might be slow
    for (my $p = 2; $r > 1; $p++) {
        next if $r % $p;
        print "\t$p is a factor\n" if $verbose;

        # decrease the totient
        $t /= $p;
        $t *= $p - 1;

        # decrease r
        $r /= $p; # we know there's at least one factor of p
        $r /= $p until $r % $p; # there might be more
    }

    print "Totient is $t\n";
    return $t;
}

sub polypow($$\@) {
    my $n = shift; # this is both the mod and the exponent
    my $r = shift;
    my @base = @{ +shift };

    my $exp = $n;
    my @result = (1); # 1

    # print "\t(", join(" ", @base), ")^$exp mod (x^$r - 1, $n)\n" if $verbose;

    # basic modpow routine, but with polynomials
    while ($exp) {
        if ($exp % 2) {
            @result = polymult($n, $r, @result, @base);
        }
        $exp = int ($exp / 2);
        @base = polymult($n, $r, @base, @base);
    }

    # print "\t= (", join(" ", @result), ")\n" if $verbose;

    return @result;
}

sub polymult($$\@\@) {
    my $n = shift;
    my $r = shift;
    my @first = @{ +shift };
    my @second = @{ +shift };

    # print "\t\t(", join(" ", @first), ") * (", join(" ", @second), ") mod (x^$r - 1, $n)\n" if $verbose;

    my @result = ();

    # first do a straight multiplication first * second
    my $s = @second - 1;
    for (my $i = @first - 1; $i >= 0; $i--) {
        for (my $j = $s; $j >= 0; $j--) {
            my $k = $i + $j;
            $result[$k] += $first[$i] * $second[$j];
            $result[$k] %= $n;
        }
    }

    # then do a straight mod x^r - 1
    # consider a polynomial
    # c0 + ... + ck x^k
    # with k >= r
    # we can subtract ck (x^r - 1)
    # without changing the mod value
    # the net effect is to eliminate the x^k term
    # and add ck to the x^(k - r) term
    for (my $i = @result - 1; $i >= $r; $i--) {
        my $j = $i - $r;
        $result[$j] += $result[$i];
        $result[$j] %= $n;
        pop @result;
    }

    # eliminate any leading zero terms
    for (my $i = @result - 1; 0 == $result[$i]; $i--) {
        pop @result;
    }

    # print "\t\t= (", join(" ", @result), ")\n" if $verbose;

    return @result;
}

sub polyeq(\@\@) {
    my @lhs = @{ +shift };
    my @rhs = @{ +shift };

    # print "(", join(" ", @lhs), ") = (", join(" ", @rhs), ")?\n" if $verbose;

    return 0 unless @lhs == @rhs;

    for (my $i = @lhs - 1; $i >= 0; $i--) {
        return 0 unless $lhs[$i] == $rhs[$i];
    }

    return 1;
}
Here's the output when I run it on 19:
>perl -w aks.pl 19
Checking for power-ness...
Not a power.
Looking for the first r where o_r(19) > 25...
r = 19
Checking GCD(19, a) for a = 2 to 19...
All GCDs are 1 or 19
19 >= 19 so 19 is PRIME
And here's the output with a bigger input:
>perl -w aks.pl 997
Checking for power-ness...
Not a power.
Looking for the first r where o_r(997) > 100...
r = 103
Checking GCD(997, a) for a = 2 to 103...
All GCDs are 1 or 997
Finding the Euler totient of 103
Totient is 102
Checking polynomials up to roughly 100...
997 is PRIME
• #### unattend.xml: turning on Remote Desktop automatically
Here are the portions of my unattend.xml file which are needed to turn on Remote Desktop automatically.
This piece flips the "no remote desktop" kill switch to "allow."
<settings pass="specialize">
...
<component name="Microsoft-Windows-TerminalServices-LocalSessionManager" ...>
<fDenyTSConnections>false</fDenyTSConnections>
That's not enough; it is also necessary to poke a hole in the firewall to allow inbound connections. I use an indirect string for the Group name, to allow for installing localized builds. This points to the "Remote Desktop" feature group.
<settings pass="specialize">
...
<component name="Networking-MPSSVC-Svc" ...>
<FirewallGroups>
<Active>true</Active>
<Profile>all</Profile>
<Group>@FirewallAPI.dll,-28752</Group>
If your user account is a member of the "Administrators" group, you're done:
<settings pass="oobeSystem">
<component name="Microsoft-Windows-Shell-Setup" ...>
<UserAccounts>
<LocalAccounts>
<PlainText>true</PlainText>
But if you're like me and you don't want to live in the Administrators group, you need to join the Remote Desktop Users group to be able to log in remotely:
<settings pass="oobeSystem">
<component name="Microsoft-Windows-Shell-Setup" ...>
<UserAccounts>
<DomainAccounts>
<Domain>redmond.corp.microsoft.com</Domain>
<Group>RemoteDesktopUsers</Group>
<Name>MatEer</Name>
• #### Weighing the Sun and the Moon
In an earlier post I mentioned how the Cavendish experiment allowed us to weigh the Earth - to determine the mass of the Earth m_E. Newton knew the acceleration due to gravity on the surface of the Earth and was able to use that to find the product G m_E; Cavendish determined G directly, and was thus able to solve for m_E. He would also have been able to find the mass of the sun as follows:
m_E a_E = G m_E m_S / r_E²
G, r_E, and a_E = v_E² / r_E are known, so we can solve for m_S.
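For example, using round modern values (these particular numbers are my own illustration, not from the original post): v_E ≈ 2.98 × 10⁴ m/s, r_E ≈ 1.50 × 10¹¹ m, and G ≈ 6.67 × 10⁻¹¹ m³ kg⁻¹ s⁻². The m_E cancels, leaving m_S = v_E² r_E / G ≈ (2.98 × 10⁴)² × 1.50 × 10¹¹ / 6.67 × 10⁻¹¹ ≈ 2.0 × 10³⁰ kg, which matches the accepted solar mass.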
But calculating the mass of the moon is trickier.
Once we were able to put a satellite around the moon we could measure its orbital radius and speed, deduce the acceleration, and use that plus the known G to calculate the mass of the moon. But prior to that we were limited to techniques like:
The moon does not exactly orbit the Earth, but instead orbits the center of mass of the moon/Earth system. By careful observation we can determine where this center of mass is. We can then measure the distance between the center of mass and the Earth's center. This plus the known mass of the Earth and the distance of the Earth from the Moon allows us to determine the mass of the Moon.
If we're lucky enough to see a foreign object come close to the moon, we can determine how much it is accelerated by the Moon. This will allow us to determine the mass of the Moon using the technique above. (We won't be able to determine the mass of the foreign object, but we don't need it.)
When the USSR launched Sputnik, American scientists really wanted to know what its mass was. But because none of the techniques above were useful, they were unable to determine it.
• #### Programmatically rearranging displays
Most of my test machines and my laptop have a single display; but I have two dev machines which are each connected to two displays.
When I clean install Windows, I sometimes need to rearrange the displays:
Since I clean install Windows frequently, I wrote myself a little C++ app which does this programmatically using EnumDisplayDevices / EnumDisplaySettings / ChangeDisplaySettingsEx.
Source and binaries attached.
Pseudocode:
for (each device returned by EnumDisplayDevices) {
grab the position and the height/width using EnumDisplaySettings
}
calculate the desired position of the secondary monitor
Set it using ChangeDisplaySettingsEx with DM_POSITION
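In slightly more detail, the enumeration step might look roughly like this (my own C++ paraphrase, not the attached source; it assumes a UNICODE build):

#include <windows.h>
#include <cstdio>

// Print the current position and resolution of every display attached to the desktop.
void EnumerateDisplays() {
    for (DWORD i = 0; ; i++) {
        DISPLAY_DEVICE dd = {};
        dd.cb = sizeof(dd);
        if (!EnumDisplayDevices(nullptr, i, &dd, 0)) { break; } // no more devices

        if (!(dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)) { continue; }

        DEVMODE mode = {};
        mode.dmSize = sizeof(mode);
        if (!EnumDisplaySettings(dd.DeviceName, ENUM_CURRENT_SETTINGS, &mode)) { continue; }

        wprintf(
            L"%s%s: position (%ld, %ld), %u x %u\n",
            dd.DeviceName,
            (dd.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE) ? L" (primary)" : L"",
            mode.dmPosition.x, mode.dmPosition.y,
            mode.dmPelsWidth, mode.dmPelsHeight
        );

        // a non-primary device found here is the one to move with
        // ChangeDisplaySettingsEx and DM_POSITION, as in the EDIT below
    }
}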
Call as:
>swapmonitors
Moved secondary monitor to (1920, 0)
EDIT: Oops: 0 should be CDS_GLOBAL | CDS_UPDATEREGISTRY, to make the settings apply to all users, and to persist across display resets / reboots
LONG status = ChangeDisplaySettingsEx(
nameSecondary,
&mode,
nullptr, // reserved
CDS_GLOBAL | CDS_UPDATEREGISTRY, // was 0
nullptr // no video parameter
);
• #### Windows Sound test team rowing morale event
Last Friday the Windows Sound test team went kayaking. We went to the Agua Verde paddle club and kayaked around Union Bay for a while.
Here's the route we took:
More detail:
http://connect.garmin.com/activity/179545084
• #### Grabbing large amounts of text from STDIN in O(n) time
Last time I blogged about an O(n log n) solution to finding the longest duplicated substring in a given piece of text; I have since found an O(n) algorithm, which I linked to in the comments.
But my blog post used an O(n²) algorithm to read the text from STDIN! It looked something like this:
while (!done) {
grab 2 KB of text
allocate a new buffer which is 2 KB bigger
copy the old text and the new text together into the new buffer
free the old buffer
}
There are two better algorithms:
while (!done) {
grab an amount of text equal to the amount we've grabbed so far
allocate a new buffer which is twice as large as the last buffer
copy the old text and the new text together into the new buffer
free the old buffer
}
And:
while (!done) {
grab 2 KB of text
add this to the end of a linked list of text chunks
}
allocate a buffer whose size is the total size of all the chunks added together
walk the linked list and copy the text of each chunk into the buffer
Both "better" algorithms are O(n) but the latter wastes less space.
Here's the improved code:
struct Chunk {
WCHAR text[1024];
Chunk *next;
Chunk() : next(nullptr) { text[0] = L'\0'; }
};
class DeleteChunksOnExit {
public:
DeleteChunksOnExit() : m_p(nullptr) {}
~DeleteChunksOnExit() {
Chunk *here = m_p;
while (here) {
Chunk *next = here->next;
delete here;
here = next;
}
}
void Set(Chunk *p) { m_p = p; }
private:
Chunk *m_p;
};
...
Chunk *head = nullptr;
Chunk *tail = nullptr;
DeleteChunksOnExit dcoe;
size_t total_length = 0;
bool done = false;
while (!done) {
Chunk *buffer = new Chunk();
if (nullptr == buffer) {
LOG(L"Could not allocate memory for buffer");
return nullptr;
}
if (nullptr == head) {
// this runs on the first pass only
head = buffer;
tail = buffer;
dcoe.Set(buffer);
} else {
tail->next = buffer;
tail = buffer;
}
if (fgetws(buffer->text, ARRAYSIZE(buffer->text), stdin)) {
total_length += wcslen(buffer->text);
} else if (feof(stdin)) {
done = true;
} else {
return nullptr;
}
}
// gather all the allocations into a single string
size_t size = total_length + 1;
WCHAR *text = new WCHAR[size];
if (nullptr == text) {
LOG(L"Could not allocate memory for text");
return nullptr;
}
DeleteArrayOnExit<WCHAR> deleteText(text);
WCHAR *temp = text;
for (Chunk *here = head; here; here = here->next) {
if (wcscpy_s(temp, size, here->text)) {
LOG(L"wcscpy_s returned an error");
return nullptr;
}
size_t len = wcslen(here->text);
temp += len;
size -= len;
}
deleteText.Cancel();
return text;
}
• #### Changing the desktop wallpaper using IDesktopWallpaper
About a year ago I wrote about how to change the desktop wallpaper using SystemParametersInfo(SPI_SETDESKWALLPAPER).
Windows 8 desktop apps (not Store apps) can use the new IDesktopWallpaper API to get a finer level of control. So I wrote an app which uses the new API, though I just set the background on all monitors to the same image path, and I don't exercise any of the advanced features of the API.
Pseudocode:
CoInitialize
CoCreateInstance(DesktopWallpaper)
pDesktopWallpaper->SetWallpaper(NULL, full-path-to-image-file)
pDesktopWallpaper->Release()
CoUninitialize
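A minimal C++ sketch of that pseudocode (my own illustration, not the attached source; most error handling is omitted):

#include <windows.h>
#include <shobjidl.h>

// Set the same wallpaper on all monitors; "path" must be a full path to the image.
HRESULT SetWallpaperOnAllMonitors(LPCWSTR path) {
    HRESULT hr = CoInitialize(nullptr);
    if (FAILED(hr)) { return hr; }

    IDesktopWallpaper *pDesktopWallpaper = nullptr;
    hr = CoCreateInstance(
        __uuidof(DesktopWallpaper), nullptr, CLSCTX_ALL,
        IID_PPV_ARGS(&pDesktopWallpaper)
    );
    if (SUCCEEDED(hr)) {
        // a null monitor ID applies the image to every monitor
        hr = pDesktopWallpaper->SetWallpaper(nullptr, path);
        pDesktopWallpaper->Release();
    }

    CoUninitialize();
    return hr;
}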
Usage:
>desktopwallpaper.exe "%userprofile%\pictures\theda-bara.bmp"
Setting the desktop wallpaper to C:\Users\MatEer\pictures\theda-bara.bmp succeeded.
Source and binaries attached
• #### Programmatically adding a folder to a shell library (e.g., the Music library)
I wrote a selfhost tool which allows me to add a folder (for example, C:\music) to a shell library (for example, the Music library.)
This was before I found out about the shlib shell library sample which Raymond Chen blogged about. If you're looking for a sample on how to manipulate shell libraries, prefer that one to this.
Pseudocode:
CoInitialize
pShellLibrary->Commit()
CoUninitialize
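For what it's worth, here is a sketch of how the middle of that pseudocode might look against the IShellLibrary API. This is my own illustration, not the attached source; again, prefer the shlib sample for anything real.

#include <windows.h>
#include <shobjidl.h>
#include <knownfolders.h>

// Add 'folderPath' to the user's Music library.
HRESULT AddFolderToMusicLibrary(LPCWSTR folderPath) {
    HRESULT hr = CoInitialize(nullptr);
    if (FAILED(hr)) { return hr; }

    IShellLibrary *pShellLibrary = nullptr;
    hr = SHLoadLibraryFromKnownFolder(
        FOLDERID_MusicLibrary, STGM_READWRITE, IID_PPV_ARGS(&pShellLibrary)
    );
    if (SUCCEEDED(hr)) {
        IShellItem *pFolder = nullptr;
        hr = SHCreateItemFromParsingName(folderPath, nullptr, IID_PPV_ARGS(&pFolder));
        if (SUCCEEDED(hr)) {
            hr = pShellLibrary->AddFolder(pFolder);
            if (SUCCEEDED(hr)) {
                hr = pShellLibrary->Commit(); // persist the change
            }
            pFolder->Release();
        }
        pShellLibrary->Release();
    }

    CoUninitialize();
    return hr;
}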
Usage:
>shelllibrary
|
Iterative Approximation of Equilibrium Points of Evolution Equations
Abstract
Suppose that E is a real Banach space which is both uniformly convex and q-uniformly smooth, and that T is a Lipschitz pseudocontractive self-mapping of a closed convex and bounded subset K of E. Suppose F(T) denotes the set of fixed points of T, U denotes the sunny nonexpansive retraction of K onto F(T), and w is any point of K. It is proved that the sequence {x_n}_{n=0}^∞ generated from an arbitrary x_0 ∈ K by x_{n+1} = β_n w + (1 − β_n) (1/(n+1)) Σ_{j=0}^{n} {(1 − a_j)I + a_j T} x_n (where I denotes the identity operator on E and {a_n}_{n=0}^∞ and {β_n}_{n=0}^∞ are real sequences in (0,1] satisfying certain conditions) converges strongly to Uw. This result is similar to, and in some sense an improvement on, the theorems of Chidume (Proc. Amer. Math. Soc. 129(8) (2001), 2245-2251) and Ishikawa (Proc. Amer. Math. Soc. 44(1) (1974), 147-150). Furthermore, suppose that E is an arbitrary real normed linear space and A : E → 2^E is a uniformly continuous and uniformly quasi-accretive multi-valued map with nonempty closed values such that the range of (I − A) is bounded and the inclusion f ∈ Ax has a solution x* ∈ E for an arbitrary but fixed f ∈ E. Then it is proved that the sequence {x_n}_{n=0}^∞ generated from an arbitrary x_0 ∈ E by x_{n+1} = (1 − c_n)x_n + c_n ξ_n, ξ_n ∈ (I − A)x_n, ∀ n ≥ 0 (where {c_n}_{n=0}^∞ is a real sequence in [0,1) satisfying certain conditions) converges strongly to x*. Moreover, suppose E is an arbitrary real normed linear space and T : D(T) ⊂ E → E is a locally Lipschitzian and uniformly hemicontractive map with open domain D(T) and a fixed point x* ∈ D(T). Then there exists a neighbourhood B of x* such that the sequence {x_n}_{n=0}^∞ generated from an arbitrary x_0 ∈ B ⊂ D(T) by x_{n+1} = (1 − c_n)x_n + c_n T x_n, ∀ n ≥ 0 (where {c_n}_{n=0}^∞ is a real sequence in [0,1) satisfying certain conditions) remains in B and converges strongly to x*. These results are improvements on the results of Alber and Delabriere (Operator Theory, Advances and Applications 98 (1997), 7-22), Bruck (Bull. Amer. Math. Soc. 79 (1973), 1259-1262), Chidume and Moore (J. Math. Anal. Appl. 245(1) (2000), 142-160) and Osilike (Nonlinear Analysis 36(1) (1999), 1-9). Finally, if E is a real Banach space and T : E → E is a map with F(T) := {x ∈ E : Tx = x} ≠ ∅ satisfying the accretive-type condition (x − Tx, j(x − x*)) ≥ λ‖x − Tx‖² for all x ∈ E, x* ∈ F(T) and λ > 0, then a necessary and sufficient condition for the convergence of the sequence {x_n}_{n=0}^∞, generated from an arbitrary x_0 ∈ E by x_{n+1} = (1 − c_n)x_n + c_n T x_n, ∀ n ≥ 0 (where {c_n}_{n=0}^∞ is a real sequence in [0,1) satisfying certain conditions), to a fixed point of T is established. This result extends the results of Maruster (Proc. Amer. Math. Soc. 66 (1977), 69-73) and Chidume (J. Nigerian Math. Soc. 3 (1984), 57-62) and resolves a question raised by Chidume (J. Nigerian Math. Soc. 3 (1984), 57-62).
|
# How do you use Heron's formula to find the area of a triangle with sides of lengths 9 , 5 , and 11 ?
Mar 15, 2016
≈ 22.185
#### Explanation:
This is a two step process
step 1 : Find half the perimeter (s) of the triangle
step 2 : calculate the area (A)
let a = 9 , b = 5 and c = 11
step 1 : s $= \frac{a + b + c}{2} = \frac{9 + 5 + 11}{2} = \frac{25}{2} = 12.5$
step 2 : $A = \sqrt{s \left(s - a\right) \left(s - b\right) \left(s - c\right)}$
$= \sqrt{12.5 \left(12.5 - 9\right) \left(12.5 - 5\right) \left(12.5 - 11\right)}$
$\Rightarrow A = \sqrt{12.5 \times 3.5 \times 7.5 \times 1.5} \approx 22.185$
|
# Rayleigh criterion for resolution pdf
The Rayleigh criterion stated in the equation $\theta=1.22\frac{\lambda}{D}$ gives the smallest possible angle θ between point sources, or the best obtainable resolution. Once this angle is found, the distance between stars can be calculated, since we are given how far away they are. Summary. Diffraction limits resolution. For a circular aperture, lens, or mirror, the Rayleigh criterion states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. Limits of Resolution: The Rayleigh Criterion * OpenStax This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License. Abstract: Discuss the Rayleigh criterion. Light diffracts as it moves through space, bending around obstacles, interfering constructively and destructively.
The Rayleigh criterion for the diffraction limit to resolution states that two images are just resolvable when the center of the diffraction pattern of one is directly over the first minimum of the diffraction pattern of the other. The first minimum is at an angle of θ = 1.22 λ/D. The Rayleigh criterion is the generally accepted criterion for the minimum resolvable detail: the imaging process is said to be diffraction-limited when the first diffraction minimum of the image of one source point coincides with the maximum of another.
|
# Chapter 3 Matrices
Matrices is one of the most fundamental chapters of Mathematics, useful not only in mathematics itself but also in other branches such as mechanics, optics and computer science, since it makes tedious calculations much easier. Topics covered in this chapter are notation, order, types and equality of matrices, the zero matrix, the transpose, symmetric and skew-symmetric matrices, addition and multiplication of matrices and their properties, row and column operations, invertible matrices and the proof of uniqueness of the inverse, if it exists.
|
# Rubik's cube computing algorithm
Is it possible to solve a 3x3 Rubik's cube using a tree similar to a 'mini-max game tree' (listing all possible unique moves, then listing further moves from this and so on), or is the sample space too large, given that most combinations require only 17/18 moves to be solved (and the maximum is proved to be 20)?
• The sample space is finite, hence there is no obstacle – Hagen von Eitzen Nov 28 '14 at 10:38
• @HagenvonEitzen Yes there are obstacles: Assuming one move consists of a quarter or a 180° turn, there are 18 possible moves. If you want to list all possible move 'paths' up to a length of 20 we will get a tree that has about $1.3 \cdot 10^{25}$ nodes on the last layer. Assuming we need 1 byte (which is way too few) for each state we end up needing more than $10^{13}$ terabytes of memory. This is where I see a possible obstacle. – flawr Nov 28 '14 at 10:44
• @flawr the question asks if it is possible, not whether it is feasible – AakashM Nov 28 '14 at 10:54
• @AakashM I meant feasible, not just possible. – ghosts_in_the_code Nov 28 '14 at 11:20
• @flawr You don't need to store the entire tree. As long as the last node is present, it should be enough to calculate the next node. Once a node is not needed anymore, it can be deleted. So memory should not be a problem; I'm asking about the time. – ghosts_in_the_code Nov 28 '14 at 11:23
|
Q)
# If $P=\{x:sinx-cosx=\sqrt2 cosx\}$ and $Q=\{x:sinx+cosx=\sqrt 2 sinx\}$ then which of the following is true ?
(A) $P\subset Q$
(B) $Q-P\neq \phi$
(C) $Q\subset P$
(D) $P=Q$
|
## Category: On Being a Mathematician
### So Long, and Thanks for All the Theorems
Since I first learned about graph theory in my intro algorithms class, I have been intensely focused towards learning more about graphs and doing mathematics as a passion and a profession. At the end of this semester, I close the book on that part of my career, leave my academic position, and return to my original plan: developing software.
Read the rest of this entry »
### On the arXiv
Whenever I write a paper, I put it on the arXiv. The arXiv is an open-access paper repository run by Cornell University. It’s pretty fantastic to know that almost anyone can have an immediate international audience by posting a paper to the arXiv. My use is two-fold: I upload papers and I look forward to the evening paper dump in my RSS feed five times a week. It is a great way to be actively connected to the research world. As such, I try to convince my coauthors to upload papers to the arXiv and have never had one complaint.
That is, until something interesting happened. I’ll tell the story very quickly, then talk about pros and cons for using the arXiv. This will entirely skip any copyright issues with journals, and focus on the benefits of public/private preprints. Please add your comments!
Read the rest of this entry »
### On Reproducibility
Recently, Stephen Hartke spoke about our work on uniquely $K_r$-saturated graphs, and afterwards he spoke to a very famous mathematician, who said, roughly:
“Why did you talk about the computation? You should just talk about the results and the proofs and hide the fact that you used computation.”
While this cannot be an exact quote, I believe it is a common attitude among mathematicians. While the focus is on solving problems, the use of computation is seen as a negative, so the final research papers make little mention of their computational methods. This is a huge problem in my opinion. While computations can report exact results, and sometimes even prove results, every execution is an experiment. It is unknown before the execution what the results will be, or how long it will take.
Computational combinatorics is a combination of mathematics, engineering, and science: We prove things, we build things, and we experiment. Since computer proofs are experiments, it is important that they be reproducible. Today, I want to discuss a bit about how we can improve our presentation of algorithms and computation in order to make results more reproducible.
Read the rest of this entry »
### Rainbow Arithmetic Progressions II: The Collaboration
In the previous post I briefly discussed the algorithms that played a role in the recent paper Rainbow arithmetic progressions. Today, I want to take a detour from our typical discussion of computational methods and instead discuss the collaboration that led to this paper.
It is important to explicitly say that while this paper was very collaborative, and used the expertise and hard work of many individuals, this blog post was written entirely by me during office hours and in a hurry. Any opinions, errors, or other reason to be angry with the content here is entirely my fault and not the fault of my fantastic coauthors. While I’m at it, let me actually list my coauthors by name:
Steve Butler, Craig Erickson, Leslie Hogben, Kirsten Hogenson, Lucas Kramer, Richard L. Kramer, Jephian C.-H. Lin, Ryan R. Martin, Nathan Warnberg, and Michael Young.
Read the rest of this entry »
|
# MLE method for a statistical model with two random variables
Suppose for fixed $t>0$ we have two discrete random variables $X_t$ and $X_{t+1}$ where $X_t$ takes on $11$ values $$X_t\in\{0,1,2,\cdots, 10\}.$$ Also, we have $$P(X_{t+1}=X_t)=1-\lambda,\quad P(X_{t+1}=\min\{X_t+1,10\})=\lambda.$$ What is the ML estimator for $\lambda$?
The question is motivated by a model about bus engine replacement in microeconometrics. I have searched along this Wikipedia article and this lecture note. But I don't see how to do it since two random variables are involved. Could anyone come up with a theoretical reference for such an issue (MLE method for a statistical model with two random variables)? (Or this could be trivially reduced to the one-variable case?)
• Do you just have two random variables or several consisting of the complete histories of more than one bus engine with the values of $X_1, X_2,..., X_t$ ? That would also mean that we would assume that $t$ also takes on the discrete values $1, 2, 3, ..., t$ ? And does $X_1=0$ with probability 1 ?
– JimB
Nov 29 '15 at 5:01
• I'm now focusing on the case that $t$ is fixed. Thus there are only two random variables.
– Jack
Nov 29 '15 at 14:57
This is more of an extended comment than an answer.
I wonder if you meant to give conditional probabilities such that $P(X_{n+1}=x_n | X_n = x_n)=1-\lambda$ and $P(X_{n+1}=\min{(x_n+1,10)}|X_n=x_n)=\lambda$ with $P(X_1=0)=1$.
That way the likelihood of an observed history with $t=6$ of 0 0 1 1 2 3 would be $(1-\lambda)^2 \lambda^3$ with the maximum likelihood estimator being 3/5.
In general with an arbitrary value of $t$ and $m$ occurrences where there was a change from $X_i$ to $X_{i+1}$ the likelihood would be $(1-\lambda)^{t-m-1} \lambda^m$, the maximum likelihood estimator of $\lambda$ would be $m/(t-1)$ (except when $X_t=10$ as then $X_{t+1}$ must also equal 10 resulting in no information about $\lambda$).
But if you only know say two values $X_5$ and $X_6$, then you only have a sample size of 1 and the maximum likelihood estimator of $\lambda$ is 1 if $x_6 > x_5$ and 0 otherwise (unless $x_5=10$ when means that $x_6$ also equals 10 with probability 1 and for that case there is no information about $\lambda$).
Alternatively, if you have $n$ multiple sets of $X_t$ and $X_{t+1}$ (where $X_t < 10$ and if you can assume independence among the sets),then you simply have a binomial distribution and the maximum likelihood estimator is simply the number of observations of increases (m) divided by n.
I still think you need to give more specifics.
|
Merkle root with one transaction
I am trying to build a merkle root which contains only the coinbase transaction from bitcoind
After creating the coinbase transaction, do I need to convert it to little endian, hash it twice, and then put it in the block header as it is, or must I convert it again before I put it in the block header?
I mean:
step 1: convert coinbase transaction to little endian
step 2 : double SHA256
step 3 : reconvert to little endian or leave it as it is?
step 4 : put it on the blockheader
I am a little confused
Thank you for help
I think you may be missing that when a node in the Merkle tree doesn't have a hashing partner it is concatenated with itself before hashing. To calculate the Merkle root of a transaction tree with a single transaction txid0, you would need to hash:
Merkle root = sha256d(txid0 + txid0)
I'm not sure about the endianess conversions.
ah, thank you, i thought we do this when it's an odd number of transactions, and according to the documentation, it says when it contains only the coinbase transaction, the merkle root will be the same – Hamita – 2020-07-24T18:14:27.590
I put the answer for my question here, if someone else need it, the right answer is: Convert before hashing and after hashing
|
# Approximating the variance of the integral of a white noise Gaussian process
Let $X(t)$ be a stationary Gaussian process with mean $\mu$, variance $\sigma^2$ and stationary correlation function $\rho(t_1-t_2)$. If $X(t)$ is a white noise process the correlation function is given by the Dirac delta function $\rho(t_1-t_2) = \delta(t_1-t_2)$.
The integral of this process is given by:
$$I = \int_0^L X(t) \, dt$$
According to this CrossValidated post the variance of $I$ is given by:
$$\text{Var}[I] = L\sigma^2$$
However this does not agree with the results I obtained through simulation. The approach is to discretise the white noise Gaussian process into $N$ independent normal variables. The integral can then be approximated through:
$$I = \int_0^L X(t) \approx \frac{L}{N}\sum_{i=1}^NX_i$$
Where $X_i$ are indepedent random variables $X_i \sim \mathcal{N}(\mu,\sigma^2)$. In simulation I find that as $N$ grows large then $\text{Var}[I] \rightarrow 0$. Why does it not approach $L\sigma^2$? What is the problem with my approximation?
The disparity arises from the fact that your discretization of the continuous process does not assign the appropriate variance to the $X_i$. Here's the key (for heuristics, see here, Section 3.2):
If $\{X(t)\}_{t\in\mathbb{R}}$ is a continuous Gaussian process such that \begin{align}&E[X(t)]=0\\ &E[X(t)X(t')]=\sigma^2 \delta(t-t')\end{align} then a discrete sampling of the process, with a uniform sampling interval $\Delta$, viz., $$X(t),X(t+1\Delta),X(t+2\Delta),...$$ is simulated as an i.i.d sequence of Gaussian$(\text{mean}=0,\text{variance}=\frac{\sigma^2}{\Delta})$ random variables, the quality of the simulation increasing as $\Delta\to 0$.
(The power spectral density of the i.i.d. sequence approaches that of the continuous process that it simulates -- flat and equal to the same constant value $\sigma^2$ -- except that for the i.i.d. sequence it is "band limited", i.e. vanishing outside of a finite-width interval.)
Thus, in the present case, to simulate $$I = \int_0^L X(t) \, dt \approx \sum_{i=1}^N X(i\Delta)\,\Delta$$ where $\Delta=\frac{L}{N}$, one would use $$\hat{I} = \Delta\sum_{i=1}^N X_i$$ with $X_1,X_2,\ldots$ i.i.d. $\text{Gaussian}(\text{mean}=\mu,\text{variance}=\frac{\sigma^2}{\Delta})$. Then \begin{align} \text{Var}[\hat{I}] = \Delta^2\,N\,\frac{\sigma^2}{\Delta} = (N\Delta)\sigma^2 = L\sigma^2. \end{align}
I think somewhere between your first StackExchange post reference in your CrossValidated post there's been some confusion about the process in question.
It seems to me the answer to your CrossValidated post regards a Wiener process/Brownian motion with independent increments in the process. Or perhaps the person providing that answer mistook the integral of the process you describe for such a process.
To me your numerical result seems correct.
• This Wikipage states that the Wiener process $W(t)$ is the integral of a white noise Gaussian process (with unit variance?). So I am interested in the Wiener process at point L i.e. $W(L)$. The variance of the Wiener process is $t$ and so $\text{Var}[W(L)] = L$ and so if the white noise process has variance $\sigma^2$ then $\text{Var}[W(L)] = L\sigma^2$ which agrees with the cross-validated post. I think there is a problem with my approximation. @KeithWM
– egg
Aug 15 '16 at 13:50
|
Let $$d\in \mathbb N \text{ and } f(z)= \sum_{\alpha\in \mathbb{N}_0^d} c_\alpha z^\alpha$$ be a convergent multivariable power series in $$z=(z_1,\dots,z_d)$$. We present two independent conditions on the positive coefficients $$c_\alpha$$ which imply that $$f(z)=\frac{1}{1-\sum_{\alpha\in \mathbb{N}_0^d} q_\alpha z^\alpha}$$ for non-negative coefficients $q_\alpha$. It turns out that functions of the type
$$f(z)= \int_{[0,1]^d} \frac{1}{1-\sum_{j=1}^d t_j z_j} d\mu(t)$$ satisfy one of our conditions, whenever $$d\mu(t) = d\mu_1(t_1) \times \dots \times d\mu_d(t_d)$$ is a product of probability measures $$\mu_j \text{ on }[0,1]$$. The results have applications in the theory of Nevanlinna-Pick kernels. This is joint work with Jesse Sautel.
|
# zbMATH — the first resource for mathematics
The length of a set in the sphere whose polynomial hull contains the origin. (English) Zbl 0762.32007
Definition. (a) A set $$X\subset\mathbb{R}^k$$ is 1-rectifiable if it is the image of a bounded subset $$U\subset\mathbb{R}$$ under a Lipschitz continuous mapping $$f:U\to\mathbb{R}^k$$.
(b) $$X$$ is $$({\mathcal H}^ 1,1)$$-rectifiable if $${\mathcal H}^ 1(X)<\infty$$ and $${\mathcal H}^ 1$$-almost all of $$X$$ can be covered by a countable union of 1-rectifiable sets.
$${\mathcal H}^ 1$$ denotes here a 1-dimensional Hausdorff measure.
The main result of the paper is Theorem. If $$X$$ is a compact $$({\mathcal H}^ 1,1)$$-rectifiable subset of the unit sphere $$S\subset\mathbb{C}^ n$$ such that the origin $$0\in\mathbb{C}^ n$$ belongs to the polynomial hull $$\widehat X$$, then $${\mathcal H}^ 1(X)\geq 2\pi$$.
##### MSC:
32E20 Polynomial convexity, rational convexity, meromorphic convexity in several complex variables
##### Keywords:
set in the sphere; polynomial hull
|
# Exercise 1: Graphs
For this exercise, we will practice plotting different types of graphs using multiple built-in datasets in R.
## PlantGrowth
• Take a look at the PlantGrowth dataset by typing it in the console.
• Use the table() function to get counts of the number of plants in each treatment group.
• Your output should be as follows
table(PlantGrowth$group)
##
## ctrl trt1 trt2
##   10   10   10
### Histogram
• Plot one histogram of dried weight (weight) of plants from all 3 treatment groups.
• Your histogram should look similar (might not be exactly the same) to the plot below.
hist(x = PlantGrowth$weight, probability = TRUE, border = "dodgerblue",
main = "Histogram of the Plants' Dried Weight",
xlab = "Dried Weight (lbs.)", breaks = 10)
grid()
box()
### Boxplot
• To compare the treatments, we want to have a plot that can compare the dried weight of plants from each treatment group.
• Plot a boxplot of Weight vs. Group.
• Your plot should look like this:
boxplot(formula = weight ~ group, data = PlantGrowth,
main = "Boxplot of Plants' Dried Weights vs. Treatments",
ylab = "Dried Weight (lbs.)", xlab = "Treatment Group",
col = c("darkorange", "dodgerblue", "lightgrey"))
grid()
## trees
• Next, we’re taking a look at the trees dataset.
### Scatterplot
• Plot a scatterplot of Height vs. Girth (tree diameter).
• An example plot is shown below.
plot(formula = Height ~ Girth, data = trees,
main = "Black Cherry Trees Girth vs Height",
xlab = "Girth (in)", ylab = "Height (ft)",
col = "dodgerblue", pch = 19)
grid()
### Boxplot
• To get summary statistics from a vector (or a column of a data frame, which is a vector), we can use fivenum().
• For example, the five main summary statistics of Black Cherry Trees’ Girth are
fivenum(trees$Girth)
## [1] 8.30 11.05 12.90 15.25 20.60
• That means: min = 8.3, q1 = 11.05, median = 12.9, q3 = 15.25, max = 20.6.
• We will use these numbers to divide the dataset into 4 groups based on their Girth measurement. Create a new column Group in trees that contains the following values:
• group1: if the girth is in [8.3, 11.05)
• group2: [11.05, 12.90)
• group3: [12.90, 15.25)
• group4: [15.25, 20.60]
• So how do we do that in R?
• We can use a function called cut(). Google this function or look at the documentation by typing ?cut in the RStudio console.
• The data frame trees should look like this after you add the Group column.
trees$Group <- cut(trees$Girth, breaks = fivenum(trees$Girth),
labels = c("group1", "group2", "group3", "group4"),
right = FALSE, include.lowest = TRUE)
trees
## Girth Height Volume Group
## 1 8.3 70 10.3 group1
## 2 8.6 65 10.3 group1
## 3 8.8 63 10.2 group1
## 4 10.5 72 16.4 group1
## 5 10.7 81 18.8 group1
## 6 10.8 83 19.7 group1
## 7 11.0 66 15.6 group1
## 8 11.0 75 18.2 group1
## 9 11.1 80 22.6 group2
## 10 11.2 75 19.9 group2
## 11 11.3 79 24.2 group2
## 12 11.4 76 21.0 group2
## 13 11.4 76 21.4 group2
## 14 11.7 69 21.3 group2
## 15 12.0 75 19.1 group2
## 16 12.9 74 22.2 group3
## 17 12.9 85 33.8 group3
## 18 13.3 86 27.4 group3
## 19 13.7 71 25.7 group3
## 20 13.8 64 24.9 group3
## 21 14.0 78 34.5 group3
## 22 14.2 80 31.7 group3
## 23 14.5 74 36.3 group3
## 24 16.0 72 38.3 group4
## 25 16.3 77 42.6 group4
## 26 17.3 81 55.4 group4
## 27 17.5 82 55.7 group4
## 28 17.9 80 58.3 group4
## 29 18.0 80 51.5 group4
## 30 18.0 80 51.0 group4
## 31 20.6 87 77.0 group4
• Now, plot a boxplot of Height vs. Group.
• Your plot should look similar to this.
boxplot(formula = Height ~ Group, data = trees,
main = "Boxplot of Girth Group vs. Height",
xlab = "Girth Group", ylab = "Height (ft)",
col = c("purple", "pink", "violet", "lightgrey"))
## warpbreaks
• Use str() to take a quick look at the warpbreaks dataset.
• Use table() to count how many experiments were done for each combination of wool and tension.
• Your output should be as follows.
## tension
## wool L M H
## A 9 9 9
## B 9 9 9
### Histogram
• Plot a histogram of the number of breaks (breaks) for all combinations of wool and tension.
• Your graph should look similar to the plot shown below.
hist(x = warpbreaks$breaks, main = "Histogram of Number of Breaks",
probability = TRUE, breaks = 10,
xlab = "Number of Breaks", border = "darkorange")
### Boxplot
• Plot a boxplot of breaks vs. tension.
• Your plot might look like this.
boxplot(formula = breaks ~ tension, data = warpbreaks,
main = "Boxplot of Number of Breaks vs. Tension Level",
xlab = "Tension Level", ylab = "Numeber of Breaks",
col = c("darkred", "red", "darkorange"))
### Another Boxplot
• Plot a boxplot of breaks vs. wool.
• Your plot might look like this.
boxplot(formula = breaks ~ wool, data = warpbreaks,
main = "Boxplot of Number of Breaks vs. Wool Type",
xlab = "Wool Type", ylab = "Numeber of Breaks",
col = c("darkgreen", "yellowgreen"))
### Another Boxplot (Again!)
• Now, plot a boxplot of breaks vs. combination of wool and tension (hint: wool*tension).
boxplot(formula = breaks ~ wool*tension, data = warpbreaks,
main = "Boxplot of Number of Breaks vs. Wool + Tension",
xlab = "Wool Type + Tension Level", ylab = "Numeber of Breaks",
col = c("darkgreen", "yellowgreen"))
|
# ODE for functions with values in locally convex TVS
Given an ODE for a function $u \in C^1(I,V)$, where $V$ is some locally convex TVS (topological vector space) and $I \subset \mathbb{R}$, i.e.
$\frac{d}{dt} u = f(t,u)$
for some function $f: I \times V \to V$. Are there results concerning the uniqueness of the initial value problem? Can someone give me some references or outline the idea of how to prove uniqueness? What is the suitable condition on $f$ that replaces Lipschitz continuity for ODE's with values in Banach spaces?
An explicit example: When solving the heat equation $\frac{\partial u}{\partial t} - \Delta u = 0$ in the class $C^\infty(\mathbb{R}^+, S'(\mathbb{R}^n))$ using the Fourier transform ($S'$ denotes the tempered distributions), one gets an ODE
$\frac{d}{dt} u = -|\cdot|^2 u$.
Does the initial value problem have a unique solution?
-
Bruce Driver's book on Analysis has an entire Part devoted to calculus and ODEs on Banach Spaces. – Willie Wong May 2 '12 at 11:34
For the issue of uniqueness, Lemmert's paper sciencedirect.com/science/article/pii/0362546X86901094 seems to address it (partially). – Willie Wong May 2 '12 at 11:45
|
# 3. Budgeted sales for the first six months for Meixner Corp. are listed below: APRIL 7,000...
###### Question:
3. Budgeted sales for the first six months for Meixner Corp. are listed below:
UNITS: JANUARY 6,000; FEBRUARY 7,000; MARCH 8,000; APRIL 7,000; MAY 5,000; JUNE 4,000
Meixner Corp. has a policy of maintaining an inventory of finished goods equal to 40 percent of the next month's budgeted sales. If Meixner Corp. plans to produce 6,000 units in June, what are budgeted sales for July?
a. 3,600 units
b. 1,000 units
c. 9,000 units
d. 8,000 units
4. Peck Co. manufactures card tables. The company has a policy of maintaining a finished goods inventory equal to 40 percent of the next month's planned sales. Each card table requires 3 hours of labor. The budgeted labor rate for the coming year is $13 per hour. Planned sales for the months of April, May, and June are respectively 4,000; 5,000; and 3,000 units. The budgeted direct labor cost for June for Peck Co. is $136,500. What are budgeted sales for July for Peck Co.?
a. 3,500 units
b. 4,250 units
c. 4,000 units
d. 3,750 units
|
# galpy.util.plot.scatterplot¶
galpy.util.plot.scatterplot(x, y, *args, **kwargs)[source]
NAME:
scatterplot
PURPOSE:
make a ‘smart’ scatterplot that is a density plot in high-density regions and a regular scatterplot for outliers
INPUT:
x, y
xlabel - (raw string!) x-axis label, LaTeX math mode, no $s needed
ylabel - (raw string!) y-axis label, LaTeX math mode, no $s needed
xrange
yrange
bins - number of bins to use in each dimension
weights - data-weights
aspect - aspect ratio
conditional - normalize each column separately (for probability densities, i.e., cntrmass=True)
gcf=True does not start a new figure (does change the ranges and labels)
contours - if False, don’t plot contours
justcontours - if True, only draw contours, no density
cntrcolors - color of contours (can be array as for dens2d)
cntrlw, cntrls - linewidths and linestyles for contour
cntrSmooth - use ndimage.gaussian_filter to smooth before contouring
levels - contour-levels; data points outside of the last level will be individually shown (so, e.g., if this list is descending, contours and data points will be overplotted)
onedhists - if True, make one-d histograms on the sides
onedhistx - if True, make one-d histograms on the side of the x distribution
onedhisty - if True, make one-d histograms on the side of the y distribution
onedhistcolor, onedhistfc, onedhistec
onedhistxnormed, onedhistynormed - normed keyword for one-d histograms
onedhistxweights, onedhistyweights - weights keyword for one-d histograms
cmap= cmap for density plot
hist= and edges= - you can supply the histogram of the data yourself, this can be useful if you want to censor the data, both need to be set and calculated using scipy.histogramdd with the given range
retAxes= return all Axes instances
OUTPUT:
plot to output device, Axes instance(s) or not, depending on input
HISTORY:
2010-04-15 - Written - Bovy (NYU)
|
# Overview¶
The feed package provides useful tools when building trading environments. The primary reason for using this package is to help build the mechanisms that generate observations from an environment. Therefore, it is fitting that their primary location of use is in the Observer component. The Stream API provides the granularity needed to connect specific data sources to the Observer.
# What is a Stream?¶
A Stream is the basic building block for the DataFeed, which is also a stream itself. Each stream has a name and a data type and they can be set after the stream is created. Streams can be created through the following mechanisms:
• generators
• iterables
• sensors
• direct implementation of Stream
For example, suppose you want to make a stream for a simple counter: it will start at 0, increment by 1 each time it is called, and set the count back to 0 on reset. The following code accomplishes this functionality by creating a generator function.
from tensortrade.feed import Stream
def counter():
    i = 0
    while True:
        yield i
        i += 1
s = Stream.source(counter)
In addition, you can also use the built-in count generator from the itertools package.
from itertools import count
s = Stream.source(count(start=0, step=1))
These will all create infinite streams that will keep incrementing by 1 to infinity. If you wanted to make something that counted until some finite number you can use the built in range function.
s = Stream.source(range(5))
This can also be done by giving in a list directly.
s = Stream.source([1, 2, 3, 4, 5])
The direct approach to stream creation is by subclassing Stream and implementing the forward, has_next, and reset methods. If the stream does not hold stateful information, then reset is not required to be implemented and can be ignored.
class Counter(Stream):
    def __init__(self):
        super().__init__()
        self.count = None

    def forward(self):
        if self.count is None:
            self.count = 0
        else:
            self.count += 1
        return self.count

    def has_next(self):
        return True

    def reset(self):
        self.count = None
s = Counter()
There is also a way of creating streams which serves the purpose of watching a particular object and how it changes over time. This can be done through the sensor function. For example, we can use this to directly track performance statistics on our portfolio. Here is a specific example of how we can use it to track the number of orders that are currently active inside the order management system.
from tensortrade.env.default.actions import SimpleOrders
action_scheme = SimpleOrders()
s = Stream.sensor(action_scheme.broker, lambda b: len(b.unexecuted))
As the agent and the environment interact with one another, this stream will be able to monitor the number of active orders being handled by the broker. This stream can then be used either to compute performance statistics and supply them to a Renderer, or simply to be included within the observation space.
Now that we have seen the different ways we can create streams, we need to understand the ways in which we can aggregate new streams from old. This is where the data type of a stream becomes important.
# Using Data Types¶
The purpose of the data type of a stream, dtype, is to add additional functionality and behavior to a stream such that it can be aggregated with other streams of the same type in an easy and intuitive way. For example, what if the number of executed orders from the broker is not important by itself, but is important with respect to the current time of the process. This can be taken into account if we create a stream for keeping count of the active orders and another one for keeping track of the step in the process. Here is what that would look like.
from itertools import count
n = Stream.source(count(0, step=1), dtype="float")
n_active = Stream.sensor(action_scheme.broker, lambda b: len(b.unexecuted), dtype="float")
s = (n_active / (n + 1)).rename("avg_n_active")
Suppose we find that this is not a useful statistic and instead would like to know how many of the active order have been filled since the last time step. This can be done by using the lag operator on our stream and finding the difference between the current count and the count from the last time step.
n_active = Stream.sensor(action_scheme.broker, lambda b: len(b.unexecuted), dtype="float")
s = (n_active - n_active.lag()).rename("n_filled")
As you can see from the code above, we were able to make more complex streams by using simple ones. Take note, however, of the way we use the rename function. We only really want to rename a stream if we will be using it somewhere else where its name will be useful (e.g. in our feed). We do not want to name all the intermediate streams that are used to build our final statistic, because the code would become too cumbersome and annoying. To avoid these complications, streams automatically generate a unique name on instantiation. We leave it to the user to decide which streams are useful to name.
Since the most common data type is float in these tasks, the following is a list of supported special operations for it:
• Let s, s1, s2 be streams.
• Let c be a constant.
• Let n be a number.
• Unary:
• -s, s.neg()
• abs(s), s.abs()
• s**2, pow(s, n)
• Binary:
• s1 + s2, s1.add(s2), s + c, c + s
• s1 - s2, s1.sub(s2), s - c, c - s
• s1 * s2, s1.mul(s2), s * c, c * s
• s1 / s2, s1.div(s2), s / c, c / s
There are many more useful functions that can be utilized, in fact too many to list here. You can find all of them, however, in the API reference section of the documentation.
The Stream API is very robust and can handle complex streaming operations, particularly for the float data type. Some of the more advanced usages include performance tracking and developing reward schemes for the default trading environment. In the following example, we will show how to track the net worth of a portfolio. This implementation will be coming directly from the wallets that are defined in the portfolio.
# Suppose we have an already constructed portfolio object, portfolio.
worth_streams = []
for wallet in portfolio.wallets:
    total_balance = Stream.sensor(
        wallet,
        lambda w: w.total_balance.as_float(),
        dtype="float"
    )
    symbol = wallet.instrument.symbol
    if symbol == portfolio.base_instrument.symbol:
        worth_streams += [total_balance]
    else:
        price = Stream.select(
            wallet.exchange.streams(),
            lambda s: s.name.endswith(symbol)
        )
        worth_streams += [(price * total_balance)]
net_worth = Stream.reduce(worth_streams).sum().rename("net_worth")
|
# Kernel regression
Kernel regression is a non-parametric technique in statistics for estimating the conditional expectation of a random variable.
In any nonparametric regression, the conditional expectation of a variable $Y$ relative to a variable $X$ may be written:
$\operatorname{E}(Y | X)=m(X)$
where $m$ is a non-parametric function.
Nadaraya (1964) and Watson (1964) proposed to estimate $m$ as a locally weighted average, using a kernel as a weighting function. The Nadaraya–Watson estimator is:
$\widehat{m}_h=\frac{n^{-1}\sum_{i=1}^nK_h(x-X_i)Y_i }{n^{-1}\sum_{i=1}^nK_h(x-X_i)}$
where $K$ is a kernel with a bandwidth $h$.
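For concreteness, one common choice (an assumption here; the article does not fix a particular kernel) is the Gaussian kernel, rescaled by the bandwidth:
$K_h(u) = \frac{1}{h}K\!\left(\frac{u}{h}\right), \qquad K(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}.$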
## Statistical implementation
kernreg2 y x, bwidth(.5) kercode(3) npoint(500) gen(kernelprediction gridofpoints)
|
1. anonymous
2. anonymous
use the distance formula $d =\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$
3. anonymous
are u sure that is the right formula to use ?
4. anonymous
x1, y1 = -2, -4 and x2, y2 = 4, 3
5. anonymous
whats so with that logic i was right ?
6. anonymous
$d= \sqrt{4-(-2)^2+(3-(-4)^2)}$
7. anonymous
$d= \sqrt{4-(4) + (3-16)}$
8. anonymous
$d= \sqrt{0 -13}$
9. anonymous
This ends up as a complex number d= 13i
10. anonymous
have you studied complex numbers?
11. anonymous
naw lol i just really cant get this question wrong :p
12. anonymous
maybe I have made a mistake, its not in your four options
13. anonymous
but those are the only options i have it has to be there :(
14. anonymous
one second I will get someone else to look over my results, but I have doubled checked it, and there are no mistakes. @Thesmartone
15. anonymous
@Thesmarterone need you to check my results
16. anonymous
hes offfline
17. anonymous
@Black_ninja123 @triciaal
18. Black_ninja123
One sec let me check
19. Black_ninja123
B is correct
20. anonymous
What does B is correct mean?
21. Black_ninja123
22. anonymous
23. anonymous
yea can u prove it ?
24. Black_ninja123
$d(A,B)=\sqrt{(x_2-x_1)^{2}+(y_2-y_1)^{2}}$ This is the formula
25. anonymous
wait @HelpOfTheGods
26. anonymous
ok yea i seen that formula in my lesson so im thinking thats prob right since i have 0 clue lol
27. Black_ninja123
$\sqrt{(-2 - 4)^{2}+(-4-3)^{2}}$
28. anonymous
@Black_ninja123 chose the points in reverse
29. Black_ninja123
$\sqrt{36+49}$
30. Black_ninja123
$\sqrt{85}$
31. Black_ninja123
Square root of 85 is 9.22
32. anonymous
was anything I did @Black_ninja123 incorrect?
33. Black_ninja123
you said -2^2 is 4 but its actually -4
34. anonymous
are you joking me? -2 x -2 = 4 not -4!
35. Black_ninja123
Use the calculator and its gonna tell you -4
36. anonymous
I appreciate your help on this one @Black_ninja123
37. anonymous
I entered (-2)x (-2) and it still gives me +4
38. Black_ninja123
enter -2^2
39. anonymous
still +4
40. Black_ninja123
ok
41. anonymous
I can change the rules of arithmetic for integers if you like
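For reference, a clean application of the distance formula to the points in question, $(-2,-4)$ and $(4,3)$, confirms the result above:
$d = \sqrt{(4-(-2))^2 + (3-(-4))^2} = \sqrt{6^2 + 7^2} = \sqrt{85} \approx 9.22$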
|
Hurwitz equation
Markoff–Hurwitz equation, Markov–Hurwitz equation
A Diophantine equation (cf. Diophantine equations) of the form
$$x_1^2 + \cdots + x_n^2 = a x_1 \cdots x_n \tag{a1}$$
for fixed $a, n \in \mathbb{Z}^{+}$. The case $n = a = 3$ was studied by A.A. Markoff [A.A. Markov] [a1] because of its relation to Diophantine approximations (cf. also Markov spectrum problem). More generally, these equations were studied by A. Hurwitz [a2]. These equations are of interest because the set of integer solutions to (a1) is closed under the action of the group of automorphisms $\mathcal{A}$ generated by the permutations of the variables $x_1, \ldots, x_n$, sign changes of pairs of variables, and the mapping
$$\sigma: (x_1, \ldots, x_n) \mapsto (x_1, \ldots, x_{n-1}, a x_1 \cdots x_{n-1} - x_n).$$
If (a1) has an integer solution and it is not the trivial solution $(0, \ldots, 0)$, then its $\mathcal{A}$-orbit is infinite. Hurwitz showed that if (a1) has a non-trivial integer solution, then $a \leq n$; and if $a = n$, then the full set of integer solutions is the $\mathcal{A}$-orbit of $(1, \ldots, 1)$ together with the trivial solution. N.P. Herzberg [a3] gave an efficient algorithm to find pairs $(a, n)$ for which the Hurwitz equation has a non-trivial solution. Hurwitz also showed that for any pair $(a, n)$ there exists a finite set of fundamental solutions such that the orbits are distinct and the set of non-trivial integer solutions is exactly the union of these orbits. A. Baragar [a4] showed that for any $N$ there exists a pair $(a, n)$ such that (a1) has at least $N$ fundamental solutions.
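For a concrete illustration (the classical Markov case $n = a = 3$, supplied here as an example), the mapping $\sigma$ together with permutations of the variables generates the tree of Markov triples:
$$x^2 + y^2 + z^2 = 3xyz: \qquad (1,1,1) \to (1,1,2) \to (1,2,5) \to (1,5,13) \to \cdots$$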
D. Zagier [a5] investigated the asymptotic growth for the number of solutions to the Markov equation ($n = a = 3$) below a given bound, and Baragar [a6] investigated the cases $n \geq 4$.
There are a few variations to the Hurwitz equations which admit a similar group of automorphisms. These include variations studied by L.J. Mordell [a7] and G. Rosenberger [a8]. L. Wang [a9] studied a class of smooth variations.
References
[a1] A.A. Markoff, "Sur les formes binaires indéfinies" Math. Ann., 17 (1880) pp. 379–399
[a2] A. Hurwitz, "Über eine Aufgabe der unbestimmten Analysis" Archiv. Math. Phys., 3 (1907) pp. 185–196 (Also: Mathematische Werke, Vol. 2, Chapt. LXX (1933 and 1962), 410–421)
[a3] N.P. Herzberg, "On a problem of Hurwitz" Pacific J. Math., 50 (1974) pp. 485–493
[a4] A. Baragar, "Integral solutions of Markoff–Hurwitz equations" J. Number Th., 49 : 1 (1994) pp. 27–44
[a5] D. Zagier, "On the number of Markoff numbers below a given bound" Math. Comp., 39 (1982) pp. 709–723
[a6] A. Baragar, "Asymptotic growth of Markoff–Hurwitz numbers" Compositio Math., 94 (1994) pp. 1–18
[a7] L.J. Mordell, "On the integer solutions of the equation " J. London Math. Soc., 28 (1953) pp. 500–510
[a8] G. Rosenberger, "Über die Diophantische Gleichung " J. Reine Angew. Math., 305 (1979) pp. 122–125
[a9] L. Wang, "Rational points and canonical heights on K3-surfaces in " Contemp. Math., 186 (1995) pp. 273–289
How to Cite This Entry:
Hurwitz equation. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Hurwitz_equation&oldid=16935
This article was adapted from an original article by A. Baragar (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
|
# Computation graphs and graph computation
Research has begun to reveal many algorithms can be expressed as matrix multiplication, suggesting an unrealized connection between linear algebra and computer science. I speculate graphs are the missing piece of the puzzle. Graphs are not only useful as cognitive aides, but are suitable data structures for a wide variety of tasks, particularly on modern parallel processing hardware.
In this essay, I explore the virtues of graphs, algebra, types, and show how these concepts can help us reason about programs. I propose a computational primitive based on graph signal processing, linking software engineering, graphs, and linear algebra. Finally, I share my predictions for the path ahead, which I consider to be the start of an exciting new chapter in computing history.
n.b.: None of these ideas are mine alone. Shoulders of giants. Follow the links and use landscape mode for optimal reading experience.
Over the last decade, I bet on some strange ideas. A lot of people I looked up to at the time laughed at me. I’ll bet they aren’t laughing anymore. I ought to thank them one day, because their laughter gave me a lot of motivation. I’ve said some idiotic things to be sure, but I’ve also made some laughable predictions that were correct. Lesson learned: aim straighter.
In 2012, I was in Austin sitting next to an ex-poker player named Amir who was singing Hinton’s praises. Hypnotized by his technicolor slides, I quit my job in a hurry and started an educational project using speech recognition and restricted Boltzmann machines. It never panned out, but I learned a lot about ASR and Android audio. Still love that idea.
In 2017, I started writing a book on the ethics of automation and predicted mass unemployment and social unrest. Although I got the causes wrong (pandemic, go figure), the information economy and confirmation bias takes were all dead right. Sadly, this is now driving the world completely insane. Don’t say I warned you, go out and fix our broken systems. The world needs more engineers who care.
In 2017, I witnessed the birth of differentiable programming, which I stole from Chris Olah and turned into a master’s thesis. Had a lot of trouble convincing people that classical programs could be made differentiable, but look at the proceedings of any machine learning conference today and you’ll find dozens of papers on differentiable sorting and rendering and simulation. Don’t thank me, thank Chris and the Theano guys.
In 2018, I correctly predicted Microsoft would acquire GitHub to mine code. Why MS and not Google? I’ll bet they tried, but Google’s leadership had fantasies of AGI and besides JetBrains, MS were the only ones who gave a damn about developers. Now ML4SE is a thriving research area and showing up in real products, much to the chagrin of those who believed ML was a fad. I suspect their hype filter blinded them to the value those tools provide.
But to heck with everything I’ve said! If I had just one idea to share with these ML people, it would be types. Beat that drum as loud as I could. Types are the best tool we know for synthetic reasoning. If you want to build provably correct systems that scale on real-world applications, use types. Not everyone is convinced yet, but mark my words, types are coming. Whoever figures out how to connect types and learning will be the next Barbara Liskov or Frances Allen.
This year, I predicted the pandemic weeks before the lockdown, exited the market, and turned down a job at Google. Some people called me crazy. Now I’m going all-in on some new ideas (none of which are mine). I’m making some big bets and some will be wrong, but I see the very same spark of genius in them.
# Everything old is new again
As a kid, I was given a book on the history of mathematics. I remember it had some interesting puzzles, including one with some bridges in a town divided by rivers, once inhabited by a man called Euler. Was there a tour crossing each bridge exactly once? Was it possible to tell without checking every path? I remember spending days trying to figure out the answer.
In the late 90s, my mom and I went to Ireland. I remember visiting Trinity College, and learning about a mathematician called Hamilton who discovered a famous formula connecting algebra and geometry, and carved it onto a bridge. We later visited the bridge, and the tour guide pointed out the stone, which we touched for good luck. The Irish have a thing for stones.
In 2007, I was applying to college and took the train from Boston to South Bend, Indiana, home of the Fighting Irish. Wandering about, I picked up a magazine article by a Hungarian mathematician called Barabási then at Notre Dame, who had some interesting things to say about complex networks. Later in 2009, while studying in Rochester, I carpooled with a nice professor, and learned complex networks are found in brains, languages and many marvelous places.
Fast forward to 2017. I was lured by the siren song of algorithmic differentiation. Olivier Breleux presented Myia and Buche. Matt Johnson gave a talk on Autograd. I met Chris Olah in Long Beach, who gave me the idea to study differentiable programming. I stole his idea, dressed it up in Kotlin and traded it for a POPL workshop paper and later a Master’s thesis. Our contributions were using algebra, shape inference and presenting AD as term rewriting.
In 2019, I joined a lab with a nice professor at McGill applying knowledge graphs to software engineering. Like logical reasoning, knowledge graphs are an idea from the first wave of AI in the 1960s and 70s which have been revived and studied in light of recent progress in the field. I believe this is an important area of research with a lot of potential. Knowledge and traceability plays a big role in software engineering, and it’s the bread-and-butter of a good IDE. The world needs better IDEs if we’re ever going to untangle this mess we’re in.
This Spring, I took a fascinating seminar on Graph Representation Learning. A lot of delightful graph theory has been worked out over the last decade. PageRank turned into power iteration. People have discovered many interesting connections to linear algebra, including Weisfeiler-Lehman graph kernels, graph Laplacians, Krylov methods, and spectral graph theory. These ideas have deepened our understanding of graph signal processing and its applications for learning and program analysis. More on that later.
# What are graphs?
Graphs are general-purpose data structures used to represent a variety of data types and procedural phenomena. Unlike most sequential languages, graphs are capable of expressing a much richer family of relations between entities, and are a natural fit for many problems in computer science, physics, biology and mathematics. Consider the following hierarchy of data structures, all of which are graphs with increasing expressive power:
As we realized in Kotlin∇, directed graphs can be used to model mathematical expressions, as well as other formal languages, including source code, intermediate representations and binary artifacts. Not only can graphs be used to describe extant human knowledge, many recent examples have shown that machines can “grow” trees and graphs for various applications, such as program synthesis, mathematical deduction and physical simulation. Recent neuro-symbolic applications have shown promising early results in graph synthesis:
The field of natural language processing has also developed a rich set of graph-based representations, such as constituency, dependency, link and other typed attribute grammars which can be used to reason about syntactic and semantic relations between natural language entities. Research has begun to show many practical applications for such grammars in the extraction and organization of human knowledge stored in large text corpora. Those graphs can be further processed into ontologies for logical reasoning.
Using coreference resolution and entity alignment techniques, we can reconstruct internally consistent relations between entities, which capture cross-corpus consensus in natural language datasets. When stored in knowledge graphs, these relations can be used for information retrieval and question answering, e.g. on wikis and other content management systems. Recent techniques have shown promise in automatic knowledge base construction (cf. Reddy et al., 2016).
Lo and behold, the key idea behind knowledge graphs is our old friend, types. Knowledge graphs are multi-relational graphs whose nodes and edges possess a type. Two entities can be related by multiple types, and each type can relate many pairs of entities. We can index an entity based on its type for knowledge retrieval, and use types to reason about compound queries, e.g. “Which company has a direct flight from a port city to a capital city?”, which would otherwise be difficult to answer without a type system.
# Induction introduction!
In this section, we will review some important concepts from Chomskyan linguistics, including finite automata, abstract rewriting systems, and λ-calculus. Readers already familiar with these concepts will gain a newfound appreciation for how each one shares a common thread and can be modeled using the same underlying abstractions.
## Regular languages
One thing that always fascinated me is the idea of inductively defined languages, also known as recursive, or structural induction. Consider a very simple language that accepts strings of the form 0, 1, 100, 101, 1001, 1010, et cetera, but rejects 011, 110, 1011, or any string containing 11. The → symbol denotes a “production”. The | symbol, which we read as “or”, is just shorthand for defining multiple productions on a single line:
true → 1
term → 0 | 10 | ε
expr → term | expr term
We have two sets of productions, those which can be expanded, called “nonterminals”, and those which can be expanded no further, called “terminals”. Notice how each non-terminal occurs at most once in any single production. This property guarantees the language is recognizable by a special kind of graph, called a finite state machine. As their name suggests, FSMs contain a finite set of states, with labeled transitions between them:
[Figure: a finite automaton, alongside a library courtesy bell ("...and wait for assistance")]
Imagine a library desk: you can wait quietly and eventually you will be served. Or, you can ring the bell once and wait quietly to be served. Should no one arrive after a while, you may press the bell again and continue waiting. Though you must never ring the bell twice, lest you disturb the patrons and be tossed out.
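As a rough Kotlin sketch (the enum, the state names and the accepts function are my own, not part of the original), the defining property of this language, that no two 1s ever appear consecutively, can be checked by folding over the input while tracking which of three states the machine occupies:
enum class State { START, SEEN_ONE, REJECT }

// Accepts binary strings containing no "11" substring
fun accepts(input: String): Boolean =
    input.fold(State.START) { state, c ->
        when {
            state == State.REJECT -> State.REJECT
            c == '1' && state == State.SEEN_ONE -> State.REJECT
            c == '1' -> State.SEEN_ONE
            else -> State.START
        }
    } != State.REJECT
For instance, accepts("1010") returns true, while accepts("1011") returns false.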
Regular languages can also model nested repetition. Consider a slightly more complicated language, given by the regular expression (0(01)*)*(10)*. The *, or Kleene star, means, “accept zero or more of the previous token”.
Backus-Naur Grammar:
t → ε | 0
a → 10 | a 10
b → 0 | b 01 | b 0
[Figure: the corresponding nondeterministic finite automaton]
Note here, a single symbol may have multiple transitions from the same state. Called a nondeterministic finite automaton (NFA), this machine can occupy multiple states simultaneously. While no more powerful than their deterministic cousins, NFAs often require far fewer states to recognize the same language. One way to implement an NFA is to simulate the superposition of all states, by cloning the machine whenever such a transition occurs. More on that later.
## Arithmetic
Now suppose we have a slightly more expressive language that accepts well-formed arithmetic expressions with up to two variables, in either infix or unary prefix form. In this language, a non-terminal may occur twice inside a single production – an expr can be composed of two subexprs:
term → 1 | 0 | x | y
op → + | - | ·
expr → term | op expr | expr op expr
This is an example of a context-free language (CFL). We can represent strings in this language using a special kind of graph, called a syntax tree. Each time we expand an expr with a production rule, this generates a rooted subtree on op, whose branch(es) are exprs. Typically, syntax trees are inverted, with branches extending downwards, like so:
[Figure: a syntax tree, alongside a peach tree]
While syntax trees can be interpreted computationally, they do not actually perform computation unless evaluated. To [partially] evaluate a syntax tree, we will now introduce some pattern matching rules. Instead of just allowing terminals to occur on the right-hand side of a production, suppose we also allow terminals on the left, and applying a rule can shrink a string in our language. Here, we use capital letters on the same line to indicate an exact match, e.g. a rule U + V → V + U would replace x + y with y + x:
E + E → +E
E · E → ·E
E + 1 | 1 + E | +1 | -0 | ·1 → 1
E + 0 | 0 + E | E - 0 → E
E - E | E · 0 | 0 · E | 0 - E | +0 | -1 | ·0 → 0
If we must add two identical expressions, why evaluate them twice? If we need to multiply an expression by 0, why evaluate it at all? Instead, we will try to simplify these patterns whenever we encounter them. This is known as a rewrite system, which we can think of as grafting or pruning the branches of a tree. Some say, “all trees are DAGs, but not all DAGs are trees”. I prefer to think of a DAG as a tree with a gemel:
[Figure: a rewrite rule, alongside a deformed tree]
Let us now introduce a new operator, Dₓ, and some corresponding rules. In effect, these rules will push Dₓ as far towards the leaves as possible, while rewriting terms along the way. We will also introduce some terminal rewrites:
[R0] term → Dₓ(term)
[R1] Dₓ(x) → 1
[R2] Dₓ(y) → 0
[R3] Dₓ(U+V) → Dₓ(U) + Dₓ(V)
[R4] Dₓ(U·V) → U·Dₓ(V) + Dₓ(U)·V
[R5] Dₓ(+U) → +Dₓ(U)
[R6] Dₓ(-U) → -Dₓ(U)
[R7] Dₓ(·U) → +U·Dₓ(U)
[R8] Dₓ(1) → 0
[R9] Dₓ(0) → 0
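A minimal Kotlin sketch of rules R1–R4, R8 and R9 (the expression type and function below are my own; the unary rules R5–R7 would be handled analogously), with expressions encoded as an algebraic data type and Dₓ pushed recursively toward the leaves:
sealed class Expr
object X : Expr()
object Y : Expr()
object One : Expr()
object Zero : Expr()
data class Sum(val l: Expr, val r: Expr) : Expr()
data class Prod(val l: Expr, val r: Expr) : Expr()

fun d(e: Expr): Expr = when (e) {
    X -> One                                              // R1: Dₓ(x) → 1
    Y -> Zero                                             // R2: Dₓ(y) → 0
    One, Zero -> Zero                                     // R8, R9
    is Sum -> Sum(d(e.l), d(e.r))                         // R3
    is Prod -> Sum(Prod(e.l, d(e.r)), Prod(d(e.l), e.r))  // R4
}
For example, d(Prod(X, Y)) rewrites to Sum(Prod(X, Zero), Prod(One, Y)), which the simplification rules above (e.g. E · 0 → 0, 0 + E → E) would further reduce.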
Although we assign an ordering R0-R9 for notational convenience, an initial string, once given to this system, will always converge to the same result, no matter in which order we perform the substitutions (proof required):
[Figure: term confluence, alongside the Ottawa-St. Lawrence confluence]
This feature, called confluence, is an important property of some rewrite systems: regardless of the substitution order, we will eventually arrive at the same result. If all strings in a language reduce to a form which can be simplified no further, we call such systems strongly normalizing, or terminating. If a rewriting system is both confluent and terminating it is said to be convergent.
## λ-calculus
So far, the languages we have seen are capable of generating and simplifying arithmetic expressions, but are by themselves incapable of performing arithmetic, since they cannot evaluate arbitrary arithmetic expressions. We will now consider a language which can encode and evaluate any arithmetic expression:
expr → var | func | appl
func → (λ var.expr)
appl → (expr expr)
To evaluate an expr in this language, we need a single substitution rule. The notation expr[var → val], we read as, “within expr, var becomes val”:
(λ var.expr) val → (expr[var → val])
For example, applying the above rule to the expression (λy.y z) a yields a z. With this seemingly trivial addition, our language is now powerful enough to encode any computable function! Known as the pure untyped λ-calculus, this system is equivalent to an idealized computer with infinite memory.
While grammatically compact, computation in the λ-calculus is not particularly terse. In order to perform any computation, we will need a way to encode values. For example, we can encode the boolean algebra like so:
[D1] λx.λy.x = T "true"
[D2] λx.λy.y = F "false"
[D3] λp.λq.p q p = & "and"
[D4] λp.λq.p p q = | "or"
[D5] λp.λa.λb.p b a = ! "not"
To evaluate a boolean expression !T, we will first need to encode it as a λ-expression. We can then evaluate it using the λ-calculus as follows:
( ! ) T
→ (λp.λa.λb. p b a) T [D5]
→ ( λa.λb. T b a) [p → T]
→ ( λa.λb.(λx.λy.x) b a) [D1]
→ ( λa.λb.( λy.b) a) [x → b]
→ ( λa.λb.b ) [y → a]
→ ( F ) [D2]
We have reached a terminal, and can recurse no further. This particular program is decidable. What about others? Let us consider an undecidable example:
(λg.(λx.g (x x)) (λx.g (x x))) f
(λx.f (x x)) (λx.f (x x)) [g → f]
f (λx.f (x x))(λx.f (x x)) [f → λx.f(x x)]
f f (λx.f (x x))(λx.f (x x)) [f → λx.f(x x)]
f f f (λx.f (x x))(λx.f (x x)) [f → λx.f(x x)]
... (λx.f (x x))(λx.f (x x)) [f → λx.f(x x)]
This pattern is Curry’s (1930) famous fixed point combinator and the cornerstone of recursion, called Y. Unlike its typed cousin, the untyped λ-calculus is not strongly normalizing and thus not guaranteed to converge. Were it convergent, it would not be Turing-complete. This hard choice between decidability and universality is one which no computational language can avoid.
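As a small Kotlin sketch (my own specialization, encoding λ-terms as curried functions over Kotlin Booleans purely for readability), the encodings D1, D2 and D5 and the reduction of !T can be mirrored directly:
typealias ChurchBool = (Boolean) -> (Boolean) -> Boolean

val T: ChurchBool = { x -> { _ -> x } }                         // D1: λx.λy.x
val F: ChurchBool = { _ -> { y -> y } }                         // D2: λx.λy.y
fun not(p: ChurchBool): ChurchBool = { a -> { b -> p(b)(a) } }  // D5: λp.λa.λb.p b a

// Interpret a Church boolean by applying it to two distinguishable values
fun decode(p: ChurchBool): Boolean = p(true)(false)

fun main() {
    println(decode(not(T)))  // prints "false", matching the reduction (!T) → F above
}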
The λ-calculus can also be interpreted graphically. I refer the curious reader to some promising proposals which have attempted to formalize this perspective:
## Cellular automata
The elementary cellular automaton is another string rewrite system consisting of a one dimensional binary array, and a 3-cell grammar. Note there are $$2^{2^3} = 256$$ possible rules for rewriting the tape. It turns out even in this tiny space, there exist remarkable automata. Consider the following rewrite system:
current pattern   111  110  101  100  011  010  001  000
next pattern       0    1    1    0    1    1    1    0
We can think of this machine as sliding over the tape, and replacing the centermost cell in each matching substring with the second value. Depending on the initial state and rewrite pattern, cellular automata can produce many visually interesting patterns. Some have spent a great deal of effort cataloguing families of CA and their behavior. Following Robinson (1987), we can also define an ECA inductively, using the following recurrence relation:
$a_i^{(t)} = \sum_j s(j)a_{(i-j)}^{t-1} \mod m$
This characterization might remind us of a certain operation from digital signal processing, called a discrete convolution. We read $$f * g$$ as “$$f$$ convolved by $$g$$”:
$(f * g)[n] = \sum_m f[m]g[n-m]$
Here $$f$$ is our state and $$g$$ is called a “kernel”. Similar to the λ-calculus, this language also is known to be universal. Disregarding efficiency, we could encode any computable function as an initial state and mechanically apply Rule 110 to simulate a TM, λ-calculus, or any other TC system for that matter.
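A minimal Kotlin sketch of one update step (the function name and the convention that cells beyond the boundary read as 0 are my own assumptions), parameterized by the Wolfram rule number so that rule = 110 reproduces the table above:
fun ecaStep(state: BooleanArray, rule: Int = 110): BooleanArray =
    BooleanArray(state.size) { i ->
        val l = if (i > 0) state[i - 1] else false               // left neighbor
        val c = state[i]                                         // center cell
        val r = if (i < state.size - 1) state[i + 1] else false  // right neighbor
        // Encode the 3-cell pattern as a number 0..7 and look up its bit in the rule
        val pattern = (if (l) 4 else 0) or (if (c) 2 else 0) or (if (r) 1 else 0)
        ((rule shr pattern) and 1) == 1
    }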
Cellular automata can also be interpreted as a graph rewriting system, although the benefits of this perspective are not as clear. Unlike string rewriting, graph substitution is much more difficult to implement efficiently, as pattern matching amounts to subgraph isomorphism, which is NP-complete. While there are some optimizations to mitigate this problem, graph grammars do not appear to confer any additional computational benefits. Nevertheless, it is conceptually interesting.
# Graphs, inductively
Just like grammars, we can define graphs themselves inductively. As many graph algorithms are recursive, this choice considerably simplifies their implementation. Take one definition of an unlabeled directed graph, proposed by Erwig (2001). Here, the notation list → [item] is shorthand for list → item list, where item is some terminal, and list is just a list of items:
vertex → int
adj → [vertex]
context → (adj, vertex, adj)
graph → empty | context & graph
Erwig defines a graph in four parts. First, we have a vertex, which is simply an integer. Next we have a list of vertices, adj, called an adjacency list. The context is a 3-tuple containing a vertex and symmetric references to its inbound and outbound neighbors, respectively. Finally, we have the inductive case: a graph is either (1) empty, or (2) a context and a graph.
String:
([3], 4, [1, 3]) & ([1, 2, 4], 3, [4]) & ([1], 2, [1, 3]) & ([2, 4], 1, [2, 3])
[Figure: the corresponding graph]
Let us consider a directed graph implementation in Kotlin. We do not store inbound neighbors, and attempt to define a vertex as a closed neighborhood:
open class Graph(val vertices: Set<Vertex>) { ... }
data class Vertex(neighbors: Set<Vertex>): Graph(this + neighbors)
// ↳ Compile error!
Note the coinductive definition, which creates problems right off the bat. Since this is not accessible inside the constructor, we cannot have cycles or closed neighborhoods, unless we delay edge instantiation until after construction:
class Graph(val vertices: Set<Vertex>) { ... }
class Vertex(adjacencyMap: (Vertex) -> Set<Vertex>) {
constructor(neighbors: Set<Vertex> = setOf()) : this({ neighbors })
}
We can now call Vertex() { setOf(it) } to create loops and closed neighborhoods. This definition admits a nice k-nearest neighbors implementation, allowing us to compute the k-hop transitive closure of a vertex or set of vertices:
tailrec fun Vertex.neighbors(k: Int = 0, vertices: Set<Vertex> = neighbors + this): Set<Vertex> =
    if (k == 0 || vertices.neighbors() == vertices) vertices
    else neighbors(k - 1, vertices + vertices.neighbors() + this)
fun Set<Vertex>.neighbors() = flatMap { it.neighbors() }.toSet()
// Removes all vertices outside the set
fun Set<Vertex>.closure(): Set<Vertex> =
    map { vertex -> Vertex(vertex.neighbors.filter { it in this }.toSet()) }.toSet()
fun Vertex.neighborhood(k: Int = 0) = Graph(neighbors(k).closure())
Another useful representation for a graph, which we will describe in further detail below, is a matrix. We can define the adjacency, degree, and Laplacian matrices like so:
val Graph.adjacency = Mat(vertices.size, vertices.size).also { adj ->
vertices.forEach { v -> v.neighbors.forEach { n -> adj[v, n] = 1 } }
}
val Graph.degree = Mat(vertices.size, vertices.size).also { deg ->
vertices.forEach { v -> deg[v, v] = v.neighbors.size }
}
val Graph.laplacian = degree - adjacency
These matrices have some important applications in algebraic and spectral graph theory, which we will have more to say about later.
## Weisfeiler-Lehman
Let us consider an algorithm called the Weisfeiler-Lehman isomorphism test, on which my colleague David Bieber has written a nice piece. I’ll focus on its implementation. First, we need a pooling operator, which will aggregate all neighbors in a node’s neighborhood using some summary statistic:
fun Graph.poolBy(statistic: Set<Vertex>.() -> Int): Map<Vertex, Int> =
    vertices.map { it to statistic(it.neighbors()) }.toMap()
Next, we’ll define a histogram, which just counts each node’s neighborhood:
val Graph.histogram: Map<Vertex, Int> = poolBy { size }
Now we’re ready to define the Weisfeiler-Lehman operator, which recursively hashes the labels until fixpoint termination:
tailrec fun Graph.wl(labels: Map<Vertex, Int>): Map<Vertex, Int> {
val next = poolBy { map { labels[it]!! }.sorted().hash() }
return if (labels == next) labels else wl(next)
}
With one round, we’re just comparing the degree histogram. We compute the hash of the entire graph by hashing the multiset of WL labels:
fun Graph.hash() = wl(histogram).values.sorted().hash()
Finally, we can define a test to detect if one graph is isomorphic to another:
fun Graph.isIsomorphicTo(that: Graph) =
this.vertices.size == that.vertices.size &&
this.numOfEdges == that.numOfEdges &&
this.hash() == that.hash()
This algorithm works on many graphs encountered in the wild; however, it cannot distinguish two regular graphs with an identical number of vertices and edges. Nevertheless, it is appealing for its simplicity and exemplifies a simple “message passing” algorithm, which we will revisit later. For a complete implementation and other inductive graph algorithms, such as Barabási’s preferential attachment algorithm, check out Kaliningraph.
## Graph Diameter
A graph’s diameter is the length of the longest shortest path between any two of its vertices. Let us define the augmented adjacency matrix as $$A + A^\intercal + \mathbb{1}$$, or:
val Graph.A_AUG = adjacency.let { it + it.transpose() } + ONES
To compute the diameter of a connected graph $$G$$, we can simply power the augmented adjacency matrix until it contains no zeros:
/* (A')ⁿ[a, b] counts the number of walks between vertices a, b of
* length n. Let i be the smallest natural number such that (A')ⁱ
* has no zeros. i is the length of the longest shortest path in G.
*/
tailrec fun Graph.slowDiameter(i: Int = 1, walks: Mat = A_AUG): Int =
    if (walks.isFull) i else slowDiameter(i = i + 1, walks = walks * A_AUG)
If we consider the complexity of this procedure, we note it takes $$\mathcal O(M \mid G\mid)$$ time, where $$M$$ is the complexity of matrix multiplication, and $$\mathcal O(Q\mid G \mid^2)$$ space, where $$Q$$ is the number of bits required for a single entry in A_AUG. Since we only care whether or not the entries are zero, we can instead cast A_AUG to $$\mathbb B^{n\times n}$$ and run binary search for the smallest i yielding a matrix with no zeros:
tailrec fun Graph.fastDiameter(i: Int = 1, prev: BMat, next: BMat = prev): Int =
    if (next.isFull || i >= size) slowDiameter(i / 2, prev)
    else fastDiameter(i = 2 * i, prev = next, next = next * next)
Our improved procedure fastDiameter runs in $$\mathcal O(M\log_2\mid G\mid)$$ time. An iterative version of this procedure may be found in Booth and Lipton (1981).
## Graph Neural Networks
A graph neural network is like a graph, but one whose edges are neural networks. In its simplest form, the inference step can be defined as a matrix recurrence relation $$\mathbf H^t := σ(\mathbf A \mathbf H^{t-1} \mathbf W^t + \mathbf H^{t-1} \mathbf W^t)$$ following Hamilton (2020):
tailrec fun gnn(
// Number of message passing rounds
t: Int = fastDiameter(),
// Matrix of node representations ℝ^{|V|xd}
H: Mat,
// (Trainable) weight matrix ℝ^{dxd}
W: Mat = randomMatrix(H.numCols),
// Bias term ℝ^{dxd}
b: Mat = randomMatrix(size, H.numCols),
// Nonlinearity ℝ^{*} -> ℝ^{*}
σ: (Mat) -> Mat = { it.elwise { tanh(it) } },
// Layer normalization ℝ^{*} -> ℝ^{*}
z: (Mat) -> Mat = { it.meanNorm() },
// Message ℝ^{*} -> ℝ^{*}
m: Graph<G, E, V>.(Mat) -> Mat = { σ(z(A * it * W + it * W + b)) }
): Mat = if(t == 0) H else gnn(t = t - 1, H = m(H), W = W, b = b)
The important thing to note here is that this message passing procedure is a recurrence relation, which like the graph grammar and WL algorithm seen earlier, can be defined inductively. Hamilton et al. (2017) also consider induction in the context of representation learning, although their definition is more closely related to the concept of generalization. It would be interesting to explore the connection between induction in these two settings, and we will have more to say about matrix recurrence relations in a bit.
# Graph languages
Approximately 20% of the human cerebral cortex is devoted to visual processing. By using visual representations, language designers can tap into powerful pattern matching abilities which are often underutilized by linear symbolic writing systems. Graphs are one such example which have found many applications as reasoning and communication devices in various domain-specific languages:
Languages (each illustrated with an example figure):
Finite automata
Tensor networks
Causal graphs
Category theory
Penrose notation
Feynman diagrams
As Bradley (2019) vividly portrays in her writing, we can think of a matrix as not just a two-dimensional array, but a function on a vector space. This perspective can be depicted using a bipartite graph:
Not only do matrices correspond to graphs, graphs also correspond to matrices. One way to think of a graph is just a boolean matrix, or real matrix for weighted graphs. Consider an adjacency matrix containing nodes V, and edges E, where:
\begin{align*} \mathbf A \in \mathbb B^{|V|\times|V|} \text{ where } \mathbf A[u, v] = \begin{cases} 1,& \text{if } u, v \in E \\ 0,& \text{otherwise} \end{cases} \end{align*}
[Figure: the same graph drawn geometrically, alongside its adjacency matrix]
Note the lower triangular structure of the adjacency matrix, indicating it contains no cycles, a property that is not immediately obvious from the naïve geometric layout. Any graph whose adjacency matrix can be reordered into triangular form is a directed acyclic graph. Called a topological ordering, this algorithm can be implemented by repeatedly squaring the adjacency matrix.
Both the geometric and matrix representations impose an extrinsic perspective on graphs, each with their own advantages and drawbacks. 2D renderings can be visually compelling, but require solving a minimal crossing number or similar optimization to make connectivity plain to the naked eye. While graph drawing is an active field of research, matrices can often reveal symmetries that are not obvious from a naïve graph layout (and vice versa).
Matrices are problematic in some respects. Primarily, by treating a graph as a matrix, we impose an ordering over all vertices, which is often arbitrary. Note also their sparsity, and consider the size of the matrix required to store even a small graph. While problematic, this can be overcome with certain optimizations. Despite these issues, matrices are a natural representation choice for many graph algorithms, particularly on modern parallel processing hardware.
Just like matrices, we can also think of a graph as a function, or transition system, which carries information from one state to the next - given a state or set of states, the graph tells us which other states are reachable. Recent work in graph theory has revealed a fascinating duality between graphs and linear algebra, holding many important insights for dynamical processes on graphs.
# Graphs, computationally
What happens when we take a square matrix $$\mathbb{R}^{n\times n}$$ and raise it to a power? Which kinds of matrices converge and what are their asymptotics? This is a very fertile line of inquiry which has occupied engineers for the better part of the last century, with important applications in statistical physics, control theory, and deep learning. Linear algebra gives us many tricks for designing the matrix and normalizing the product to promote convergence.
One way to interpret this is as follows: each time we multiply a matrix by a vector $$\mathbb{R}^{n}$$, we are effectively simulating a dynamical system at discrete time steps. This method is known as power iteration or the Krylov method in linear algebra. In the limit, we are seeking fixpoints, or eigenvectors, which are these islands of stability in our dynamical system. If we initialize our state at such a point, the transition matrix will send us straight back to where we started.
$f(x, y) = \begin{bmatrix} \frac{cos(x+2y)}{x} & 0 \\ 0 & \frac{sin(x-2y)}{y} \end{bmatrix} * \begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}cos(x+2y)\\sin(x-2y)\end{bmatrix}$
Locating a fixed point where $$f(x, y) = f\circ f(x, y)$$, indicates the trajectory has terminated. Such points describe the asymptotic behavior of our function.
First, let’s get some definitions out of the way.
𝔹 → True | False
ℕ → 0 | ... | 9
ℤ → ℕ | ℕℤ | -ℤ
ℝ → ℤ.ℤ
T → 𝔹 | ℕ | ℤ | ℝ
vec → [Tⁿ]
mat → [[Tⁿ]ⁿ]
We can think of the Krylov method as either a matrix-matrix or matrix-vector product, or a recurrence relation with some normalization:
Krylov Method
There exists in St. Petersburg a naval research facility, known as the Krylov Shipbuilding Research Institute, which houses the world's largest full ocean depth hydraulic pressure tank. Capable of simulating in excess of 20,000 PSI, the DK-1000 is used to test deepwater submersible vessels. At such pressure, even water itself undergoes ~5% compression. Before inserting your personal submarine, you may wish to perform a finite element analysis to check hull integrity. Instabilities in the stiffness matrix may produce disappointing results.
Grammar                                   Example
mmp → mat | mat * mmp
mvp → (mmp) * vec                         $$(\mathbf{M}\mathbf{M})\mathbf{v}, (\mathbf{M}\mathbf{M}\mathbf{M})\mathbf{v}, \ldots$$
mvp → mat * vec | mat * (mvp)             $$\mathbf{M}(\mathbf{M}\mathbf{v}), \mathbf{M}(\mathbf{M}(\mathbf{M}\mathbf{v})), \ldots$$
fun → mat * vec / ‖ mat * vec ‖
rec → fun | mat * rec / ‖ mat * rec ‖     $$\frac{\mathbf{M}\mathbf{v}}{\|\mathbf{M}\mathbf{v}\|}, \frac{\mathbf{M}\frac{\mathbf{M}\mathbf{v}}{\|\mathbf{M}\mathbf{v}\|}}{\|\mathbf{M}\frac{\mathbf{M}\mathbf{v}}{\|\mathbf{M}\mathbf{v}\|}\|}, \ldots$$
Regrouping the order of matrix multiplication offers various computational benefits, and adding normalization prevents singularities from emerging. Alternate normalization schemes have been developed for various applications in graphs. This sequence forms the so-called Krylov matrix (Krylov, 1931):
$\mathbf{K}_{i} = \begin{bmatrix}\mathbf{v} & \mathbf{M}\mathbf{v} & \mathbf{M}^{2}\mathbf{v} & \cdots & \mathbf{M}^{i-1}\mathbf{v} \end{bmatrix}$
There exists a famous theorem known as the Perron-Frobenius theorem, which states that if $$\mathbf M \in \mathcal T^{n \times n}$$ is non-negative and irreducible, then $$\mathbf M$$ has a unique largest eigenvalue $$\lambda \in \mathcal T$$ and dominant eigenvector $$\mathbf{q} \in \mathcal T^{n}$$. It has long been known that under some weak assumptions, $$\lim_{i\rightarrow \infty} \mathbf{M}^i \mathbf{v} = c\mathbf{q}$$ where $$c$$ is some constant. We are primarily interested in deterministic transition systems, where $$\mathcal T \in \{\mathbb B, \mathbb N\}$$.
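A minimal Kotlin sketch of power iteration (my own rendering, using plain arrays rather than the Mat type from earlier): repeatedly apply M to v and renormalize, so that the direction converges toward the dominant eigenvector q:
fun matVec(M: Array<DoubleArray>, v: DoubleArray): DoubleArray =
    DoubleArray(M.size) { i -> v.indices.fold(0.0) { acc, j -> acc + M[i][j] * v[j] } }

fun norm(v: DoubleArray): Double = kotlin.math.sqrt(v.fold(0.0) { acc, x -> acc + x * x })

tailrec fun powerIterate(M: Array<DoubleArray>, v: DoubleArray, steps: Int = 100): DoubleArray {
    if (steps == 0) return v
    val w = matVec(M, v)                                  // advance the dynamical system one step
    val n = norm(w)
    return powerIterate(M, DoubleArray(w.size) { w[it] / n }, steps - 1)  // normalize to avoid blow-up
}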
The Krylov methods have important applications for studying dynamical systems and graph signal processing. Researchers are just beginning to understand how eigenvalues of the graph Laplacian affect the asymptotics of dynamical processes on graphs. We have already seen one example of these in the WL algorithm. Another example of graph computation can be found in Valiant (1975), who shows a CFL parsing algorithm which is equivalent to matrix multiplication.
Yet another example of graph computation can be found in Reps et al. (2016), who show that boolean matrix algebra can be used for abstract interpretation. By representing control flow graphs as boolean matrix expressions, they show how to apply root-finding techniques like Newton’s method (first observed by Esparza et al. (2010)) to dataflow analysis, e.g. for determining which states are reachable from some starting configuration by computing their transitive closure:
We could spend all day listing various matrix algorithms for graph computation. Certainly, far better writers have done them justice. Instead, let’s just give some simple examples of dynamical processes on graphs.
# Examples
What happens if we define arithmetic operations on graphs? How could we define these operations in a way that allows us to perform computation? As we already saw, one way to represent a directed graph is just a square matrix whose non-zero entries indicate edges between nodes. Just like real matrices in linear algebra, we can add, subtract, multiply and exponentiate them. Other composable operations are also possible.
We will now show a few examples simulating a state machine using the Krylov method. For illustrative purposes, the state simply holds a vector of binary or integer values, although we can imagine it carrying other “messages” around the graph in a similar manner, using another algebra. Here, we will use the boolean algebra for matrix multiplication, where + corresponds to logical disjunction (∨), and * corresponds to logical conjunction (∧):
┌───┬───┬─────┬─────┐
│ x │ y │ x*y │ x+y │ Boolean Matrix Multiplication
├───┼───┼─────┼─────┤ ┌─ ─┐ ┌─ ─┐ ┌─ ─┐
│ 0 │ 0 │ 0 │ 0 │ │ a b c │ │ j │ │ a * j + b * k + c * l │
│ 0 │ 1 │ 0 │ 1 │ │ d e f │*│ k │=│ d * j + e * k + f * l │
│ 1 │ 0 │ 0 │ 1 │ │ g h i │ │ l │ │ g * j + h * k + i * l │
│ 1 │ 1 │ 1 │ 1 │ └─ ─┘ └─ ─┘ └─ ─┘
└───┴───┴─────┴─────┘
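As a minimal sketch (the function name is mine), the boolean matrix-vector product in the figure above can be written directly from this truth table, with ∨ playing the role of + and ∧ playing the role of *:

```python
def bool_matvec(M, v):
    """Entry i of the result is the OR over j of (M[i][j] AND v[j])."""
    return [int(any(m and x for m, x in zip(row, v))) for row in M]

# A small example: with v = [j, k, l] = [1, 0, 0], only the first column of M matters
M = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]
print(bool_matvec(M, [1, 0, 0]))  # [0, 1, 0]
```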
## Linear chains
Let’s iterate through a linked list. To do so, we will initialize the pointer to the head of the list, and use multiplication to advance the pointer by a single element. We add an implicit self-loop to the final node, and halt whenever a fixpoint is detected. This structure is known as an absorbing Markov chain.
Graph a → b → c (with a self-loop on c), its adjacency matrix, and the state updates:

Matrix                 Step    S           S′
    a b c              1       [1 0 0]     [0 1 0]
a │ 0 0 0              2       [0 1 0]     [0 0 1]
b │ 1 0 0              3       [0 0 1]     [0 0 1]   (fixpoint)
c │ 0 1 1
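A minimal sketch of this run; the matrix and start state are taken from the figure above, and the boolean product helper is repeated here so the snippet runs on its own:

```python
def bool_matvec(M, v):
    return [int(any(m and x for m, x in zip(row, v))) for row in M]

def run_to_fixpoint(M, S, max_steps=100):
    """Advance S ← M·S (boolean) until the state stops changing."""
    for step in range(max_steps):
        S_next = bool_matvec(M, S)
        if S_next == S:
            return S, step
        S = S_next
    return S, max_steps

# Chain a → b → c with a self-loop on the final node c
M = [[0, 0, 0],
     [1, 0, 0],
     [0, 1, 1]]
print(run_to_fixpoint(M, [1, 0, 0]))  # ([0, 0, 1], 2): the pointer comes to rest on c
```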
## Nondeterministic finite automata
Simulating a DFA using a matrix can be inefficient since we only ever inhabit one state at a time. The real benefit of using matrices comes when simulating nondeterministic finite automata, seen earlier.
Formally, an NFA is a 5-tuple $$\langle Q, \Sigma, \Delta, q_0, F \rangle$$, where $$Q$$ is a finite set of states, $$\Sigma$$ is the alphabet, $$\Delta :Q\times (\Sigma \cup \{\epsilon \})\rightarrow P(Q)$$ is the transition function, $$q_0 \in Q$$ is the initial state and $$F \subseteq Q$$ are the terminal states. An NFA can be represented as a labeled transition system, or directed graph whose adjacency matrix is defined by the transition function, with edge labels representing symbols from the alphabet and self-loops for each terminal state, both omitted for brevity.
Typical implementations often require cloning the NFA when multiple transitions are valid, which can be inefficient. Instead of cloning the machine, we can simulate the superposition of all states using a single data structure:
NFA with transitions a → b, a → c, and b, c, d → d (self-loop on the accepting state d), its adjacency matrix, and the state updates:

Matrix                    Step    S             S′
    a b c d               1       [1 0 0 0]     [0 1 1 0]
a │ 0 0 0 0               2       [0 1 1 0]     [0 0 0 1]
b │ 1 0 0 0               3       [0 0 0 1]     [0 0 0 1]   (fixpoint)
c │ 1 0 0 0
d │ 0 1 1 1
We encode the accept state as a self-loop in order to detect the fixpoint criterion $$S_{t+1} = S_{t}$$, after which we halt execution.
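A short sketch of this run, with the matrix taken from the figure above and the boolean product helper repeated so the snippet is self-contained; note that after the first step the state vector holds two 1s at once, which is the superposition mentioned above:

```python
def bool_matvec(M, v):
    return [int(any(m and x for m, x in zip(row, v))) for row in M]

# NFA from the figure above: a → b, a → c, and b, c, d → d (accepting self-loop on d)
N = [[0, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 1, 1, 1]]

S = [1, 0, 0, 0]          # start in state a
S = bool_matvec(N, S)     # [0, 1, 1, 0]: b and c are active simultaneously
S = bool_matvec(N, S)     # [0, 0, 0, 1]: both branches collapse into the accept state d
S = bool_matvec(N, S)     # [0, 0, 0, 1]: fixpoint reached, so we halt
print(S)
```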
## Dataflow graphs
Suppose we have the function f(a, b) = (a + b) * b and want to evaluate f(2, 3). For operators, we will need two tricks. First, all operators will retain their state, i.e. 1s along all operator diagonals. Second, when applying the operator, we will combine values using the operator instead of performing a sum.
Dataflow graph for f(a, b) = (a + b) * b with a = 2 and b = 3, its adjacency matrix, and the state updates:

Matrix                      Step    S              S′
    a b + *                 1       [2 3 0  0]     [0 0 5  3]
a │ 0 0 0 0                 2       [0 0 5  3]     [0 0 0 15]
b │ 0 0 0 0                 3       [0 0 0 15]     [0 0 0 15]   (fixpoint)
+ │ 1 1 1 0
* │ 0 1 1 1
The author was very excited to discover this technique while playing with matrices one day, only later to discover it was described 33 years earlier by Miller et al. (1987). Miller was inspired by Valiant et al.’s (1983) work on arithmetic circuit complexity, which was in turn inspired by Borodin et al.’s (1982) work on matrix computation. This line of research was later revisited by Nisan and Wigderson (1997) and by Klivans and Shpilka (2003), who seek to understand how circuit size and depth affect learning complexity.
# Graphs, efficiently
Due to their well-studied algebraic properties, graphs are suitable data structures for a wide variety of applications. Finding a reduction to a known graph problem can save years of effort, but graph algorithms can be challenging to implement efficiently, as dozens of libraries and compiler frameworks have found. Why have efficient implementations proven so difficult, and what has changed?
One issue hindering efficient graph representations is their space complexity. Suppose we have a graph with $$10^5=100,000$$ nodes, but only a single edge. We will need $$10^{5\times 2}$$ bits, or about 1 GB, to store its adjacency matrix, whereas an equivalent adjacency list would only consume $$\lceil 2\log_2 10^5 \rceil = 34$$ bits. Most graphs are similarly sparse. But how do you multiply adjacency lists? One solution is to use a sparse matrix, whose storage shrinks in proportion to the graph's sparsity and whose operations can be correspondingly faster on parallel computing architectures.
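A minimal sketch of the space savings using SciPy's sparse matrices; the node count matches the example above, but the particular edge and node ids are invented for illustration:

```python
import numpy as np
from scipy.sparse import coo_matrix

n = 100_000
row = np.array([123])              # a single edge 123 → 456 (arbitrary node ids)
col = np.array([456])
val = np.array([1], dtype=np.int8)

A = coo_matrix((val, (row, col)), shape=(n, n))

dense_bytes = n * n                # one byte per entry if stored densely as int8
sparse_bytes = A.data.nbytes + A.row.nbytes + A.col.nbytes
print(dense_bytes, sparse_bytes)   # ~10 GB versus a handful of bytes

# Multiplication works the same way as with a dense matrix
v = np.zeros(n, dtype=np.int8)
v[123] = 1
print((A.T.tocsr() @ v).nonzero())  # the walk lands on node 456
```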
Perhaps the more significant barrier to widespread adoption of graph algorithms is their time complexity. Many interesting problems on graphs are NP-complete, including Hamiltonian path detection, TSP and subgraph isomorphism. Many of those problems have approximations which are often tolerable, but even if exact solutions are needed, CS theory is primarily concerned with worst-case complexity, which seldom occurs in practice. Natural instances can often be solved quickly using heuristic-guided search, such as SAT or SMT solvers.
Most graph algorithms are currently implemented using object oriented or algebraic data types as we saw previously. While conceptually simple to grasp, this approach is computationally inefficient. We would instead prefer a high level API backed by a pure BLAS implementation. As numerous papers have shown, finding an efficient matrix representation opens the path to optimized execution on GPUs or SIMD-capable hardware. For example, automata like the ones simulated above can be greatly accelerated using sparse matrix arithmetic on modern hardware.
Suppose we want to access the source code of a program from within the program itself. How could we accomplish that? There is a famous theorem by Kleene which gives us a clue how to construct a self-replicating program. More specifically, we need an intermediate representation, or reified computation graph (i.e. runtime-accessible IR). Given any variable y, we need some method y.graph() which programmatically returns its transitive closure, including upstream dependencies and downstream dependents. Depending on scope and granularity, this graph can expand very quickly, so efficiency is key.
With the advent of staged metaprogramming in domain-specific languages like TensorFlow and MetaOCaml, such graphs are available to introspect at runtime. By tracing all operations (e.g. using operator overloading) on an intermediate data structure (e.g. stack, AST, or DAG), these DSLs are able to embed a programming language in another language. At periodic intervals, they may perform certain optimizations (e.g. constant propagation, common subexpression elimination) and emit an intermediate language (e.g. CUDA, webasm) for optimized execution on special hardware, such as a GPU or TPU.
Recent work in linear algebra and sparse matrix representations for graphs shows us how to treat many recursive graph algorithms as pure matrix arithmetic, thus benefiting from SIMD acceleration. Researchers are just beginning to explore how these techniques can be used to transform general-purpose programs into graphs. We anticipate this effort will require further engineering to develop an efficient encoder, but see no fundamental obstacle for a common analysis framework or graph-based execution scheme.
# Programs as graphs
Graphs are not only useful as data structures for representing programs, but we can think of the act of computation itself as traversing a graph on a binary configuration space. Each tick of the clock corresponds to one matrix multiplication on a boolean tape.
Futamura (1983) shows us that programs can be decomposed into two inputs: static and dynamic. While long considered a theoretical distinction, partial evaluation has been successfully operationalized in several general purpose and domain-specific languages using this observation.
$\mathbf P: I_{\text{static}} \times I_{\text{dynamic}} \rightarrow O$
Programs can be viewed as simply functions mapping inputs to outputs, and executing the program amounts to running a matrix dynamical system to completion. Consider the static case, in which we have all information available at compile-time. In order to evaluate the program, we can just multiply the program $$\mathbf P: \mathbb B^{\lvert S\rvert \times \lvert S\rvert}$$ by the state $$S$$ until termination:
[P]──────────────────────────────── } Program
╲ ╲ ╲ ╲
[S₀]───*───[S₁]───*───[S₂]───*───[..]───*───[Sₜ] } TM tape
Now consider the dynamic case, where the matrix $$\mathbf P$$ at each time step might be governed by another program:
[Q]───────────────────── } Dynamics
╲ ╲ ╲
[P₀]───*───[P₁]───*───[..]───*───[Pₜ₋₁] } Program
╲ ╲ ╲ ╲
[S₀]───*───[S₁]───*───[S₂]───*───[..]───*───[Sₜ] } TM tape
We might also imagine the dynamic inputs as being generated by successively higher order programs. Parts of these may be stored elsewhere in memory.
⋮
[R₀]───────── } World model
╲ ╲
[Q₀]───*───[..]───*───[Pₜ₋₂] } Dynamics
╲ ╲ ╲
[P₀]───*───[P₁]───*───[..]───*───[Pₜ₋₁] } Program
╲ ╲ ╲ ╲
[S₀]───*───[S₁]───*───[S₂]───*───[..]───*───[Sₜ] } TM tape
What about programs of varying length? It may be the case we want to learn programs where $$t$$ varies. The key is, we can choose an upper bound on $$t$$, and search for fixed points, i.e. halt whenever $$S_t = S_{t+1}$$.
There will always be some program, at the interface of the machine and the real world, which must be approximated. One question worth asking is how large does $$\lvert S\rvert$$ need to be in order to do so? If it is very large, this procedure might well be intractable. Time complexity appears to be at worst $$\mathcal{O}(tn^{2.37})$$ using fast Coppersmith-Winograd-style matrix multiplication, although it is considerably better if $$\mathbf P$$ is sparse.
# Program synthesis
Many people have asked me, “Why should developers care about automatic differentiation?” Yes, we can use it to build machine learning systems. Yes, it has specialized applications in robotics, space travel, and physical simulation. But does it really matter for software engineers?
I have been thinking carefully about this question, and although it is not yet fully clear to me, I am starting to see how some pieces fit together. A more complete picture will require more research, engineering and rethinking the role of software, compilers and machine learning.
[P]──────────────────────────────── } Program
╲ ╲ ╲ ╲
[S₀]───*───[S₁]───*───[S₂]───*───[..]───*───[Sₜ] } TM tape
Consider the static case seen above. Since the matrix $$\mathbf P$$ is fixed throughout execution, to learn $$\mathbf P$$, we need to solve the following minimization problem:
$\underset{P}{\text{argmin}}\sum_{i \sim I_{\text{static}}}\mathcal L(P^t S^i_0, S_t)$
One issue with this formulation is we must rely on a loss over $$S_t$$, which is often too sparse and generalizes poorly. It may be the case that many interesting program synthesis problems have optimal substructure, so we should be making “progress” towards a goal state, and might be able to define a cross-entropy loss over intermediate states to guide the search process. This intuition stems from RL and needs to be explored in further depth.
Some, including Gaunt et al. (2016), have shown that gradient descent is not very effective, as the space of boolean circuits is littered with islands which have zero gradient (some results have suggested the TerpreT problem is surmountable by applying various smoothing tricks). However, Gaunt’s representation is also relatively complex – effectively, they are trying to learn a recursively enumerable language using something like a Neural Turing Machine (Graves et al., 2014).
More recent work, including that of Lample et al. (2019), demonstrated that gradient descent is effective for learning programs belonging to the class of context-free languages. This space is often much more tractable to search, and synthetic training data is easier to generate. Furthermore, this appears to be well within the reach of modern language models, e.g. pointer networks and transformers.
In the last year, a number of interesting results in differentiable architecture search have started to emerge. DARTS (Liu et al., 2019) proposes to use gradient descent to search through the space of directed graphs. The authors first perform a continuous relaxation of the discrete graph by reweighting the output of each candidate edge with a learnable mixing parameter, optimize those parameters jointly with gradient descent, and finally discretize the result by keeping only the strongest candidate edges.
Solar-Lezama (2020) calls this latter approach “program extraction”, where a network implicitly or explicitly parameterizes a function which, after training, can be decoded into a symbolic expression. This perspective also aligns with Ian Goodfellow’s notion of deep networks as performing computation, where each layer represents a residual step in a parallel program.
A less charitable interpretation is that Goodfellow is simply using a metaphor to explain deep learning to a lay audience, but I prefer to think he is communicating something deeper about the role of recurrent nonlinear function approximators as computational primitives, where adding depth effectively increases serial processing capacity and increasing layer width increases bandwidth. There may be an interesting connection between this idea and arithmetic circuit complexity (Klivans and Shpilka, 2003).
While it would be sufficient to prove boolean matrix multiplication corresponds to Peano Arithmetic, a constructive proof taking physics into consideration is needed. Given some universal language $$\mathcal L$$, and a program implementing a boolean vector function $$\mathcal V: \mathbb B^i \rightarrow \mathbb B^o \in \mathcal L$$, we must derive a transformation $$\mathcal T_\mathcal L: \mathcal V \rightarrow \mathcal M$$, which maps $$\mathcal V$$ to a boolean matrix function $$\mathcal M: \mathbb B^{j \times k} \times \mathbb B^{l\times m}$$, while preserving asymptotic complexity $$\mathcal O(\mathcal M) \leq \mathcal O(\mathcal V)$$, i.e. no worse than a constant factor in space or time. Clearly, the identity function $$\mathcal I(\mathcal V)$$ is a valid candidate for $$\mathcal T_{\mathcal L}$$. But as recent GPGPU research has shown, we can do much better.
The second major hurdle to graph computation is developing a binary recompiler which translates programs into optimized BLAS instructions. The resulting program will eventually need to demonstrate performant execution across a variety of heterogeneously typed programs, e.g. Int, Float16, Float32, and physical SIMD devices. Developing the infrastructure for such a recompiler will be a major engineering undertaking in the next two decades as the world transitions to graph computing. Program induction will likely be a key step to accelerating these graphs on physical hardware.
|
# zbMATH — the first resource for mathematics
Analogues de la forme de Killing et du théorème d’Harish-Chandra pour les groupes quantiques. (Analogues of the Killing form and of the Harish-Chandra theorem for quantum groups). (French) Zbl 0721.17012
Quantum groups are understood in the sense of Drinfel’d and Jimbo, i.e., as deformations $$U_ h{\mathfrak g}$$ of Lie algebras $${\mathfrak g}$$ over the field $${\mathbb{C}}[[h]]$$ of formal power series. The purpose of the paper is to develop suitable analogs of the concept of the Killing form and the Harish-Chandra theorem in the case where $${\mathfrak g}$$ is semisimple. With that purpose, the author concentrates on a certain $${\mathbb{C}}$$-subalgebra of $$U_ h{\mathfrak g}$$, named $$U_ t{\mathfrak g}$$, where t is a complex parameter. An analog of the Killing form is introduced and carefully studied throughout most of the paper, as a bilinear form on $$U_ t{\mathfrak g}$$ invariant under the adjoint representation. Its nondegeneracy and links with representations are treated. In the rest of the paper, the author states a q-analog of the Harish-Chandra theorem; the complete reducibility of finite-dimensional representations of $$U_ t{\mathfrak g}$$; and some properties of characters of finite-dimensional irreducible $$U_ t{\mathfrak g}$$-modules.
##### MSC:
17B37 Quantum groups (quantized enveloping algebras) and related deformations
##### References:
[1] A. Borel, Communication privée.
[2] V. G. Drinfel’d, Quantum Groups (Proc. of I.C.M., Berkeley, 1986).
[3] J. E. Humphreys, Finite and Infinite Dimensional Modules for Semisimple Lie Algebras, in Lie Theories and their Applications, Queen’s Papers in Pure Appl. Math., No. 48. MR 80i:17009 | Zbl 0392.17006
[4] M. Jimbo, A q-Difference Analog of U(g) and the Yang-Baxter Equation (Lett. Math. Phys., Vol. 10, 1985, pp. 63-69). MR 86k:17008 | Zbl 0587.17004 | doi:10.1007/BF00704588
[5] M. Jimbo, A q-Analog of U(gl(N + 1)), Hecke Algebras and the Yang-Baxter Equation (Lett. Math. Phys., Vol. 11, 1986, pp. 247-252). MR 87k:17011 | Zbl 0602.17005 | doi:10.1007/BF00400222
[6] G. Lusztig, Quantum Deformations of Certain Simple Modules over Enveloping Algebras (Adv. Math., Vol. 70, 1988, pp. 237-249). MR 89k:17029 | Zbl 0651.17007 | doi:10.1016/0001-8708(88)90056-4
[7] M. Rosso, Finite Dimensional Representations of the Quantum Analog of the Enveloping Algebra of a Complex Simple Lie Algebra (Comm. Math. Phys., Vol. 117, 1988, pp. 581-593). MR 90c:17019 | Zbl 0651.17008 | doi:10.1007/BF01218386
[8] M. Rosso, An Analogue of P.B.W. Theorem and the Universal R-matrix for U_h sl(N + 1) (Comm. Math. Phys., Vol. 124, 1989, pp. 307-318). MR 90h:17019 | Zbl 0694.17006 | doi:10.1007/BF01219200
|
Table of powers of complex numbers
Table of powers of complex numbers by National Bureau of Standards.
Written in English
Book details:
Edition Notes
Statement: by Herbert E. Salzer [for the National Bureau of Standards]
Series: Applied mathematics series -- 8
Contributions: Salzer, Herbert E.
ID Numbers: Open Library OL20215081M
To plot a complex number, we use two number lines, crossed to form the complex plane. The horizontal axis is the real axis, and the vertical axis is the imaginary axis. Complex numbers can be added and subtracted by combining the real parts and combining the imaginary parts; they can also be multiplied and divided.

In the powers of 4 table, the ones digits alternate: 4, 6, 4, 6. In fact, the powers of 4 are the same as the even powers of 2: $4^1 = 2^2$, $4^2 = 2^4$, $4^3 = 2^6$, and so on. Thinking of numbers in this light, we can see that the real numbers are simply a subset of the complex numbers.

The conjugate of the complex number $$a + bi$$ is the complex number $$a - bi$$. In other words, it is the original complex number with the sign of its imaginary part reversed.

When you write your complex number as an e-power, the problem boils down to taking the Log of $(1+i)$. Now that is $\ln\sqrt{2}+ \frac{i\pi}{4}$, plus all multiples of $2i\pi$. So in your e-power you get $(3+4i) \times (\ln\sqrt{2} + \frac{i\pi}{4} + k \cdot i \cdot 2\pi)$. I would keep the answer in e-power form.
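As a short worked step continuing the example above, keeping only the principal branch ($k = 0$):

$$(3+4i)\left(\ln\sqrt{2} + \tfrac{i\pi}{4}\right) = \left(3\ln\sqrt{2} - \pi\right) + i\left(4\ln\sqrt{2} + \tfrac{3\pi}{4}\right),$$

so $(1+i)^{3+4i} = e^{3\ln\sqrt{2} - \pi}\, e^{i\left(4\ln\sqrt{2} + 3\pi/4\right)}$, with the other branches obtained by adding $(3+4i) \cdot 2k i\pi$ to the exponent.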
|
# Manhattan Metric Software and Science in Equal Measure
## What is a Manhattan Metric?
At this point, I think it is safe to say that computers are not just some passing fad.
Computers have become an indispensable tool for many people and an integral part of a number of different industries. If we expand our definition of computers to include all electronics that contain an integrated logic circuit, then something like 70% of all humans on the earth interact with computers on a daily basis (based on the number of mobile phones in use alone). Indeed, it is almost assured that the amount of computer technology in use on the planet Earth will only increase.
As revolutionary as this rapid spread of technology might seem, I do not think the number of devices in use is the real “computer revolution”. Computers and computation represent more than just a revolution in the way we accomplish everyday tasks. They represent a fundamental revolution in thought; a revolution as significant as those brought on by Newton’s Laws of Motion, Darwin’s Theory of Evolution, and Einstein’s Theories of Special and General Relativity.
That might seem like a fairly lofty claim, but I’m sticking to it. To understand why, I’d like to talk a bit about metrics…
## What is a metric?
A metric is, essentially, a fancy way for mathematicians to say “how to get from here to there”. More formally, a metric represents a way to measure distances. You probably learned your first metric in grade school without even knowing it. The Pythagorean theorem that we’re all familiar with gives us a convenient way to measure distances in a plane: $$\text{distance} = \sqrt{dx^2 + dy^2}$$. This “square-root of the sum of squares” method, known as the Euclidean metric, can also be used to calculate distances in 3D space.
To be perfectly honest, the Euclidean metric is pretty boring stuff. Sure, there are plenty of practical applications for it, but unless you’re an ancient Greek this isn’t exactly going to cause a revolution in thought. For that, you have to fast forward all the way from Pythagoras and Euclid to the year 1905. This was the annus mirabilis of Albert Einstein, during which he published his paper on the Special Theory of Relativity. If you’ve studied Special Relativity at all, you’ll know that one of its consequences is that as you go faster, distances get shorter.
No, distances don’t appear shorter as you go faster. They are shorter, any way you measure it. Quite literally, your speed through the universe warps space. Since Special Relativity deals with the way we measure distances, this phenomenon is quite nicely described by a metric: the Lorentz metric. A number of years later, Einstein would take advantage of yet another metric, the Riemann metric, to formulate his General Theory of Relativity. With it, he related the way we measure distances not only to the speed we are traveling, but also to the mass and energy of objects around us. Gravity is just an alteration to the distances in space and time.
## Are we there yet?
So what does any of this have to do with Manhattan? Well, as much as we can do with all these other metrics, the one thing we can’t do is answer the question: “How far is it from Columbus Circle to the Javits Center?” That’s because midtown Manhattan’s streets are laid out in a grid fashion, and the “shortest route” according to Euclid would require the ability to walk through several city blocks worth of buildings.
Instead, we have to consider how many streets and how many avenues we would need to traverse. Mathematically speaking, we can consider Manhattan a graph, with each intersection representing a “node” and every street representing an “edge”. In order to calculate the distance between any two “nodes”, we have to walk the “edges”. This idea, this way of calculating distances, is sometimes called the Manhattan metric.
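As a small sketch of what “walking the edges” means in code (the coordinates and grid size are rough and purely illustrative: each intersection is treated as an (avenue, street) pair, with Columbus Circle near (8, 59) and the Javits Center near (11, 34)), a breadth-first search over an idealized, unobstructed grid agrees with the closed-form $$|\Delta x| + |\Delta y|$$:

```python
from collections import deque

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def bfs_distance(p, q, width, height):
    """Walk the grid graph one block at a time, level by level."""
    frontier, seen, steps = deque([p]), {p}, 0
    while frontier:
        for _ in range(len(frontier)):
            x, y = frontier.popleft()
            if (x, y) == q:
                return steps
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in seen:
                    seen.add((nx, ny))
                    frontier.append((nx, ny))
        steps += 1

start, goal = (8, 59), (11, 34)
print(manhattan(start, goal), bfs_distance(start, goal, width=13, height=221))  # 28 28
```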
What is unique about the Manhattan metric, though, is that it is not exactly amenable to calculations. For example, if you toss a ball in the air, and I ask you how far away it is from you at any point in time, you can write down an equation that describes its motion, and use the Euclidean metric to figure out the distance. Likewise, if a spacecraft sets off in orbit around the Earth and I ask you how far it has traveled, you could use the equations of Special and General Relativity to give me an answer for any arbitrary number of orbits.
However, if I tell you that Joe’s pizza has moved from the corner of 53rd Street and Broadway to 34th Street and 7th Avenue, and I ask you how much longer of a walk that is from your place on 108th and Riverside…well, you’d probably have to pull up Google Maps.
You’d probably have to pull up Google Maps…
See where I’m going?
## It’s not just about pizza
Certainly, Google Maps is neat, but it is also not a revolution in thought. The thing to understand is that the Manhattan metric is not just useful for getting to the nearest pizza place. It’s also useful for buying the pizza. How?
Consider the tomatoes in your pizza. How did they get there? Presumably, the owner of the pizza place has a supplier that brings him cans of tomato paste. The supplier probably gets them from a warehouse, and the warehouse gets them from a distributor. The distributor probably sources tomato paste from a factory that processes tomatoes, and the factory probably has warehouses and distributors of its own. Eventually, those distributors have to deal with the farmers who actually grow the tomatoes in the first place.
As you hand over your \$2.50 for that slice, that money will traverse from you to the pizza shop to the supplier, the warehouse, the distributor, the factory, the other distributors, and eventually to the farmer. This is the economic distance between you and the tomato on your pizza. It is a distance also measured by a Manhattan metric, where every actor is a node and each transaction is an edge. In fact, the entire world economy is just a graph ruled by the Manhattan metric.
That might seem pretty impressive, but it’s not just pizza or economies. The Manhattan metric covers so much more. Consider, for example, biology. What makes you different from every other person you pass on the street? If you were to ask that question to a biologist, they would tell you that it’s your genes. More specifically, it’s the unique differences, the mutations, in your genes.
DNA is composed of 4 bases that we abbreviate with the letters A, C, G, and T. As DNA is copied over and over and over again, occasionally biology makes a mistake. An A might be changed to a G, or a T to a C. If one of these “mistakes” ends up in a sperm or egg cell, it will result in an offspring with a new version of a gene. As new versions of genes come and go, and their frequency increases or decreases in a population over a long enough period of time, you might even end up with a new species.
How different, then, are any two people or any two species? That answer depends on how many mutations have occurred to get from the sequence of one individual’s genes to those of the other. It might be a little more difficult to see how the Manhattan metric applies here, but if we consider all of the possible DNA sequences of a gene as nodes, and the mutations that transform one sequence into another as the edges, then we again have a system ruled by the Manhattan metric.
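For two already-aligned sequences of the same length, counting only single-base substitutions, this graph distance reduces to a simple Hamming count; a tiny sketch with made-up sequences:

```python
def hamming(seq1, seq2):
    """Number of single-base substitutions separating two aligned sequences."""
    assert len(seq1) == len(seq2)
    return sum(a != b for a, b in zip(seq1, seq2))

print(hamming("ACGTACGT", "ACGAACGG"))  # these two sequences are 2 mutations apart
```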
Treating evolution as a graph like this allows us a peek at the coming revolution of thought. Certainly, treating the economy as a graph is intriguing, and might make one rich a lot sooner than worrying about evolution. Yet, there are only 7 billion people on the Earth. The number of economic interactions that can take place is astronomically large, but finite.
On the other hand, inside of every human there are 6 billion bases, and that’s just humans. The “sequence space” of biology (that is, all the possible DNA sequences of any length) is beyond astronomical. Theoretically, it is infinite, though there are practical concerns that prevent that from actually being the case. Even though there are more bacterial cells than human cells in every human, and more than a million bacteriophages in every drop of water in the oceans, life as it exists on Earth explores but a minuscule portion of sequence space.
This, actually, is the essence of the study of evolution. Using the Manhattan metric, we can determine the distances between species, which allows us to construct a “tree of life”, but what really interests evolutionary biologists are all the paths untaken. Much like many of the streets of Manhattan are one way, therefore limiting which edges one can actually traverse (if traveling by car), the edges in evolution’s sequence space have layered upon them the concept of “fitness”.
Exactly how fitness drives the path of life along the edges of sequence space, and in which directions it might drive life into the future, are major open questions in the study of evolution.
## Just walk
To be sure, graph theory is a young field; perhaps the youngest field of mathematics. Every day there are new advances, new techniques, new algorithms devised. Still, there remains the very real possibility that, for the majority of problems in graph theory, the only way one will be able to find solutions will be to walk the edges. Finally, we return to Google Maps, and computers in general.
If there is one thing that computers are really good at, it is walking the edges of a graph. Computers don’t get tired, never get bored, and can walk very fast. As we refine the study of graph theory, and tackle the breathtakingly large number of problems to which graph theory can be applied, it is almost certain that computers and computation will be there with us every step of the way. Looking out even further into the future, the promise of quantum computing is nothing more than a computer that can walk all edges (or at least very many edges) simultaneously.
So, after millennia of mathematics driven mostly by the sheer force of human logic, we stand on the cusp of a future where essential discoveries will only be possible through the combined application of human ingenuity and raw computational brute force. Just something to think about the next time you walk down the street and pull out your phone to get directions to Joe’s pizza.
|
Tag Info
8
Optimization: You've done a good job of making sure you're not processing more than you have to, but there are a couple of things you can do here to improve performance. Instead of looping through all of the MiniTOCs you've just inserted and updating each one individually, just update all of the fields in the entire document at once. Replace This: ' Update ...
8
No need for a loop at all; just use Word's built-in ability to find text based on its style and other formatting. Like so: Public Function FindHeading(strHeadLevel As String) As String Dim rng As Range 'set a range to the selection first so we can avoid ' the selection jumping around as we do our find Set rng = Selection.Range With ...
7
There are 5 steps you could take after which you could repost your code here. Use 'Option Explicit' in each module and address the errors that this will show. If you are able, install the RubberDuck addin. Use the 'RubberDuck' addin to do a code inspection and then address all the issues you find. Split your code into simple functions/subs. At the moment ...
7
Before diving into the code itself, I'm going to go over a couple "structural things". First, I'm not entirely sure what the utility of a tool like this actually is. When I'm writing code in the Visual Basic Editor, I get all of this great help like IntelliSense, syntax highlighting, an Object Browser, etc., etc. (and this is before using custom add-ins ...
7
First things first, let's clean everything up. Proper descriptive naming, proper validation variables, making the code clear and obvious about what's happening where Public Function GetHeadingFromStyle(ByVal styleToFind As String) As String '/ Iteratively checks the style of all paragraphs, starting at the current selection and working towards the ...
7
Maybe I'm deeply wrong, but please test: -- Almost all your variables are of type Variant, even the ones which could be Long, like counter. Define each variable with the correct type (it will be faster), in the form: Option Explicit Dim counter as Long Dim ColumnName1 as String, ColumnName2 as String -- You dont really use pIndex. Try: Dim ...
6
Thanks for using Rubberduck! As @Vogel612 mentioned, the parser error you're getting is a known issue in the v1.4.3 release. In short: foo = bar(x)(y) Accessing a subscript immediately after a function call isn't supported. An easy work-around is to introduce a local variable to hold the collection/array result: Dim result result = bar(x) foo = result(y) ...
6
I have two tiny things right now. The variable count is a system reserved name for something else, so I would avoid naming the variable that way. countComponents might be good. ElseIf (ModuleExists(Module) = False) Then This logic can be simplified by editing the ModuleExists function to return the opposite. ElseIf NoModuleExists(Module) Then A simple ...
6
General Notes First - In addition to replacing the magic numbers, I'd put your file path into a constant at the top of the module for easier maintenance: Private Const WORD_DOCUMENT_LOCATION As String = "C:\Users\<user>\Desktop\CyberAwareness.docx" Second - Get a reference to the Worksheet object at the start of the function instead of relying on ...
6
Using case-sensitive string matching like that is in my opinion the first nail in the coffin. Instead a case-insensitive dictionary would be a much better option. Example: var replacements = new Dictionary<string, Func<string>>(StringComparer.OrdinalIgnoreCase) { ["Plan"] = () => $"{DateTime.Now.Year} Q2 Plan", ["Project Name"] = () =...
5
You can replace inner foreach loops with LINQ, replace if-else with switch, replace string.Format with string interpolation, add const modifier for path variables if they will be constants, remove string[] args from Main if you won't use them. Result: using DocumentFormat.OpenXml.Packaging; using DocumentFormat.OpenXml.Wordprocessing; using System; namespace ...
5
Well that's great progress. I now have some more comments. 1 Brute force rather than precision: Your search for highlighted words is using a 'brute force' approach as you are examining every word rather than using the Word search function to search for words with a specific highlight. 2 Multiple variables rather than grouped data: Your are using multiple ...
5
In the Word document: Click on an underlined word. In the Home menu, in the Editing section, click Select > Select Text with Similar Formatting. Copy. Open Excel and paste. You may need to clean it up in Excel, but you don't need VBA to do this.
5
Security: Credentials in ENV. It is less than ideal to store your credentials in environment variables, as they may easily leak. phpinfo for example will print all environment variables. Of course, you don't want to allow access to phpinfo to just anyone either, but it may still leak, and not everyone who should be allowed to see phpinfo should be allowed ...
5
Performance: I see three major issues with the performance of this code. GUI operations are expensive and slow. Remove these lines, they're doing nothing but slowing you down: ActiveSheet.Range(currentCellToAdd).Activate ActiveWindow.ScrollRow = ActiveCell.Row You should avoid using activate and select anyway. They tend to lead to nasty bugs. Don't ...
5
As this is a code review request, some of my comments may be considered "best practices" by me and not by others (though most of my habits I've picked up from sites and reviews such as this one). Your code is successful already because it accomplishes the task for which you have designed. Most of the improvements I can suggest are in terms of software design ...
4
Instead of decoding each field within the CSV as CP1252, you should open the entire file that way. I think that you should approach this more as a templating problem than a document-generation problem. Templating is a challenge that has been solved many times before — though mainly generating HTML rather than OOXML. Authoring OOXML directly looks like it ...
4
Answers to Question 1: No, this is now Hungarian-free™. The article that was referred to in your prior post differentiated between "Apps Hungarian" and "Systems Hungarian", and your previous code was using the latter (i.e. strName). You were basically indicating that your Function SearchWordDoc(strPath, strName) was expecting String's as parameters. The current ...
3
This post is tagged with object-oriented, but what I'm seeing is very ironically procedural code. It doesn't matter that this is "just a quick tool": we're here to write professional, quality code that is easy to read and maintain, performs well, and correctly. Comintern has given great feedback and highlighted a number of bugs and edge cases - which is ...
3
It's not clear why you would need to p/invoke the native MessageBoxU function, given you're not using it anywhere, correctly preferring the language's own MsgBox wrapper. Remove all the dead/commented-out code if you're not using it - otherwise it obscures the intent of the code. As far as I know, Universal_Converter_v1.0 is only a valid VBA identifier if ...
3
Give this a shot. I was able to identify 30,000 underlined words out of a total 140,000 words in about 25 seconds. I also posted this on the SO question. This might be a more flexible approach if you want to add various criteria to the search. To provide some more context as to how this works: this subroutine works by iterating over each StoryRange, e.g. ...
3
Answering my own question to share with the community the height of stupidity in my code, and how, over 1 or 2 sleepless nights of testing, the problem was gradually reduced to something workable. First, the height of stupidity is the line If i < .Paragraphs.Count Then even after knowing well that, while working with such a giant document, interaction with the document is ...
2
The name Update_Word_DocMs_In_Folder isn't following the PascalCase convention that's otherwise observed by your code; underscores should generally be avoided in procedure names in VB6/VBA (especially for Public members), because underscores have a very specific meaning in identifier names: [Interface]_[Member] [EventSource]_[EventName] So ...
2
Naming variables: Give your variables meaningful names. Characters are free and when you know what something is by looking at its name, it's so much easier to follow the code! A few examples: rngP1 might be better off as something like firstParagraph. You'll notice I said "to hell!" with that hungarian notation - it's unneeded! Here: LastRow1 As Long, ...
2
First of all, kudos for naming all these controls! You have a lot of duplication going on; extract functionality into more specialized functions/procedures. For example, this: If (AllDocumentsPostedCheckbox.Value = True) Then Section8Complete.Caption = "Complete": Section8Complete.BackColor = RGB(0, 255, 0): DocInputBy.Caption = UpgradeTechnic.Text ...
2
ScreenUpdating: In order for you to interact with a spreadsheet (e.g. moving the mouse, selecting a cell), the display on your monitor is re-drawn something like 30+ times per second. This takes a lot of processing power. If you set Application.ScreenUpdating = False then, while it is false, the screen will not be updated, and your code will run much ...
2
Shouldn't: #region Find/Replace parameters ... $findWrap = [Microsoft.Office.Interop.Word.WdReplace]::wdReplaceAll $format = $false $replace = [Microsoft.Office.Interop.Word.WdFindWrap]::wdFindContinue #endregion be: #region Find/Replace parameters ... $replace = [Microsoft.Office.Interop.Word.WdReplace]::wdReplaceAll $format = $false $findWrap = [...
2
I would try to use bulkcopy to load all of the IDs and their CachedText at once into a staging table in Azure, and then do a single update on your document table. CREATE TABLE document (docKey BIGINT IDENTITY(1, 1) PRIMARY KEY, CachedText NVARCHAR(MAX), id INT ); CREATE TABLE document_stage (CachedText NVARCHAR(MAX), id INT ); ...
1
You're doing a lot with .Selection. By default, that is going to be slow. Doing things behind the scenes is always faster. Either way, working with selection isn't the best way to go. Either pass the document to the sub or prompt the user. Is the selection a document, text, body, header, title, section? Be explicit. The way you use .Find makes me think you'...
1
COM Automation (which is what you are using) is always going to be slow. There's not much you can do about that except to try find new ways to do what you want with as few operations as possible. An alternative you could investigate is the Open XML SDK. I've never tried it myself, but it is supposed to be a lot faster than COM Automation. The Open XML SDK ...
|
# OS-T: 2020 Increase Natural Frequencies of an Automotive Splash Shield with Ribs
In this tutorial you will generate a preliminary design of stiffeners in the form of ribs for an automotive splash shield. The objective is to increase the natural frequency of the first normal mode using topology to identify locations for ribs in the designable region.
The optimization problem for this tutorial is stated as:
Objective
Maximize frequency of mode number 1.
Constraint
Upper bound constraint of 40% for the designable volume.
Design Variables
Density of each element in the design space.
## Launch HyperMesh and Set the OptiStruct User Profile
1. Launch HyperMesh.
The User Profile dialog opens.
2. Select OptiStruct and click OK.
This loads the user profile. It includes the appropriate template, macro menu, and import reader, paring down the functionality of HyperMesh to what is relevant for generating models for OptiStruct.
## Import the Model
1. Click File > Import > Solver Deck.
2. For the File type, select OptiStruct.
3. Select the Files icon .
A Select OptiStruct file browser opens.
4. Select the sshield_opti.fem file you saved to your working directory. Refer to Access the Model Files.
5. Click Open.
6. Click Import, then click Close to close the Import tab.
## Apply Loads and Boundary Conditions
1. Create the Constraints load collector.
1. In the Model Browser, right-click and select Create > Load Collector from the context menu.
A default load collector displays in the Entity Editor.
2. For Name, enter constraints.
3. Click Color and select a color from the color palette.
4. Set Card Image to None.
2. Create the EIGRL load collector.
1. In the Model Browser, right-click and select Create > Load Collector from the context menu.
A default load collector displays in the Entity Editor.
2. For Name, enter EIGRL.
3. Click Color and select a color from the color palette.
4. Set Card Image to EIGRL.
5. For V2, enter 3000.
6. For ND, enter 2.
This load collector defines the data needed to perform a real eigenvalue analysis (vibration or buckling) and instructs the solver to calculate the first two modes within a frequency range of 0 to 3000 Hz.
### Create Constraints
In this step you will create constraints at bolt locations.
1. From the Model Browser, Load Collectors folder, right-click on Constraints and select Make Current from the context menu.
2. From the Analysis page, click constraints.
3. Select the Create subpanel.
4. Double-click nodes and select by id, then enter 1075, 1076 in the id= field.
5. Constrain all dofs.
Dofs with a check will be constrained, while dofs without a check will be free. Dofs 1, 2, and 3 are x, y, and z translation degrees of freedom. Dofs 4, 5, and 6 are x, y, and z rotational degrees of freedom.
6. Click create.
Two constraints are now created. Constraint symbols (triangles) appear at the selected nodes. The number 123456 is displayed beside the constraint symbol, indicating that all dofs are constrained.
### Create a Load Step
1. In the Model Browser, right-click and select Create > Load Step from the context menu.
A default load step displays in the Entity Editor.
2. For Name, enter frequencies.
3. Set Analysis type to normal modes.
4. Define SPC.
1. For SPC, click Unspecified > Loadcol.
2. In the Select Loadcol dialog, select constraints and click OK.
5. Define METHOD(STRUCT).
1. For METHOD(STRUCT), click Unspecified > Loadcol.
2. In the Select Loadcol dialog, select EIGRL and click OK.
## Submit the Job
1. From the Analysis page, click the OptiStruct panel.
2. Click save as.
3. In the Save As dialog, specify location to write the OptiStruct model file and enter sshield_analysis for filename.
For OptiStruct input decks, .fem is the recommended extension.
4. Click Save.
The input file field displays the filename and location specified in the Save As dialog.
5. Set the export options toggle to all.
6. Set the run options toggle to analysis.
7. Set the memory options toggle to memory default.
8. Clear the options field.
9. Click OptiStruct to launch the OptiStruct job.
If the job is successful, new results files should be in the directory where the sshield_analysis.fem was written. The sshield_analysis.out file is a good place to look for error messages that could help debug the input deck if any errors are present.
The default files written to the directory are:
sshield_analysis.html
HTML report of the analysis, providing a summary of the problem formulation and the analysis results.
sshield_analysis.out
OptiStruct output file containing specific information on the file setup, the setup of your optimization problem, estimates for the amount of RAM and disk space required for the run, information for each of the optimization iterations, and compute time information. Review this file for warnings and errors.
sshield_analysis.h3d
HyperView binary results file.
sshield_analysis.res
HyperMesh binary results file.
sshield_analysis.stat
Summary, providing CPU information for each step during analysis process.
sshield_analysis.mvw
HyperView session file.
sshield_analysis_frames.html
HTML file used to post-process the .h3d with HyperView Player using a browser. It is linked with the _menu.html file.
sshield_analysis_menu.html
HTML file used to post-process the .h3d with HyperView Player using a browser.
## View the Results
Eigenvector results are output from OptiStruct for a normal modes analysis by default. This section describes how to view the results in HyperView.
1. From the OptiStruct panel, click HyperView.
HyperView launches inside of page 2 of HyperMesh Desktop, and displays the sshield_analysis.mvw session file, which is linked with the sshield_analysis.h3d file.
2. From the Animation toolbar, set the animation mode to (Modal).
3. In the Results browser, click Mode 1.
The browser shows the first two natural frequencies calculated between 0 and 3000 Hz.
4. Define deformed settings.
1. From the Results toolbar, click to open the Deformed panel.
2. Set Result type to Eigen mode (v).
3. Set Scale to Model Units.
4. Set Type to Uniform.
5. In the Value field, enter 10.
6. Click Apply.
5. Animate the model.
1. From the Animation toolbar, click .
2. Move the Max Frame Rate slider between 60 and 1 to increase or decrease the animation speed.
Tip: You can also change the default values for Angular Increment to refine your animation.
3. Click to start the animation.
An animation of the mode shape should be seen for the first frequency.
4. Click to stop the animation.
6. On the Page Control toolbar, click the Page Delete icon to delete the HyperView page.
## Set Up the Optimization
### Create Topology Design Variables
1. From the Analysis page, click optimization.
2. Click topology.
3. Select the create subpanel.
4. In the desvar= field, enter shield.
5. Set type: to PSHELL.
6. Using the props selector, select design.
7. For base thickness, enter 0.300.
8. Click create.
A topology design space definition, shield, has been created. All elements referring to the design property collector (elements organized into the "design" component collector) are now included in the topology design space. The thickness of these shells can vary between 0.300 (base thickness) and the maximum thickness defined by the T (thickness) field on the PSHELL card.
The object of this exercise is to determine where to locate ribs in the designable region. Therefore, a non-zero base thickness is defined, which is the original thickness of the shells. The maximum thickness, which is defined by the T field on the PSHELL card, should be the allowable depth of the rib.
Currently, the T field on the PSHELL card is still set to 0.300 (the original shell thickness). You will change this to 1.0 so that the ribs of a maximum height of 0.7 units can be obtained by the topology optimization.
9. Click return.
10. Edit the thickness of the design property.
1. In the Model Browser, Properties folder, click design.
2. In the Entity Editor, T field, enter 1.000.
### Create Optimization Responses
1. From the Analysis page, click optimization.
2. Click Responses.
3. Create the volume fraction response.
1. In the responses= field, enter volfrac.
2. Below response type, select volumefrac.
3. Set regional selection to total and no regionid.
4. Click create.
4. Create the frequency response.
1. In the responses= field, enter freq1.
2. Below response type, select frequency.
3. For Mode Number, enter 1.0.
4. Click create.
A response, freq1, is defined for the frequency of the first mode extracted.
### Define the Objective Function
1. Click the objective panel.
2. Verify that max is selected.
3. Click response= and select freq1.
4. Using the loadsteps selector, select frequencies.
5. Click create.
6. Click return twice to exit the Optimization panel.
### Define Constraints
A response defined as the objective cannot be constrained. In this case, you cannot constrain the response freq1. An upper bound constraint needs to be defined for the response volfrac.
1. Click dconstraints.
2. In the constraint= field, enter volume_constr.
3. Check the box next to upper bound, then enter 0.40.
4. Click response = and select volfrac.
5. Click create.
A constraint is defined on the response volfrac. The constraint is an upper bound with a value of 0.40. The constraint applies to all subcases as the volumefrac response is a global response. In this step you are allowing the topology optimization to use up to 40% of the designable volume, with which it can come up with ribs.
## Run the Optimization
1. From the Analysis page, click OptiStruct.
2. Click save as.
3. In the Save As dialog, specify location to write the OptiStruct model file and enter sshield_optimization for filename.
For OptiStruct input decks, .fem is the recommended extension.
4. Click Save.
The input file field displays the filename and location specified in the Save As dialog.
5. Set the export options toggle to all.
6. Set the run options toggle to optimization.
7. Set the memory options toggle to memory default.
8. Click OptiStruct to run the optimization.
The following message appears in the window at the completion of the job:
OPTIMIZATION HAS CONVERGED.
FEASIBLE DESIGN (ALL CONSTRAINTS SATISFIED).
OptiStruct also reports error messages if any exist. The file sshield_optimization.out can be opened in a text editor to find details regarding any errors. This file is written to the same directory as the .fem file.
9. Click Close.
The default files that get written to your run directory include:
sshield_optimization.mvw
HyperView session file.
sshield_optimization.HM.comp.cmf
HyperMesh command file used to organize elements into components based on their density result values. This file is only used with OptiStruct topology optimization runs.
sshield_optimization.out
OptiStruct output file containing specific information on the file setup, the setup of the optimization problem, estimates for the amount of RAM and disk space required for the run, information for all optimization iterations, and compute time information. Review this file for warnings and errors that are flagged from processing the sshield_optimization.fem file.
sshield_optimization.sh
Shape file for the final iteration. It contains the material density, void size parameters and void orientation angle for each element in the analysis. This file may be used to restart a run.
sshield_optimization.hgdata
HyperGraph file containing data for the objective function, percent constraint violations, and constraint for each iteration.
sshield_optimization.oss
OSSmooth file with a default density threshold of 0.3. You may edit the parameters in the file to obtain the desired results.
sshield_optimization.stat
Contains information about the CPU time used for the complete run and also the break-up of the CPU time for reading the input deck, assembly, analysis, convergence, and so on.
sshield_optimization.his_data
The OptiStruct history file containing iteration number, objective function values and percent of constraint violation for each iteration.
sshield_optimization.HM.ent.cmf
HyperMesh command file used to organize elements into entity sets based on their density result values. This file is only used with OptiStruct topology optimization runs.
sshield_optimization.html
HTML report of the optimization, giving a summary of the problem formulation and the results from the final iteration.
sshield_optimization_frame.html
HTML file used to post-process the .h3d with HyperView Player using a browser. It is linked with the _menu.html file.
sshield_optimization_menu.html
HTML file used to post-process the .h3d with HyperView Player using a browser.
sshield_optimization_des.H3D
HyperView binary results file that contains: Density results from topology optimizations, Shape results from topography or shape optimizations and Thickness results from size and topology optimizations.
sshield_optimization_s1.H3D
HyperView binary results file that contains: Displacement results from linear static analysis, Element strain energy results from normal mode analysis and Stress results from linear static analysis, etc.
## View the Results
With topology optimization of shell elements, Element Density and Element Thickness results are output from OptiStruct for all iterations. In addition, Eigenvector results are output for the first and last iterations by default. This section describes how to view those results in HyperView.
1. From the OptiStruct panel, click HyperView.
HyperView launches inside of HyperMesh Desktop, and loads the session file sshield_optimization.mvw that is linked with the sshield_optimization_des.h3d and the sshield_optimization_s1.h3d.
2. From the Results toolbar, click to open the Contour panel.
3. Set the Result type: to Element Densities(s).
4. Click Apply.
5. From the Results Browser, select the last iteration.
Each element of the model is assigned a legend color, indicating the density of each element for the selected iteration.
Analyze the following:
Have most of your elements converged to a density close to 1 or 0?
If there are many elements with intermediate densities, the discrete parameter may need to be adjusted. The DISCRETE parameter (set in the Opti control panel on the Optimization panel) can be used to push elements with intermediate densities towards 1 or 0 so that a more discrete structure is given.
Regions that need reinforcement tend towards a density of 1.0. Areas that do not need reinforcement tend towards a density of 0.0.
Is the max = field showing 1.0e+00?
In this case, it is.
If it is not, the optimization has not progressed far enough. Allow more iterations and/or decrease the OBJTOL parameter (set in the Opti control panel).
If adjusting the DISCRETE parameter, incorporating a checkerboard control, refining the mesh, and/or decreasing the objective tolerance does not yield a more discrete solution (none of the elements progress to a density value of 1.0), you may want to review the setup of the optimization problem. Some of the defined constraints may not be attainable for the given objective function (or vice versa).
Where would you place your ribs?
6. On the Page Control toolbar, click the Page Delete icon to delete the HyperView page.
## Set Up the Final Normal Modes Analysis
Based on the topology results obtained above, a number of ribs were added to the model. The new design sshield_newdesign.fem, which includes these ribs, can be found in the optistruct.zip file.
### Delete the Current Model
2. From the menu bar, click File > New > Model.
3. Click Yes to clear the current session.
Deleting the current model clears the current HyperMesh database. Information stored in .hm files on your disk is not affected.
### Import the Model
1. Click File > Import > Solver Deck.
2. For the File type, select OptiStruct.
3. Select the Files icon .
A Select OptiStruct file browser opens.
4. Select the sshield_newdesign.fem file you saved to your working directory. Refer to Access the Model Files.
5. Click Open.
6. Click Import, then click Close to close the Import tab.
## Submit the Job
1. From the Analysis page, click the OptiStruct panel.
2. Click save as.
3. In the Save As dialog, specify location to write the OptiStruct model file and enter sshield_newdesign for filename.
For OptiStruct input decks, .fem is the recommended extension.
4. Click Save.
The input file field displays the filename and location specified in the Save As dialog.
5. Set the export options toggle to all.
6. Set the run options toggle to analysis.
7. Set the memory options toggle to memory default.
8. Clear the options field.
9. Click OptiStruct to launch the OptiStruct job.
If the job is successful, new results files should be in the directory where the sshield_newdesign.fem was written. The sshield_newdesign.out file is a good place to look for error messages that could help debug the input deck if any errors are present.
The default files written to the directory are:
sshield_newdesign.html
HTML report of the analysis, providing a summary of the problem formulation and the analysis results.
sshield_newdesign.out
OptiStruct output file containing specific information on the file setup, the setup of your optimization problem, estimates for the amount of RAM and disk space required for the run, information for each of the optimization iterations, and compute time information. Review this file for warnings and errors.
sshield_newdesign.h3d
HyperView binary results file.
sshield_newdesign.res
HyperMesh binary results file.
sshield_newdesign.stat
Summary file providing CPU information for each step of the analysis process.
sshield_newdesign.mvw
HyperView session file.
sshield_newdesign_frames.html
HTML file used to post-process the .h3d with HyperView Player using a browser. It is linked with the _menu.html file.
sshield_newdesign_menu.html
HTML file used to post-process the .h3d with HyperView Player using a browser.
## View the Results
1. From the OptiStruct panel, click HyperView.
This launches HyperView in the HyperMesh Desktop and loads the file sshield_newdesign.mvw that is linked with the file sshield_newdesign.h3d.
2. Set the animation mode to (Modal).
3. In the Results Browser, select Mode 1.
4. Click to open the Deformed panel.
5. Make or verify the following settings in the Deformed panel.
Result Type
Eigen mode (v)
Scale
Model Units
Type
Uniform
Value
10
6. Click Apply.
7. Click to start the animation.
An animation of the mode shape should be seen for the first frequency.
8. Click again to stop the animation.
## Compare Results
What is the percentage increase in frequency for your first mode (sshield_analysis.fem versus sshield_newdesign)?
You have seen that the frequency of the structure for the first mode has increased from 43.63 Hz to 84.88 Hz.
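A quick check of the percentage: (84.88 - 43.63) / 43.63 ≈ 0.945, an increase of roughly 95%.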
How much mass has been added to the part (check the mass of your ribs in the mass calc panel in the Tool page)?
What is the percentage increase in mass?
|
## Tutorial on Riemannian Geometry for Scientific Visualization (2021)
### Abstract
This tutorial introduces the most important basics of Riemannian geometry and related concepts with a specific focus on applications in scientific visualization. The main concept in Riemannian geometry is the presence of a Riemannian metric on a differentiable manifold, comprising a second-order tensor field that defines an inner product in each tangent space that varies smoothly from point to point. Technically, the metric is what allows defining and computing distances and angles in a coordinate-independent manner. However, even more importantly, it in a sense is really the major structure (on top of topological considerations) that defines the space where scientific data, such as scalar, vector, and tensor fields live.
However, the concept of a metric, and crucial related concepts such as connections and covariant derivatives, are not often used explicitly in visualization. In contrast to concepts of differential topology, which have been used extensively in visualization, for example in scalar and vector field topology, we believe that concepts from Riemannian geometry have been underrepresented in the visualization literature. One reason for this might be that most visualization techniques are developed for scalar, vector, or tensor fields given in Euclidean space $$\mathbb{R}^2$$ or $$\mathbb{R}^3$$, and data given on curved surfaces are usually treated explicitly through their embedding in $$\mathbb{R}^3$$. However, the presence of a Riemannian metric on a manifold has very important implications even for data given in Euclidean space, for example regarding the physical meaning of visualizations as well as for the use of non-Cartesian coordinates. Therefore, considering the metric tensor field explicitly provides several important benefits.
### Objectives
In this tutorial, we try to particularly highlight the additional insight that can be gained from employing concepts from Riemannian geometry in scientific visualization. However, although we believe that insight is the most important benefit to be gained from using these concepts, we also discuss computational advantages. In addition to Riemannian metrics, we also introduce the most important related concepts from modern, coordinate-free differential geometry, in particular general (non-Cartesian) tensor fields and differential forms, smooth mappings between manifolds, Lie derivatives, and Lie groups and Lie algebras. Throughout the tutorial, we use several examples from the scientific visualization literature, dealing with scalar, vector, or tensor fields, respectively, and highlight their implicit or explicit connections to Riemannian geometry.
### Motivation and Audience
While there exist a lot of general mathematical textbooks and courses on differential geometry and Riemannian geometry, we are not aware of any course that specifically targets visualization researchers and practitioners. Furthermore, the concepts—and in particular the emphasis—most relevant and important for visualization techniques are hard to extract from standard geometry texts, which often cover a large amount of advanced material. At the same time, time-dependent data, such as unsteady vector fields, are not treated in sufficient detail in most geometry texts. This tutorial aims to start filling this gap for researchers and practitioners in visualization, on an intermediate level.
|
# GMAT Question of the Day (Feb 20): Probability and Critical Reasoning
- Feb 20, 02:00 AM Comments [0]
Math (PS)
In preparation for the Olympics, a group of five women is participating in a 400 meter race. What is the probability that either Jenny or Sally will win this race?
(A) $\frac{1}{5}$
(B) $\frac{4}{15}$
(C) $\frac{2}{5}$
(D) $\frac{1}{2}$
(E) $\frac{3}{5}$
Question Discussion & Explanation
Verbal (CR)
Columnist: Almost anyone can be an expert, for there are no official guidelines determining what an expert must know. Anybody who manages to convince some people of his or her qualifications in an area—whatever those may be—is an expert.
The columnist’s conclusion follows logically if which one of the following is assumed?
(A) Almost anyone can convince some people of his or her qualifications in some area.
(B) Some experts convince everyone of their qualification in almost every area.
(C) Convincing certain people that one is qualified in an area requires that one actually be qualified in that area.
(D) Every expert has convinced some people of his or her qualifications in some area.
(E) Some people manage to convince almost everyone of their qualifications in one or more areas.
Question Discussion & Explanation
|
## Koszul and de Rham complexes
Last update: 22 June 2012
## Example 1: Singular homology
Let $e_0,\dots,e_{N-1}$ be the standard basis of $\mathbb{R}^N$. The standard $n$-simplex is $$\Delta_n=\{\,x_0e_0+\cdots+x_ne_n \mid x_i\ge 0,\ x_0+\cdots+x_n\le 1\,\},$$ with faces defined by $\iota_j\colon \Delta_{n-1}\longrightarrow \Delta_n$, where $$\iota_j(x_0e_0+\cdots+x_{n-1}e_{n-1})=x_0e_0+\cdots+x_{j-1}e_{j-1}+x_je_{j+1}+\cdots+x_{n-1}e_n.$$
Let $X$ be a topological space and let $\mathbb{A}$ be a ring. The singular homology of $X$ is the homology $H_i(X;\mathbb{A})$ of the complex $C(X;\mathbb{A})$, where $C_n(X;\mathbb{A})$ is the free $\mathbb{A}$-module with basis $\{e_f \mid f\colon\Delta_n\to X\ \text{continuous}\}$ and the differential is given by $$d(e_f)=\sum_{j=0}^{n}(-1)^j\,e_{f\circ\iota_j}.$$ If $Y$ is a subspace of $X$ let $C(X,Y;\mathbb{A})=C(X;\mathbb{A})/C(Y;\mathbb{A})$, so that $$0\longrightarrow C(Y;\mathbb{A})\longrightarrow C(X;\mathbb{A})\longrightarrow C(X,Y;\mathbb{A})\longrightarrow 0$$ is an exact sequence of complexes.
## Example 2: Cell complex homology
Let ${B}_{n}$ be the $n$-dimensional open ball in ${ℝ}^{n}$.
Let $X$ be a Hausdorff topological space. A cellular decomposition of $X$ is a sequence $$\emptyset\subseteq X_0\subseteq X_1\subseteq\cdots\subseteq X_N=X$$ of closed subspaces of $X$ such that, for each $n$, $X_n-X_{n-1}$ has a finite number of connected components and, for each connected component $C$ of $X_n-X_{n-1}$, there is a homeomorphism $B_n\xrightarrow{\ \sim\ }C$ which extends to a continuous map $\overline{B_n}\longrightarrow X$.
The cellular homology is the homology of the complex $$\cdots\longrightarrow\Gamma_n\xrightarrow{\ d_n\ }\Gamma_{n-1}\longrightarrow\cdots$$ given by $\Gamma_n=H_n(X_n,X_{n-1})$, with the connecting homomorphism $d_n\colon H_n(X_n,X_{n-1})\longrightarrow H_{n-1}(X_{n-1},X_{n-2})$ coming from the exact sequence $$0\longrightarrow C(X_{n-1},X_{n-2})\longrightarrow C(X_n,X_{n-2})\longrightarrow C(X_n,X_{n-1})\longrightarrow 0.$$ Then the singular homology of $X$ is isomorphic to the cellular homology, $H_n(X)\simeq H_n(\Gamma)$.
## The Koszul complex and the de Rham complex
Let $\mathbb{A}$ be a commutative ring, $L$ an $\mathbb{A}$-module and $u\colon L\longrightarrow\mathbb{A}$ an $\mathbb{A}$-linear map. The Koszul complex is $$\cdots\longrightarrow\Lambda^{i+1}(L)\xrightarrow{\ d_{i+1}\ }\Lambda^{i}(L)\xrightarrow{\ d_{i}\ }\Lambda^{i-1}(L)\longrightarrow\cdots$$ where $d$ is the unique antiderivation of $\Lambda(L)$ (so $d(x\wedge y)=d(x)\wedge y+(-1)^{\deg x}\,x\wedge d(y)$) that extends $u\colon L\to\mathbb{A}$. Explicitly, $$d(\ell_1\wedge\cdots\wedge\ell_n)=\sum_{i=1}^{n}(-1)^{i+1}u(\ell_i)\,\ell_1\wedge\cdots\wedge\ell_{i-1}\wedge\ell_{i+1}\wedge\cdots\wedge\ell_n.$$
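For example, on a wedge of two elements the explicit formula gives $$d\bigl(d(\ell_1\wedge\ell_2)\bigr)=d\bigl(u(\ell_1)\,\ell_2-u(\ell_2)\,\ell_1\bigr)=u(\ell_1)u(\ell_2)-u(\ell_2)u(\ell_1)=0,$$ using that $d$ is $\mathbb{A}$-linear and $d(\ell)=u(\ell)$ for $\ell\in L$; the same computation extends to longer wedges, so $d\circ d=0$.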
#### Example
Let $\mathbb{K}$ be a commutative ring, $L$ a $\mathbb{K}$-module and let $\mathbb{A}=S(L)$. The Koszul complex for the $S(L)$-module $S(L)\otimes_{\mathbb{K}}L$ with linear form $$u\colon S(L)\otimes_{\mathbb{K}}L\longrightarrow S(L),\qquad f\otimes x\mapsto fx,$$ is $$\Lambda\bigl(S(L)\otimes_{\mathbb{K}}L\bigr)=S(L)\otimes_{\mathbb{K}}\Lambda(L)$$ and is the direct sum of the complexes $$0\longrightarrow S^0L\otimes_{\mathbb{K}}\Lambda^nL\longrightarrow S^1L\otimes_{\mathbb{K}}\Lambda^{n-1}L\longrightarrow\cdots\longrightarrow S^nL\otimes_{\mathbb{K}}\Lambda^0L\longrightarrow 0$$ over $n\in\mathbb{Z}_{\ge 0}$, with $$d\bigl((x_1\cdots x_p)\otimes(y_1\wedge\cdots\wedge y_q)\bigr)=\sum_{i=1}^{q}(-1)^{i+1}\,y_ix_1\cdots x_p\otimes(y_1\wedge\cdots\wedge y_{i-1}\wedge y_{i+1}\wedge\cdots\wedge y_q).$$ If $L$ is flat or $\mathbb{A}$ is a $\mathbb{Q}$-algebra these complexes are exact and $$\sum_{i=0}^{n}(-1)^i\,[S^i(L)]\,[\Lambda^{n-i}(L)]=0.$$
#### Example 2
Let $\mathbb{K}$ be a commutative ring, $M$ a $\mathbb{K}$-module and $x_1,\dots,x_\ell$ a set of commuting endomorphisms of $M$. Then $M$ is a module for the ring $\mathbb{A}=\mathbb{K}[x_1,\dots,x_\ell]$, and we take $L=\mathbb{A}^{\oplus\ell}=\mathbb{A}\text{-span}\{e_1,\dots,e_\ell\}$ with the linear map $$u\colon L\longrightarrow\mathbb{A},\qquad e_i\mapsto x_i.$$ If $C^p(M)=\{\text{alternating maps from }\{1,\dots,\ell\}^p\text{ to }M\}$ then $$C^p(M)\simeq\operatorname{Hom}_{\mathbb{A}}(\Lambda^p(L),M)\simeq\operatorname{Hom}_{\mathbb{K}}(\Lambda^p(\mathbb{K}^\ell),M)\qquad\text{and}\qquad C_p(M)\simeq M\otimes_{\mathbb{K}}\Lambda^p(\mathbb{K}^\ell),$$ and this gives a double complex $$\cdots\rightleftharpoons C^{p-1}(M)\;\rightleftharpoons\;C^{p}(M)\;\rightleftharpoons\;C^{p+1}(M)\rightleftharpoons\cdots$$ with $$(\partial_p m)(\alpha_1,\dots,\alpha_{p+1})=\sum_{j=1}^{p+1}(-1)^{j+1}x_{\alpha_j}\,m(\alpha_1,\dots,\alpha_{j-1},\alpha_{j+1},\dots,\alpha_{p+1})$$ and $(\partial^p m)(\alpha_1,\dots,\alpha_{p+1})=$
The homology and cohomology of the complex are denoted ${H}_{r}\left({x}_{1},\dots ,{x}_{\ell };M\right)$ and ${H}^{r}\left({x}_{1},\dots ,{x}_{\ell };M\right)$.
Let $\mathbb{A}$ be a commutative ring and let $M$ be an $\mathbb{A}$-module. A sequence $x_1,\dots,x_\ell$ of elements of $\mathbb{A}$ is completely secant for $M$ if $H_r(x_1,\dots,x_\ell;M)=0$ for all $r\in\mathbb{Z}_{>0}$. An $M$-regular sequence is a sequence $x_1,\dots,x_\ell$ of elements of $\mathbb{A}$ such that $$\frac{M}{x_1M+\cdots+x_{i-1}M}\longrightarrow\frac{M}{x_1M+\cdots+x_{i-1}M},\qquad y\mapsto x_iy,$$ is injective for $i=1,2,\dots,\ell$.
If $x_1,\dots,x_\ell$ is an $M$-regular sequence then $x_1,\dots,x_\ell$ is completely secant for $M$. If $x_1,\dots,x_\ell$ is an $\mathbb{A}$-regular sequence and $I=\langle x_1,\dots,x_\ell\rangle$ then $I/I^2$ is free of rank $\ell$ over $\mathbb{A}/I$ (see [Lang, XXI Sec. 4]).
Example. The special case $M=𝕂\left[{x}_{1},\dots ,{x}_{n}\right]$ with commuting endomorphisms $\frac{\partial \phantom{x}}{\partial {x}_{1}},\dots ,\frac{\partial \phantom{x}}{\partial {x}_{n}}$ is the de Rham complex of $𝕂\left[{x}_{1},\dots ,{x}_{n}\right]$.
### de Rham cohomology
Let $A$ be a commutative algebra. The de Rham cohomology of $A$ is the cohomology of the complex $$\cdots\longrightarrow\Omega^{i-1}(A)\xrightarrow{\ d_{i-1}\ }\Omega^{i}(A)\xrightarrow{\ d_{i}\ }\Omega^{i+1}(A)\longrightarrow\cdots$$ where the $p$-differential forms of $A$ are $$\Omega^p(A)=\Lambda^p\bigl(\Omega^1(A)\bigr),\qquad\Omega^1(A)=I/I^2,\qquad I=\ker(A\otimes A\to A),$$ and $d$ is the unique antiderivation of degree 1 which extends $$d\colon A\longrightarrow\Omega^1(A),\qquad x\mapsto x\otimes 1-1\otimes x,$$ and satisfies $d^2=0$.
Example. If $A=\mathbb{F}[x_1,\dots,x_n]$ then $\Omega^1(A)$ is the free $A$-module with basis $dx_1,\dots,dx_n$, $\Omega^p(A)$ is free with basis $\{dx_{i_1}\wedge\cdots\wedge dx_{i_p}\mid i_1<\cdots<i_p\}$, and the complex above is the usual de Rham complex of polynomial differential forms.
Let $M$ be an $A$-module. A connection on $M$ is an $𝔽$-linear map $\nabla :M⟶M{\otimes }_{A}{\Omega }^{1}\left(A\right)$ such that $\nabla \left(fm\right)=f\nabla \left(m\right)+m\otimes df$, for $f\in A$, $m\in M$.
## Notes and References
These notes are originally from http://researchers.ms.unimelb.edu.au/~aram@unimelb/MathGlossary/index.html the file http://researchers.ms.unimelb.edu.au/~aram@unimelb/MathGlossary/Koszul_deRham.xml
## References
[BouA] N. Bourbaki, Algebra I, Chapters 1-3, Elements of Mathematics, Springer-Verlag, Berlin, 1990.
[BouL] N. Bourbaki, Groupes et Algèbres de Lie, Chapitre IV, V, VI, Eléments de Mathématique, Hermann, Paris, 1968.
|
# 16. Molding and Casting¶
This week’s objective is to design and produce a mold for casting, then try out some casting materials. The mold should not be 3D printed, as a CNC machine can reach a much higher level of detail with the right settings.
When looking for an idea, I stumbled upon this Pinterest post of a little snail that you can attach to the side of your tea cup to hold the tea bag's string. Being a huge fan of tea, I loved the idea and decided to create my own version.
My goal is to cast a silicone snail, using a two-part wax mold machined on my home CNC.
In the group assignment, we looked at the different types of casting material available in our lab.
## 2-step process¶
As my final casting is soft silicone, I can cast it directly in a wax mold. For an example of 2-step casting with silicone and resin, please see the second part of this page.
### Design¶
CAD software such as Fusion 360 is not well suited to drawing organic shapes. I picked Blender for the whole modeling, knowing that I could import the resulting 3D object later on.
I start with a simple shell made with a Bezier curve serving as a guide for a mesh using the Curve modifier.
I then add a body and some eyes. Notice how the body has the shape of the side of a cup. When slicing the object in two, I have to make sure that all angles can be achieved by the CNC. Because of this limitation, the eyes are mostly straight.
I include two vents for pouring the silicone and evacuating air. I also add registration pins. The tricky thing is to combine them additively for one half of the mold, and subtractively for the other half.
The resulting .stl file is imported in Fusion 360 for the CNC toolpath design.
I plan to use a 3mm flat end mill for the rough stage, then a 1mm end mill for the details.
I use a roughing step-down of 0.5 mm for the 3 mm bit and 0.35 mm for the 1 mm bit (a quick feed-per-tooth check follows the list below). The other settings are:
• spindle speed: 9000 rpm.
• cutting feedrate: 900 mm/min.
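As a quick sanity check on these numbers, the feed per tooth can be estimated; the flute count below is an assumption (2-flute end mills are typical for this class of machine, but it is not stated in the write-up):

```python
# Rough feed-per-tooth (chip load) check for the settings above.
# Assumption: 2-flute end mills (flute count is not stated in the write-up).
spindle_rpm = 9000      # spindle speed [rev/min]
feedrate = 900          # cutting feedrate [mm/min]
flutes = 2              # assumed flute count

chip_load = feedrate / (spindle_rpm * flutes)   # feed per tooth per revolution
print(f"feed per tooth: {chip_load:.3f} mm")    # ~0.05 mm per tooth with these settings
```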
### Mold machining¶
For this assignment, I wanted to try out my home CNC, which is a very common 3018 desktop CNC. I improved it slightly by adding endstop switches, as mine was not equipped with any.
I start by gluing my machinable wax to the plate with a hot glue gun, and mill a flat surface.
After setting the flat surface to the new z=0, I launch the first mold carving with my 3mm bit.
The detail is already good, as I already included a finishing step. However, some areas could simply not be reached with the 3mm bit, so I switch to my 1mm flat end mill to get the final finish.
For the other half, a lot more material must be carved to make sure the registration pins come out properly.
A quick test fit confirms that the milling was successful, and the vents are visible on the side.
I remove the wax from the CNC by cutting away the glue.
The mold is now complete! Notice that small features are present, like the eyes of the snail. This is thanks to the finishing pass with the 1mm bit.
### Casting 1: thick silicone¶
For my first cast, I use a rather thick silicone, the Dragon skin 10:
The information is as follows:
• Pot life: 15 mins.
• Cure time: 75 mins.
• Mixing ratio: 1:1.
This is not a hazardous material, but using gloves is still very much required as it can get sticky. There is no toxic fume being produced, but it is still a good idea to work in a ventilated area.
I pour the two components in a small mixing cup and stir for 2 minutes.
I tried using the vents, but this silicone was simply too thick, so I had to open the mold in two and pour it directly. The main result was a proper mess once I closed the mold, as the extra material came out from all sides.
I was anxious that the mold would not open again, due to being so sticky now. However, the silicone became less sticky once cured, and opening the mold was not difficult using a small blade from the side. As I took no precautions regarding bubbles, the result was full of trapped air. Moreover, there is a skin on the separation line, due to the excess material when closing the mold.
One larger air bubble is visible on the right. Overall, this is not a great success in terms of details, but at least it showed that the alignment of the mold was good.
### Casting 2: polyurethane¶
Next, I tried some polyurethane compound. This is a more hazardous substance, so protective equipment is mandatory.
I use PX60 from xencast:
The information is as follows:
• Pot life: 10 mins.
• Curing time: 1-2 hours for demolding, another 7 days for full cure.
• mixing ratio: 1:1.
I also added some red pigment, but this is barely visible due to the dark tone of the polyurethane.
Right after mixing the two parts, a vacuum chamber is used to eliminate trapped air.
After injecting the material into the mold, I decided to put it back in the vacuum chamber.
The pressure is about 0.1 bar. However, the vacuum pump was too slow, and I forgot to take the pot life into account. When the full 10 minutes were reached, there was still a vacuum in the chamber, meaning the gas bubbles were fully inflated.
As the material was now curing, it was too late for these bubbles to retract again and I was stuck with rather large bubbles in the finished product. However, the vacuum helped capture some more details, and we can now see the eyes of the snail.
### Casting 3: thin silicone¶
For my last try, I wanted to use silicone again, but with two changes:
• I want an opaque, colored silicone.
• It should be very liquid when pouring, but hard once cured.
I found the perfect material for this in my lab, the elite double 32. This is normally used to produce molds, but in this case the finished product will be silicone. It’s also food-safe, which is a good thing.
The technical specs are:
• pot life: 10 mins.
• curing time: 20 mins.
• mixing ratio: 1:1.
After mixing the two parts, I put it in the vacuum chamber to eliminate the bubbles. I then use a syringe to inject it into my mold. I have to be quick, as a 10-minute pot life is shorter than it sounds.
The injected mold is a bit messy now, but I wanted to make sure enough material was present.
When I opened the mold, I was very satisfied with the level of detail. However, there is still one last trapped air bubble in the tail. I think this happened during injection, and the vent was too thin for the bubble to easily escape the mold.
I use a dremel to apply some surgical operations to the tail.
With part of its tail removed, the snail has a clean finish. Notice the mold separation line is almost invisible.
When looking very carefully, you can see the 0.1mm scan lines of the CNC machining.
Time to make some tea to try out the idea! The snail weighs about 10 grams but is surprisingly stable thanks to the silicone being sticky and bendy.
## 3-step process¶
The fabacademy assignment requires students to make a 2-step casting, in which the machined wax part is a positive, then turned into a negative silicone mold. I went back and designed a positive mold to illustrate this method. This was more fun than I expected, because the silicone mold can be duplicated and is more durable than the wax.
### Design¶
One of my favorite artists has always been M.C. Escher. His optical illusions are fascinating at first sight, and he fully mastered the tessellation method. This is a type of shape that can be tiled indefinitely, covering a plane of arbitrary size. I found the following page presenting a 3D printed version of Escher's lizards. Here is the original drawing:
I downloaded the .stl and started drawing my lizard in Blender following the pattern. To check that it is indeed a tileable shape, I place several lizards in contact at a 120° angle. They all share the same mesh, so any change on one lizard is copied to the others. This way, I made sure to leave around 0.1mm of space between them; otherwise the fit might be too tight.
The lizards are flat for now, and some corners are too tight for even a 0.8mm mill. I decide to continue this work in Fusion 360.
I produced the following .svg using the Freestyle SVG Exporter module:
The .svg is imported into Fusion 360 and extruded. At this stage, I add a bevel on tight corners to help the mill reach those spots. It's important to bevel the convex and concave corners by about the same value; this ensures the shape can still fit afterwards. I also beveled the top face to give it a smooth look.
I add a stock and some registration pins in case I change my mind later and decide to upgrade it to a 2-part mold.
The milling operations start with a pocket using a 3mm flat end mill with 0.1mm of radial stock to leave:
Then, a pocket with 1/32” = 0.8mm mill with the rest machining option enabled. This ensures any material already removed by the previous operations is ignored, making this operation a lot faster.
Finally, two finishing parallel operations with a 0.1mm stepover and the 0.8mm mill. The two operations are identical except for their orientation, with a 90° difference to reveal more tiny details.
### Mold machining¶
I place the machinable wax on my 3018 desktop CNC, with the 3mm mill for the first step. The result is a flat lizard with the rough shape visible.
The 0.8mm tool had just the right length to reach every part of the mold. Thanks to Fusion 360's collision prediction, the conical section of the mill was taken into account.
After the finishing steps, the beveled edges are more visible.
The wax is ready to be used as a positive for the silicone mold. If you look closely, you can see that the top and bottom areas were not reached by the finishing step, to save some time.
### Mold casting¶
The first step is to build a small box around the wax to keep the silicone from flowing over. I cut and strap some cardboard on the sides. Notice I also extended the border next to the lizard’s legs, as I was worried it might be too thin for silicone.
I use the same silicone as mentioned earlier, the elite double 32 with a mixing ratio of 1:1 and a curing time of 20 minutes.
When pouring, I start from the corners and only pour on the top when the lizard is fully submerged. This prevents bubbles from forming on the sides of the lizard.
The cardboard is removed to reveal the mold. Note that a very thin line of silicone managed to infiltrate the cardboard, showing how fluid this silicone is. This extra material is only on the sides and is easily cut with scissors.
The final mold is looking good, there are no obvious bubbles. I later notice some microscopic bubbles on the corners, which I think only a vacuum chamber could have helped with.
### Final casting¶
For the final cast, I use CLV super sap epoxy from Entropy Resins. The basic information is as follows (a small mix-by-weight helper follows the list):
• Pot life: 30 mins.
• Cure time: around 12 hours, up to 72 hours.
• Mixing ratio: 100:40 (resin to hardener ratio).
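With an uneven 100:40 ratio, the two masses can be computed from the desired total; the 70 g target below is just an example value, not the amount used in this cast:

```python
# Split a desired total mass into resin and hardener for a by-weight ratio like 100:40.
def mix_by_weight(total_g, resin_parts=100.0, hardener_parts=40.0):
    parts = resin_parts + hardener_parts
    return total_g * resin_parts / parts, total_g * hardener_parts / parts

resin_g, hardener_g = mix_by_weight(70.0)       # e.g. mixing 70 g of epoxy in total
print(f"resin: {resin_g:.1f} g, hardener: {hardener_g:.1f} g")   # resin: 50.0 g, hardener: 20.0 g
```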
When pouring, it is not always easy to know when to stop, especially with a clear product. I use the reflections on the surface until I don't spot any concave or convex meniscus.
There are some bubbles in the final lizard, but overall the result is satisfying.
I cast a second lizard to prove that the fit works. I wish I could cast around 10 to really start tiling them, but the process is really time-consuming with resin.
Finally, I cast another silicone mold to try casting edible materials (I don’t want to eat any resin leftover!). When casting chocolate, it is important to use dark chocolate and heat it to around 40°C; this gives a smooth and shiny look after cooling down. In this example, I used cheap 50% cocoa chocolate, so the result is not as shiny.
|
setting real parameters in Hamiltonian matrix
The following Matrix represents my Hamiltonian.
m = {{ B/2 - d - j/2 + a, (Sqrt[2]*j)/2, M}, {(Sqrt[2]*j)/2, B/2 + a, 0}, {M, 0, (-3*B)/2 - d + j/2 + a}}
How can I define the variables used in the matrix so that Mathematica assumes they are real? With this matrix, Mathematica gives complex eigenvalues, but my eigenvalues are supposed to be real.
-
The matrix you give is a symbolic one, and Eigenvalues returns symbolic Root objects. How did you arrive to complex eigenvalues? The matrix is symmetric, so they are always supposed to be real. Finding the eigenvalues of this matrix symbolically or with exact numbers amounts to solving a cubic equation. The solutions of some cubic equations can't be written in an explicit form (in terms of radicals) without using the imaginary unit, but this does not mean that they are not real. It is possible that this is what happened when you substituted in numbers. – Szabolcs Feb 11 '14 at 15:58
The matrix
Clear[a,B,d,j,M];
m = {{B/2 - d - j/2 + a, (Sqrt[2]*j)/2, M}, {(Sqrt[2]*j)/2, B/2 + a,
0}, {M, 0, (-3*B)/2 - d + j/2 + a}};
has eigenvalues determined by the characteristic polynomial of (maximal) degree 3. In the absence of any other information, we get three Root objects as the eigenvalues, which is Mathematica's way of preserving all the information necessary to provide an arbitrarily precise answer to the question what the roots of the characteristic polynomial of m are.
By itself, m is not guaranteed to have real eigenvalues unless you specify some additional properties of the variables. In particular, if you choose them to be real then m is real-symmetric and therefore has real eigenvalues. This doesn't affect the form of the Root solutions, but it may be important for other tests that may be performed in your calculations.
For example, if you ask whether m is hermitian,
HermitianMatrixQ[m]
(* ==> False *)
which means the hermiticity is not recognized because it is not explicit in the form of the components of m. To maintain the symbolic form of the matrix and still have this last test yield True, you can do this:
m1 = m /. Map[(# -> Re[#]) &, Variables[m]]
(*
==> {{Re[a] + Re[B]/2 - Re[d] - Re[j]/2, Re[j]/Sqrt[2],
Re[M]}, {Re[j]/Sqrt[2], Re[a] + Re[B]/2, 0}, {Re[M], 0,
Re[a] - (3 Re[B])/2 - Re[d] + Re[j]/2}}
*)
HermitianMatrixQ[m1]
(* ==> True *)
The test HermitianMatrixQ, and (to my knowledge) all test functions ending in Q, inspect the explicit form of the argument, and by giving all variables the Head Re I make sure this is recognized.
This also allows you to get the expected answer from
Simplify[ConjugateTranspose[m1] == m1]
(* ==> True *)
whereas this is not true for m.
There are in fact algorithms that can find the roots of a cubic polynomial without using complex numbers, assuming that three real roots are known to exist. This is based on trigonometric identities. This is where it may be useful to have specified symbolically that the matrix has only real entries, as I did above.
-
I think you may be misinterpreting the answer you see. Let's define a function:
e[{a_, d_, B_, M_, j_}] :=
Eigenvalues[{{a + B/2 - d - j/2, j/Sqrt[2], M}, {j/Sqrt[2], a + B/2,
0}, {M, 0, a - (3 B)/2 - d + j/2}}]
This will give the eigenvalues for any set of parameters a, d, B, M, j. For instance,
e[{1, 2, 3, 4, 5}]
{1/4 (-15 - Sqrt[185]), 5, 1/4 (-15 + Sqrt[185])}
For arbitrary values:
e[RandomReal[{-1, 1}, 5]] // N
{-1.19836, -0.203181, 0.13574}
-
|
# Why is vertical take-off restricted to lighter weight aircraft?
Vertical take-off is a big advantage, but why is it limited to low weights?
For example, a fixed-wing aircraft like the An-225 can lift more than a hundred tonnes of payload, while the biggest helicopter lifts about 25 tonnes. Why is there no effort to build a helicopter that can lift 50-100 tonnes? What limits them, an economic or a technical problem? I guess it is a technical one, because even the heaviest helicopter is just an experiment, but what is it?
• To be fair, the biggest helicopter lifted over 44 tons. On the other hand, An-225 lifted over 250 t :) – Zeus Apr 19 '16 at 2:14
• If we want to stretch the definition of aircraft a bit, the first stage of the Saturn V lifted well over 3000 tons, vertically. Which if you think about it does tend to shed light on the relative efficiency of vertical takeoff. – jamesqf Apr 19 '16 at 18:02
Essentially it comes down to supersonic rotor tips
With a plane, in theory you can make it pretty much as big as you like - as long as you have strong/light enough materials, and can keep adding power, an aeroplane design scales pretty well. Bigger wing = more lift. As long as you can make the wing bigger without it breaking, and as long as you can add enough power to overcome the extra drag, there aren't many fixed limits
With a helicopter, we're limited by the rotor tips: once they go supersonic, they cause a lot of problems.
So how does a helicopter produce lift? By using rotors to push air down within a kind of circle. To add more lift we can do (essentially) three things.
1. Make the rotor spin faster, so it pushes more air down in the circle it already uses. Obviously this makes the tips spin faster, so we can only do it to a certain extent. We've already hit this limit.
2. Make the rotor blades longer, so they push a bigger circle of air. Again, though, due to the nature of a circular blade, the outside of a blade is moving faster than the inside. For a certain rotor speed, there's a fixed limit to how large the blades can be. Again, we've already hit this limit
3. Add more blades, so there are more blades producing lift. This works to an extent (hence why smaller helicopters may have two rotor blades, but larger ones have 4, 5 or more). Again, though, this doesn't scale indefinitely - each blade interferes with the next; you can't just keep adding more.
There are other slight modifications we can make, such as the airfoil of the rotor, but they don't add significant gains
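To put rough numbers on the tip-speed limit from points 1 and 2, here's a small back-of-the-envelope calculation; the rotor radius, rotor speed, cruise speed and speed of sound below are example values, not figures for any particular helicopter:

```python
import math

# Illustrative tip-speed check: how close are the blade tips to Mach 1?
speed_of_sound = 340.0   # m/s, near sea level
forward_speed = 70.0     # m/s, example cruise speed
rotor_rpm = 260.0        # example main rotor speed
radius = 8.0             # example rotor radius [m]

omega = rotor_rpm * 2.0 * math.pi / 60.0      # rotor angular speed [rad/s]
tip_speed = omega * radius                    # hover tip speed, v = r * omega
advancing_tip = tip_speed + forward_speed     # the advancing blade adds the flight speed

print(f"tip Mach in hover:  {tip_speed / speed_of_sound:.2f}")     # ~0.64
print(f"advancing-tip Mach: {advancing_tip / speed_of_sound:.2f}") # ~0.85
# Making the rotor longer or spinning it faster pushes the advancing tip toward Mach 1,
# which is the scaling limit described above.
```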
So, basically, we've hit the limit of what we can lift with a single rotor. The only real way to add more lift now is to add more rotors: doing that would be far less efficient than simply using an aeroplane.
Which brings me to the final point - helicopters are very inefficient and pretty slow... We simply don't need, except in a few niche circumstances, to carry more weight with them.
• +1 for the final point. To perhaps be a little more clear, helicopters require thrust/weight ratios > 1. Airplanes do not. Even if we could put enough rotors and enough blades on it, the engines would have to produce a lot more power than those of an airplane carrying the same payload. Beyond a point, we'd have to start installing vertically-mounted GE90s or some such thing on them. It should also be noted that this limitation applies to all vertical takeoff (except lighter-than-air craft,) not just helicopters. – reirab Feb 15 '15 at 19:12
• Aircraft require thrust to weight considerations, it's just not exactly proportionate in the same way – Jon Story Feb 15 '15 at 19:54
• Yes, but they're usually closer to 0.25, not > 1.0. Much less engine power (and, therefore, much less fuel) for the same lift. – reirab Feb 15 '15 at 20:00
• True, fuel burn becomes an issue too - I may add that into the answer – Jon Story Feb 15 '15 at 20:10
• From a physical point of view, isn't it the case that power and force can be traded? E.g. a longer lever will give a higher force for the same power? – user2174870 Feb 18 '15 at 16:21
Jon's answer is correct: "helicopters are very inefficient and pretty slow..."
but he misses one important point: helicopters are also incredibly fragile and delicate. Even a regular single-rotor helicopter flying is a miracle. It's been called: "10,000 spare parts flying in close formation."
You could add more rotors to gain more lift, the Chinook has two rotors which is part of why it can carry so much. The rear rotor adds complexity but also removes the need for a tail rotor, which is why it isn't quite 20,000 spare parts flying in close formation. But even Chinooks are fragile compared to C-130s.
Adding engines adds redundancy in a plane. And remember that if you are in a plane and you lose an engine (or all engines) you can still fly or glide to a safe landing.
If you are in a rotorcraft and you lose your engine, your main or tail rotor, or really any single one of those 10,000 parts, then all you can do is pray and try an auto-rotation landing. It gets worse the more rotors you add, not better. This is one of the limitations of rotorcraft and why, for the most part, you only see multiple rotors (4-, 6-, 8-) used in unmanned drones, except for some very experimental vehicles.
• A big advantage for the Chinook is that you can have a load center anywhere along the midline rather than only exactly below the rotor center. Very useful for a military transport with underslung cargo – NobodySpecial Mar 27 '15 at 3:25
• "You could add more rotors to gain more lift" and yet the heaviest-lift chopper to go into production was a single rotor design. – Peter Green May 21 '18 at 0:11
For a helicopter to take off vertically: $$Lift \gt Weight.$$
The weight grows with length to the third power ($l^3$, weight is proportional to volume) while the lift only grows with $l^2$, because it's proportional to the rotor blades planform area.
More lift means a higher lift coefficient and more rotor-blade planform area. The maximum circumferential speed at the blade tips is limited (there's the constraint that the tips can't move at supersonic speeds). One would need stiffer, longer and trapezoidal blades, and there is a structural limit to the possible torque at the blade root.
Aircraft that cruise at high subsonic speeds experience the same airspeed at any portion along the wings, unlike a helicopter that has a $v=r \omega$ relationship for the speed along the rotor blade. $v$ is constrained to subsonic speed.
• Without more detail, this would be better as a comment. – fooot Feb 15 '15 at 19:15
• Well, for anything to take off at all, lift must be greater than weight. Airplanes must deal with the same square-cube law in order to generate enough lift, resulting in larger aircraft having disproportionately larger wings. The main difference is that airplanes don't have to use thrust to directly generate their lift, allowing for much lower thrust/weight ratios and, thus, much better fuel economy for the same load. – reirab Feb 16 '15 at 4:27
• You can always throw up a stone, no lift involved. Subsonic aircraft experience the same high subsonic speed on any portion of their wing, unlike a helicopter for which the high subsonic speed is present at the rotor blade tips. Going inward from the tips, the speed decreases. – user7241 Feb 16 '15 at 17:04
The power required to maintain altitude in forward flight is lower than the power required to hover. As horizontal velocity ($V_x$) increases, the induced velocity ($V_i$) at the disc decreases, and induced power ($P_i$) falls off roughly as $1/V_x$. However, as $V_x$ increases, parasitic power ($P_p$) increases proportional to $V^3$. Profile power ($P_0$ - the power needed to maintain rotor speed) remains roughly constant. If you chart all of these together with Required Power on the y-axis and forward velocity on the x-axis, you end up with something that looks like this:
What this means is that there's an optimum forward speed at which your power is minimized. This is important, because the max rate of climb (and lifting power) is determined by your available power $$P_{av} = P_{tot} - P_i - P_p - P_0$$. If you minimize the power needed to keep the aircraft flying, you have more net power for lifting other things (and for heavy helicopters, they may not be able to take off vertically at all). For this reason, the max rate of climb will always be at some forward speed, and by definition, the max lifting capacity will also be at some forward velocity. A practical demonstration of this principle is that large cargo lifters like the sky crane and the chinook always take off with some forward velocity component. This is why.
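To illustrate the shape of that chart, here is a toy model with made-up coefficients chosen only to reproduce the qualitative behaviour described above (induced power falling roughly as 1/V, parasitic power growing as V³, profile power constant):

```python
import numpy as np

# Toy power-required curve: P_req(V) = P_induced + P_parasitic + P_profile.
# All coefficients are illustrative, not data for a real helicopter.
V = np.linspace(5.0, 80.0, 400)    # forward speed [m/s]
P_induced = 4000.0 / V             # falls off roughly as 1/V
P_parasitic = 0.002 * V**3         # grows as V^3
P_profile = 150.0                  # roughly constant

P_required = P_induced + P_parasitic + P_profile
print(f"minimum-power speed in this toy model: {V[np.argmin(P_required)]:.0f} m/s")
# The best rate of climb (and the biggest lifting margin) sits near this speed, not in hover.
```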
• good answer, do you have a source for the image? – ROIMaison Apr 19 '16 at 15:09
In addition to the power-to-weight ratio - already aptly described above - I bluntly strike at the economic reasons and cite that bean-counters look at the cost-per-takeoff and shake their head.
• Yes. Even assuming that you could solve all the technical problems of a really heavy-lift helicopter, where's the economic demand for them? What can you do enough of cheaper with a big helicopter to pay off the development & operating costs? Even where there is demand for heavy lift, it seems that it can be met better with airships: bloomberg.com/news/articles/2013-06-14/… – jamesqf Mar 25 '15 at 19:05
• This seems to echo the current top-voted answer's final statement: "We simply don't need, except in a few niche circumstances, to carry more weight with [helicopters]." We tend to prefer answers that add something substantial that has not already been written in another answer to the question; can you edit your answer so that it does so? – a CVn Mar 25 '15 at 19:08
|
# Sunday Times Teaser 2756 – Terrible Teens
### by Victor Bryant
#### Published: 19 July 2015 (link)
I have allocated a numerical value (possibly negative) to each letter of the alphabet, where some different letters may have the same value. I can now work out the value of any word by adding up the values of its individual letters. In this way NONE has value 0, ONE has value 1, TWO has value 2, and so on up to ELEVEN having value 11. Unfortunately, looking at the words for the numbers TWELVE to NINETEEN, I find that only two have values equal to the number itself.
Which two?
Here is a solution using SymPy:
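A sketch of the kind of SymPy check referred to above (the letter names and solving strategy below are one possible choice):

```python
from sympy import symbols, solve, expand

letters = {c: symbols(c) for c in "NOETWHRFUIVSXGL"}   # the letters that actually occur

def value(word):
    return sum(letters[c] for c in word)

base = {"NONE": 0, "ONE": 1, "TWO": 2, "THREE": 3, "FOUR": 4, "FIVE": 5,
        "SIX": 6, "SEVEN": 7, "EIGHT": 8, "NINE": 9, "TEN": 10, "ELEVEN": 11}
sol = solve([value(w) - n for w, n in base.items()], list(letters.values()), dict=True)[0]

teens = {"TWELVE": 12, "THIRTEEN": 13, "FOURTEEN": 14, "FIFTEEN": 15,
         "SIXTEEN": 16, "SEVENTEEN": 17, "EIGHTEEN": 18, "NINETEEN": 19}
for word, n in teens.items():
    print(word, expand(value(word).subs(sol) - n))   # residual = word value minus target
# TWELVE's residual is identically zero; FOURTEEN, SIXTEEN, SEVENTEEN and NINETEEN share a
# single residual; THIRTEEN's and EIGHTEEN's residuals are negatives of each other.
# Requiring exactly two residuals to vanish then leaves TWELVE and FIFTEEN.
```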
Here is a manual solution (this analysis has been updated to take account of Tony’s comment below).
First we have: \begin{align}FOURTEEN&=4+TEEN \\ SIXTEEN&=6+TEEN \\ SEVENTEEN&=7+TEEN \\ NINETEEN&=9+TEEN\end{align}
So, whatever value $$\small TEEN$$ has, these are all correct or all incorrect. And, since only two can be correct, this means that they are all incorrect.
Second, we can show that $$\small TWELVE$$ is correct as follows:
\begin{align}TWELVE&=TW+LV+2E \\ &=(TWO-O)+(ELEVEN-N-3E)+2E \\ &=TWO+ELEVEN-ONE \\ &=12\end{align}
Third, we have \begin{align}THIRTEEN&=THREE+TEN+I-E \\ &=THREE+TEN+NINE-2(N+E) \\ &=22-2E-2(NONE-ONE) \\ &=24-2E \\ EIGHTEEN&=EIGHT+N+2E \\ &=EIGHT+NONE-ONE+2E \\ &=7+2E\end{align}
To make $$\small THIRTEEN$$ correct, $$\small E$$ has to be 5.5 but this value also makes $$\small EIGHTEEN$$ correct. However, if both are correct we would have three correct values when there are only two. So they must both be incorrect.
Hence only $$\small TWELVE$$ and $$\small FIFTEEN$$ can be correct.
Brian
(1) The above does not show that FIFTEEN is actually correct (although your algebra on the other site does).
(2) On the other site John Crabtree correctly points out that letter values are not restricted to integers.
Allowing non-integers still gives the solution TWELVE, FIFTEEN but for a reason different from the above.
If E=5.5 THIRTEEN and EIGHTEEN would both be correct (not allowed)
Also if E=0 FOURTEEN, SIXTEEN, SEVENTEEN and NINETEEN would also be correct (not allowed).
Any value of E other than 5.5 or 0 and any value of G (real or complex in each case) would lead to a valid set of letter values.
Regards
Tony
Yes I assumed integer values in my proof above, which, as you say, is not necessary to obtain a solution. I’ll update the analysis as you suggest. But, given the integer values assumption, $$\small FIFTEEN$$ is shown to be correct indirectly by showing that other words in $$\small THIRTEEN$$ to $$\small NINETEEN$$ are incorrect (unless, of course, the teaser is wrong).
|
# Riesz space
In mathematics, a Riesz space, lattice-ordered vector space or vector lattice is a partially ordered vector space where the order structure is a lattice.
Riesz spaces are named after Frigyes Riesz who first defined them in his 1928 paper Sur la décomposition des opérations fonctionelles linéaires.
Riesz spaces have wide ranging applications. They are important in measure theory, in that important results are special cases of results for Riesz Spaces. E.g. the Radon–Nikodym theorem follows as a special case of the Freudenthal spectral theorem. Riesz spaces have also seen application in Mathematical economics through the work of Greek-American economist and mathematician Charalambos D. Aliprantis.
## Definition
A Riesz space E is defined to be a vector space endowed with a partial order "$\leq$" that, for any $x,y,z\in E$, satisfies:
1. $x\leq y$ implies $x + z\leq y + z$ (translation invariance).
2. For any scalar $\alpha \geq 0$, $x\leq y$ implies $\alpha x \leq \alpha y$ (positive homogeneity).
3. For any pair of vectors $x,y\in E$ there exists a supremum (denoted $x\vee y$) in E with respect to the partial order "$\leq$" (lattice structure)
## Basic properties
Every Riesz space is a partially ordered vector space, but not every partially ordered vector space is a Riesz space.
Every element f in a Riesz space, E, has unique positive and negative parts, written $f^+:=f\vee 0$ and $f^-:=(-f)\vee 0$. Then it can be shown that $f=f^+-f^-$ and an absolute value can be defined by $|f|:=f^++f^-$. Every Riesz space is a distributive lattice and has the Riesz decomposition property.
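For example, in $\mathbb{R}^2$ with the componentwise order, the element $f=(2,-3)$ has $f^+=f\vee 0=(2,0)$ and $f^-=(-f)\vee 0=(0,3)$, so that $f^+-f^-=(2,-3)=f$ and $|f|=f^++f^-=(2,3)$.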
## Order convergence
There are a number of meaningful non-equivalent ways to define convergence of sequences or nets with respect to the order structure of a Riesz space. A sequence $\{x_n\}$ in a Riesz space E is said to converge monotonely if it is a monotone decreasing (increasing) sequence and its infimum (supremum) x exists in E and denoted $x_n\downarrow x$ ($x_n\uparrow x$).
A sequence $\{x_n\}$ in a Riesz space E is said to converge in order to x if there exists a monotone converging sequence $\{p_n\}$ in E such that $|x_n-x|\leq p_n\downarrow 0$.
If u is a positive element of a Riesz space E, then a sequence $\{x_n\}$ in E is said to converge u-uniformly to x if for any $\varepsilon >0$ there exists an N such that $|x_n-x|<\varepsilon u$ for all n>N.
## Subspaces
Being vector spaces, it is also interesting to consider subspaces of Riesz spaces. The extra structure provided by these spaces provides for distinct kinds of Riesz subspaces. The collection of each kind of structure in a Riesz space (e.g. the collection of all ideals) forms a distributive lattice.
### Ideals
A vector subspace I of a Riesz space E is called an ideal if it is solid, meaning that for any element f in I and any g in E, |g| ≤ |f| implies that g is actually in I. The intersection of an arbitrary collection of ideals is again an ideal, which allows for the definition of a smallest ideal containing some non-empty subset A of E, called the ideal generated by A. An ideal generated by a singleton is called a principal ideal.
### Bands and $\sigma$-Ideals
A band B in a Riesz space E is defined to be an ideal with the extra property, that for any element f in E for which its absolute value |f| is the supremum of an arbitrary subset of positive elements in B, that f is actually in B. $\sigma$-Ideals are defined similarly, with the words 'arbitrary subset' replaced with 'countable subset'. Clearly every band is a $\sigma$-ideal, but the converse is not true in general.
As with ideals, for every non-empty subset A of E, there exists a smallest band containing that subset, called the band generated by A. A band generated by a singleton is called a principal band.
### Disjoint complements
Two elements f,g in a Riesz space E, are said to be disjoint, written $f\bot g$, when $|f|\wedge |g|=0$. For any subset A of E, its disjoint complement $A^d$ is defined as the set of all elements in E, that are disjoint to all elements in A. Disjoint complements are always bands, but the converse is not true in general.
### Projection bands
A band B in a Riesz space, is called a projection band, if $E=B\oplus B^d$, meaning every element f in E, can be written uniquely as a sum of two elements, $f=u+v$, with $u\in B$ and $v\in B^d$. There then also exists a positive linear idempotent, or projection, $P_B:E\to E$, such that $P_B(f)=u$.
The collection of all projection bands in a Riesz space forms a Boolean algebra. Some spaces do not have non-trivial projection bands (e.g. $C([0,1])$), so this Boolean algebra may be trivial.
## Projection properties
There are numerous projection properties that Riesz spaces may have. A Riesz space is said to have the (principal) projection property if every (principal) band is a projection band.
The so-called main inclusion theorem relates these properties. Super Dedekind completeness implies Dedekind completeness; Dedekind completeness implies both Dedekind $\sigma$-completeness and the projection property; Both Dedekind $\sigma$-completeness and the projection property separately imply the principal projection property; and the principal projection property implies the Archimedean property.
None of the reverse implications hold, but Dedekind $\sigma$-completeness and the projection property together imply Dedekind completeness.
## Examples
• The space of continuous real valued functions with compact support on a topological space X, with the pointwise partial order defined by f ≤ g when f(x) ≤ g(x) for all x in X, is a Riesz space. It is Archimedean, but usually does not have the principal projection property unless X satisfies further conditions (e.g. being extremally disconnected).
• The space $\mathbb{R}^2$ with the lexicographical order is a non-Archimedean Riesz space.
|
# Perturbed circular billiard, chaos
1. Sep 12, 2007
### pomaranca
1. The problem statement, all variables and given/known data
The center of a circular billiard oscillates harmonically in the horizontal direction with amplitude a and frequency omega.
Describe the motion of elastic particle with mass m in this billiard. Use the proper phase space and Poincare map.
Under what conditions on the dimensionless parameter eta = r/R is the energy of the system bounded in time, when is it unbounded, and when is the motion chaotic?
Does anyone have experience with this kind of problem?
2. Sep 14, 2007
### EnumaElish
I'd suggest writing out relevant equations and attempted solution.
3. Sep 19, 2007
### pomaranca
Dynamical system
This billiard is a dynamical system for which I should construct attractors and numerically find their fractal dimensions. (http://en.wikipedia.org/wiki/Dynamical_billiards)
Let's say that only the boundary is oscillating. Let the mass of the particle be 1.
Phase space
What are the coordinates in the phase space?
Would the coordinate pair $$(s,v)$$ suffice to describe the motion of our particle? Here $$s$$ is the perimeter length, the position of the collision on the circle, and $$v$$ the speed of the particle.
Collision on the oscillating boundary
If the boundary were fixed, the particle would reflect specularly (the angle of incidence equals the angle of reflection), with no change in the tangential component of speed and with instantaneous reversal of the speed component normal to the boundary.
But because our boundary is oscillating in the horizontal direction, the angle of reflection is not the same as the angle of incidence, because the particle gets an additional component of speed in the horizontal direction from the wall of the billiard.
The collision is elastic, therefore the energy and momentum are conserved (http://en.wikipedia.org/wiki/Elastic_collision)
$$v_2' = \frac{v_2(m_2-m_1)+2m_1v_1}{m_1+m_2}$$:
where $$v_2'$$ is the speed of particle after the collision and $$v_2$$ its speed before the collision, $$v_1$$ is the speed of the billiard boundary before the collision.
The potential of billiard is $$V(q)=\begin{cases} 0 \qquad q \in Q \\ \infty \qquad q \notin Q \end{cases}$$
where $$Q$$ the region inside the circle. The particle can't affect the movement of the boundary: $$m_1\gg m_2$$ and thus from the above equation $$v_2'=2v_1-v_2$$.
We describe the oscillation of the boundary by $$x(t)=x_0\sin(\omega t)$$ for every point of boundary. The absolute value of speed in the moment after the collision is then $$v_2'=2x_0\omega\cos(\omega t)-v_2=f(t)-v_2$$, where $$t$$ is the moment of collision.
Poincaré return map
To find attractors in the phase space I should first construct the Poincaré map. For that I should know what the phase space will be and then discretize the continuous-time dynamics to discrete-time dynamics. For the case of billiards that is of course the dynamics between subsequent collisions: $$(s_n,v_n)\to (s_{n+1},v_{n+1})$$.
I realize this is quite a geometric problem. But I don't know if this is at all the correct way of finding the Poincaré return map.
Any suggestions?
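Not a full answer, but here is a minimal numerical sketch of the collision-to-collision map described above, under the common simplifying assumption that the boundary is treated as geometrically fixed at radius R while momentum is still exchanged with the moving wall (so the normal component obeys $$v_2'=2v_1-v_2$$ as in the post); all parameter values are placeholders:

```python
import numpy as np

# Sketch of the bounce map (s_n, v_n) -> (s_{n+1}, v_{n+1}) for a circular billiard whose
# wall oscillates horizontally, x(t) = a*sin(omega*t). Simplification: the boundary is
# kept geometrically fixed at radius R; only the momentum exchange uses the wall velocity.
R, a, omega = 1.0, 0.01, 1.0                  # placeholder parameters

def bounce(p, v, t):
    tau = -2.0 * np.dot(p, v) / np.dot(v, v)  # flight time to the next wall hit (|p| = R)
    t += tau
    p = p + v * tau
    p *= R / np.linalg.norm(p)                # re-project onto the circle (rounding)
    n = p / R                                 # outward normal at the impact point
    u = np.array([a * omega * np.cos(omega * t), 0.0])   # wall velocity at impact time
    v = v - 2.0 * np.dot(v - u, n) * n        # reverse the normal component in the wall frame
    if np.dot(v, n) > 0:                      # rare grazing case where the kick wins; keep it physical
        v = v - 2.0 * np.dot(v, n) * n
    return p, v, t

p, v, t = np.array([R, 0.0]), np.array([-0.7, 0.3]), 0.0   # start on the wall, moving inward
section = []
for _ in range(2000):
    p, v, t = bounce(p, v, t)
    s = R * (np.arctan2(p[1], p[0]) % (2.0 * np.pi))       # arc-length position of the collision
    section.append((s, np.linalg.norm(v)))                 # points of the (s, |v|) Poincare section
```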
Last edited: Sep 19, 2007
|
1. ## arithmetic series
q:
Question: A set of steps for the end of a pier are built of stone. A sketch of the cross-section of the steps is shown.
Each step has rise of 0.2m and tread of 0.6m. Form a series to calculate the area of the cross-section.
I tried to answer the question but I got it wrong
I tried to find the area of A
using the sum of an arithmetic series - I found the length of the stairs
$S_{12}= \frac{12}{2}\left [ 2(0.6)+(12-1)0.2 \right ] = 19.8$
area of a = $\frac{1}{2}(19.8)(12) = 118.8$
area b = 8 x 12 = 96
therefore total area = 214.8 square metres
the answer in the book is 118.8 square metres
2. ## Re: arithmetic series
12/0.6 = 20 rectangular strips of width 0.6 m and variable height 8+0.2n meters
$S_{20}=(8 \cdot 0.6) + [(8+0.2) \cdot 0.6] + [(8+0.4) \cdot 0.6] + ... + [(8+3.8) \cdot 0.6]$
$S_{20} = 0.6(8+8.2+8.4+ ... +11.8) = 0.6\bigg[\dfrac{20}{2}(16+19 \cdot 0.2) \bigg] = 118.8 \, m^2$
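As a quick numerical check of that sum:

```python
# 20 strips of width 0.6 m with heights 8 + 0.2*n metres, n = 0..19
area = sum(0.6 * (8 + 0.2 * n) for n in range(20))
print(round(area, 2))   # 118.8, matching the book's answer
```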
3. ## Re: arithmetic series
In case this helps:
h = height = 8
w = width = 12
So area without steps = hw = 96
r = rise = .2
|
# Gravity Ratio
1. Homework Statement :
The center of a 1.00 km diameter spherical pocket of oil is 1.20 km beneath the Earth's surface. Estimate by what percentage g directly above the pocket of oil would differ from the expected value of g for a uniform Earth? Assume the density of oil is 8.0*10^2 (kg/m^3).
Delta(g)/g=
2. Homework Equations
g=GM/r2
D=m/v
3. The Attempt at a Solution :
Well, I calculated that the pocket of oil is 0.7 km beneath the surface. I used density = mass/volume to get M and plugged it into the gravity equation, then subtracted the result from 9.8 and divided by 9.8. The answer is negligible, since they want me to answer using 2 sig figs. So I am kind of lost; what do I need to do?
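For what it's worth, here is a sketch of the calculation I believe the problem intends: treat the pocket as a point "mass deficit" at its centre (1.20 km below the observation point, not 0.7 km), and compare the oil density with the mean density of a uniform Earth. The mean-density value and the point-mass model are assumptions about the intended setup:

```python
import math

# Delta g from the "missing mass" of the oil pocket, modelled as a point mass at its centre.
# Assumptions: uniform-Earth mean density ~5.5e3 kg/m^3, centre-to-surface distance 1200 m.
G = 6.674e-11        # gravitational constant [N m^2 / kg^2]
g = 9.80             # surface gravity [m/s^2]
rho_oil = 8.0e2      # oil density [kg/m^3], given
rho_earth = 5.5e3    # assumed mean density of a uniform Earth [kg/m^3]
r_pocket = 500.0     # pocket radius [m] (1.00 km diameter)
d = 1200.0           # distance from pocket centre to the surface point [m]

V = 4.0 / 3.0 * math.pi * r_pocket**3
delta_g = G * (rho_oil - rho_earth) * V / d**2
print(f"delta g / g = {delta_g / g:.2e}")   # about -1e-5, i.e. on the order of 1e-3 percent
```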
|
# The cost function for a certain company is C=60x+300 and the revenue
The cost function for a certain company is
$C=60x+300$
and the revenue is given by
$R=100x-0.5{x}^{2}$
Recall that profit is revenue minus cost. Set up a quadratic equation and find two values of x (production level) that will create a profit of $300.
Vaing1990
Step 1
$P\left(x\right)=R\left(x\right)-C\left(x\right)$
Where P = Profit Function, C = cost function, R = revenue function
Step 2
$R\left(x\right)-C\left(x\right)=300$
$⇒100x-0.5{x}^{2}-60x-300=300$
$⇒40x-0.5{x}^{2}-600=0$
$⇒40x-\frac{5}{10}{x}^{2}-600=0$
$⇒400x-5{x}^{2}-6000=0$
$⇒5{x}^{2}-400x+6000=0$
$⇒{x}^{2}-80x+1200=0$
$⇒{x}^{2}-60x-20x+1200=0$
$⇒x\left(x-60\right)-20\left(x-60\right)=0$
$⇒\left(x-60\right)\left(x-20\right)=0$
Either $x-60=0$
$⇒x=60$
or, $x-20=0$
$x=20$
$\therefore$ The two values of x will be 60 and 20, which will create a profit of $300
|
# Bell's theorem outside of the context of entanglement
I am an undergrad college student and have been watching videos on YouTube explaining Bell's theorem / inequality and how it shows that there cannot be local hidden-variable theories for entanglement.
However, the experiments most videos talk about are in the context of entanglement specifically, where the experiment involves taking a pair of entangled particles and recording the statistics of how often the results match when two people repeatedly measure the two particles along randomly chosen axes. The statistics of the data we get after performing the experiment rules out local hidden variables.
My question is how this rules out local hidden variable theories in quantum mechanics outside of the context of entanglement? Or does it actually only rule out local hidden variable theories in the context of entanglement?
For example, we could have a local hidden variable theory outside of entanglement like the following: A single electron (no entanglement) could have a definite position before it is measured (a local hidden variable), but we cannot know it until we have measured it, and that local hidden variable is why we can only talk about the probability of finding it at a location. And therefore, the randomness is not a fundamental property of the quantum nature of the electron but is due to a hidden variable.
It seems to me that the Bell's theorem experiment they talk about in the videos only tells us that quantum entanglement specifically cannot be explained by local hidden-variable theories, but not that other quantum phenomena, such as the fact that we can only know the probability of finding an electron in an atom at a certain location until we measure it, cannot be explained by local hidden-variable theories.
Are there other experiments that have been done that rule out local hidden variable theories in these other contexts? Or if not, how do you generalize the result of Bell's theorem/experiment in the "entanglement case" to prove there cannot be local hidden-variable theories in quantum mechanics in general?
Clarification: I am not for or against using any theory. I just don't see how local hidden variable theories in QM in general, are ruled out by the Bell experiments with entanglement specifically and am asking how/if local hidden variable theories are ruled out in general.
• To be clear, you are proposing that we use a local hidden variable theory (whose predictions match that of QM) in those situations where we can get away with it, and resort to quantum mechanics whenever it can be proved that no local hidden variable theory could match the (verifiably correct) predictions of QM, right? Why wouldn't we just use QM all the time? Aug 19, 2022 at 22:57
• I completely agree with @J.Murray . But for the record, yes, you are right that entangled states are precisely the states where there are no joint probability distributions for the outcomes of various observations, and so they are precisely the states where no hidden variable story can work. Aug 19, 2022 at 23:11
• Bell’s theorem says that QM can produce correlations that cannot be produced by a local hidden variable theory. It doesn’t say QM must produce such correlations in all cases.
– d_b
Aug 19, 2022 at 23:32
• you don't need two particles to violate a Bell inequality. You just need different degrees of freedom. You can violate a Bell inequality using an "entangled state" of, say, polarization and position degrees of freedom of a single photon/particle
– glS
Aug 22, 2022 at 14:54
• For example, how do you explain the single photon mach zehnder experiment if your image of a photon or electron or whatever is as a single distinct ball with specific (but unknown) properties at any given moment?
– TKoL
Aug 23, 2022 at 16:59
• Outside entanglement, this could be many-worlds: interpreting $A(a,\lambda)$ as "the result of measurement apparatus A in the measurement direction given by angle a, 'in the world indexed by $\lambda$'", maybe? Sep 25, 2022 at 13:14
|
# CHLOE
Most popular questions and responses by CHLOE
1. ## Math
A box contains 95 pink rubber bands and 90 brown rubber bands. You select a rubber band at random from the box. Find each probability. Write the probability as a fraction in simplest form. a. Find the theoretical probability of selecting a pink rubber
2. ## Math ~ Check Answers ~
1. There are 35 marbles in a bag: 9 blue marbles, 8 green marbles, 4 red marbles, 8 white marbles, and 6 yellow marbles. Find P (red). Write the probability as a fraction in simplest form, a decimal, and a percent. a.) 4/31, 0.129, 12.9% b.) 4/35, 0.114,
3. ## Chemistry
If a hydrate of the formula MCl * xH2O decomposes when heated to produce HCL, what change would you expect to occur when a piece of litmus paper is held in the path of the vapor released?
4. ## Math
***PS: I can't put the diagram so I type it.*** The diagram below shows the contents of a jar from which you select marbles at random. Inside the jar there is 16 marbles. Inside the jar there is 4 red marbles. 7 blue marbles. 5 green marbles. a. What is
Write the fraction as a percent. If necessary, round to the nearest tenth of a percent. 6/15 a. 40.0% *** b. 37.5% c. 33.3% d. 46.7% Write the fraction as a percent. If necessary, round to the nearest tenth of a percent. 6 4/5 a. 68% b. 6.8% c. 6.4% d.
6. ## SC state history
note: not all letters will be used. a. sectionalism b. missouri compromise c. andrew jackson d. john c. calhoun e. articles of confederation f. cabinet g. tariff h. treason i. daniel webster j. louisiana purchase k. nationalism l. thomas jefferson 1. the
7. ## Social Studies south asia
why was the migration of the aryans to india significant?
8. ## History
What did John Brown plan to do with the guns he seized at Harper's Ferry? a. he planned to sell them to make money to support antislavery candidates for Congress. b. he planned to give them to slaves to start slave rebellions. c. He planned to bring them
1. During the election of 1824, which candidate won the popular vote and which candidate won the presidency? a.) Andrew Jackson; John Quincy Adams ***** b.) Andrew Jackson; Andrew Jackson c.) John Quincy Adams; John Quincy Adams d.) John Quincy Adams;
10. ## Math
list three different ways to write 5^11 as the product of two powers. Explain why all three of your expressions are equal to 5^11
11. ## SS HELP MEEEE PLEASEEE
13. how did the embargo act of 1807 affect south carolina? a. it hurt the sale of the planters' rice and cotton b. it hurt the planters who needed british manufactured goods c. it hurt the aristocrats who could no longer send their sons to school in
12. ## Algebra
Shirts with the a school's mascot were printed, and the table shows the number of people who bought them the first, second, third, and fourth weeks after their release. Which graph could represent the data shown in the table? Weeks | Number of sales 1 |22
13. ## Physics
An object 0.600 cm tall is placed 16.5 cm to the left of the vertex of a concave mirror having a radius of curvature of 22.0 cm. (a) Draw a principle-ray diagram showing the formation of the image. (b) Calculate the position, size, orientation (erect or
14. ## Science
Identify the benefits that your space mission might provide for modern society. Explore the possibility of connecting your space mission with satellites that are already gathering information as they orbit Earth. I don't understand what does this mean?
A Surprising Point of View: A Television Play in Two Acts Characters: (in order of appearance) 1. JASON: a boy of about 14. He is a student in Ms. Smith’s English class. He regularly misbehaves in order to get attention, and he doesn’t apply himself to
16. ## social studies help pls
13. how did the embargo act of 1807 affect south carolina? a. it hurt the sale of the planters' rice and cotton b. it hurt the planters who needed british manufactured goods c. it hurt the aristocrats who could no longer send their sons to school in
17. ## Math, all correct?
Write the fraction as a percent. If necessary, round to the nearest tenth of a percent. 6/15 a. 40.0% *** b. 37.5% c. 33.3% d. 46.7% Write the fraction as a percent. If necessary, round to the nearest tenth of a percent. 6 4/5 a. 68% b. 6.8% c. 6.4% Write
18. ## Math ~ Check Answers ~
A rectangular prism has a width of 92 ft and a volume of 240 ft^3. Find the volume of a similar prism with a width of 23 ft. Round to the nearest tenth, if necessary. •3.8 ft^3 •60 ft^3
32. ## One more thing!
Day had broken cold and gray, exceedingly cold and gray, when the man turned aside from the main Yukon trail and climbed the high earth-bank, where a dim and little traveled trail led eastward through the fat spruce timberland. It was a steep bank, and he
33. ## Physics
A playground is on the flat roof of a city school, 7.00 m above the street below. The vertical wall of the building is 8.00 m high, forming a 1.00 m high railing around the playground. A ball has fallen to the street below, and a passerby returns it by
34. ## Social Studies
How did knowing how to cut and stack stones help early Andean civilizations grow?
35. ## Algebra
16. The equation models the height h in centimeters after t seconds of a weight attached to the end of a spring that has been stretched and then released. h= 7 cos (π/3 t) a. Solve the equation for t. (I'm pretty sure i've got this one solved) T= 3
36. ## Science
What form of pollution, if any, does each energy source cause? How does the economy influence energy source choices? How does location influence energy source choices?
37. ## math help asap
1.which of the following is a radius of the circle a.DE b.AB C.CE 2.which of the following is a diameter of the circle a.ED b.EC c.BA 3.which of the following is a chord, but not a diameter, of the circle a.CE b.FD c.DE 4.what is the correct way to name a
38. ## Math ~ Check Answers ~ASAP
3. In the following list of data, find the range: 21, 28, 31, 35, 39, 43, 51, 60. 35 37 39****** 40 Pat recorded the weights of the first 10 fish she caught and released at Mirror Lake this season. The weights were 8 lb, 6 lb, 9 lb, 6 lb, 7 lb, 5 lb, 7 lb,
39. ## English
What type of poem is this ? We found his flag one bitter morning And knew our hopes had come to woe. We had come pursuing glory But ended freezing in the snow. 1. The stanza of the poem offers an example of (1 point) slant rhyme. end rhyme. perfect rhyme.
40. ## Algebra 1
Paul has decided to raise rabbits. He has been warned that the number of rabbits will double every month. He started with 3 rabbits, and the function y=3(2^x) models the number of rabbits he will have after x months. How many rabbits will paul have after 8
41. ## Math
Six groups of students sell 162 balloons at the school carnival. There are three students in each group. If each student sells the same number of balloons, how many balloons does each student sell?
42. ## physics-sound
A police car moving at 41 m/s is initially behind a truck moving in the same direction at 17 m/s. The natural frequency of the car's siren is 800 Hz. The change in frequency observed by the truck driver as the car overtakes him is __(Hz) Take the speed of
43. ## honors algebra 1
3. Which ordered pair is a solution of the equation y=x-4? a. (2,6) b. (6,2) c. (-2,6) d. (3,1) 4. Which ordered pair is a solution of the equation y=5x? a. (-2,10) b. (-5, 25) c. (-3,15) d. (-2, -10) 5. Which ordered pair is a solution of the equation
44. ## Planets- I Love Space!
Why did the outer planets not lose the lighter gases in their atmospheres?
45. ## Math
There are 16 cubic boxes of erasers in a case. What are all the possible dimensions for a case of erasers ? please help need in 30 minutes or less so confused help please thanks
46. ## Physics
In order to determine the coefficients of friction between rubber and various surfaces, a student uses a rubber eraser and an incline. In one experiment, the eraser begins to slip down the incline when the angle of inclination is 36.4° and then moves down
47. ## Chemistry
Chloroform (CHCL3) has a normal boiling point of 61 'C and an enthalpy of vaporization of 29.24 kJ/mol. What is the value of delta Gvap (in kJ) at 61'C for chloroform?
48. ## Algebra
Jay weighs 20 pounds more than Dan whose weight is twice what Kirsten's is. Their total weight is 320 pounds. How much does Kirsten weigh?
49. ## math
350 is 70% of what number? A) 500 B) 420 C) 245 D) 280 What percent of 64 is 24? A) 267% B) 62.5% C) 40% D) 37.5% What percent of $6.50 is $2.60? A) 25% B) 40% C) 90% D) 250% 20% of 140 is what number? A) 280 B) 120 C) 28 D) 12 I am kind of confused on
50. ## Math
Two picture frames are similar. The ratio of the perimeters of the two pieces is 3:5. If the area of the smaller frame is 108 square inches. what is the area of the larger frame?
51. ## chemistry
suppose that a large volume of 3% hydrogen peroxide decomposes to produce 12mL of oxygen gas in 100 s at 298 K. Estimate how much oxygen gas would be produced by and identical solution in 100s at 308 K hi can you please help me set up this problem, then I
52. ## Social Studies
What characteristic did the Mauryan and Mughal empires share? a.Both helped spread Buddhism. b.Both helped spread Islam. c.Both embraced the idea of religious tolerance. d.Both were begun by Aryan people I think either a or d Help?
53. ## Physics
A ball is tossed from an upper-story window of a building. The ball is given an initial velocity of 8.15 m/s at an angle of 18.5o below the horizontal. It strikes the ground 3.30 s later. How far horizontally from the base of the building does the ball
54. ## Chemistry
If 150.0 grams of iron at 95.0 degrees Celsius is placed in an insulated container containing 500.0 grams of water at 25.0 degrees Celsius, and both are allowed to come to the same temperature, what will that temperature be? The specific heat of water is
55. ## Math
A box contains 95 pink rubber bands and 90 brown rubber bands. You select a rubber band at random from the box. Find each probability. Write the probability as a fraction in simplest form. a. Find the theoretical probability of selecting a pink rubber
56. ## Permutations
1. 4! a.) 3 b.) 6 c.) 12 d.) 24****** 2. 5! a.) 25******** b.) 120 c.) 250 d.) 625 5. Five friends are having their picture taken. How many ways can the photographer arrange the friends in a row? a.) 150 b.) 100 c.) 120******* d.) 80 6. How would you apply
57. ## physics
You drop a rock from rest out of a window on the top floor of a building, 50.0 m above the ground. When the rock has fallen 3.80 m, your friend throws a second rock straight down from the same window. You notice that both rocks reach the ground at the
58. ## chem
calculate the solubility of Ca(OH)2 in a 0.469 M CaCl2 solution at 31 degrees C; the Ksp of Ca(OH)2 is 4.96 × 10^-6
59. ## Math
Find the area of the circle (use 3.1 for pi). Round to the nearest tenth. The circle's radius is 23 yd. My answer is 1666
60. ## chemistry
Given that for the vaporization of benzene ΔHvap = 30.7 kJ/mol and ΔSvap = 87.0 J/(K·mol), calculate ΔG (in kJ/mol) for the vaporization of benzene at 13 °C.
61. ## Physics
The conducting rod ab shown below makes frictionless contact with metal rails ca and db. The apparatus is in a uniform magnetic field of 0.855 T, perpendicular to the plane of the figure. The length L of the rod is 25 cm. (a) Find the magnitude of the emf
62. ## Math / Statistics
Supposed that voting in municipal elections is being studied and four people are randomly selected. The accompanying, table provides the probability distribution where x is the number of those people that voted in the last election x | P (x) 0 | 0.23 1 |
63. ## maths
There are 28 in our class. One day there were 6 times as many present as there were absent. How many were present? How many were absent?
64. ## British Literature
Please check my answers #11: Coffeehouses were centers for (A) literary activity (B) political debates © entertainment (D) all of the above. #11: D #12: After viewing the London Fire on September 2, 1666, Samuel Pepys (A) mailed a letter to his father (B)
65. ## chemistry
Calculate the enthalpy of fusion of naphthalene (C10H8) given that its melting point is 128 °C and its entropy of fusion is 47.7 J/(K·mol).
66. ## physics
A 5.0 kg, 52-cm-diameter cylinder rotates on an axle passing through one edge. The axle is parallel to the floor. The cylinder is held with the center of mass at the same height as the axle, then released. a) What is the magnitude of the
67. ## chemistry
What is the molar solubility of Sr3(AsO4)2 and the concentration of Sr^+2 and AsO4^-3 in a saturated solution at 25C? the Ksp is 1.3 E -18. Show the balanced reaction.
68. ## Chemistry
A gas diffuses twice as fast as CF4 (g). The molecular mass of the gas is: I'm not sure how to figure this out without the rates of diffusion.
69. ## Math
Can someone please check my answers? A yard has a square gate that has an area of 8 square meters. 1. What is the height of the gate in meters? Answer: 28 meters 2.How many feet would that be? Answer: 8.4 feet 3. If each side was 1 foot longer, what would
70. ## Maths Mechanics
A plane flies due north at 300km/h but a crosswind blows north-west at 40km/h. Find the resultant velocity of the plane. Once I have drawn I diagram it is often okay, but it is the initial steps that I struggle with.
71. ## Anatomy and physiology
What is the length of one cardiac cycle if each block is 0.04 s?
72. ## Chemistry
Use the data in the table to determine how long it will take for half of the original amount of SO2Cl2 to decompose at the average reaction rate. Experimental Data for SO2Cl2(g) → SO2(g) + Cl2(g) Time (min) [SO2Cl2](M) [SO2](M) [Cl2](M) 0.0 1.00 0.00
73. ## Math
The diameter of a circular pizza is 24 in. How much pizza is eaten (in square inches) if half of it is consumed?
You have four reasons for saying "no." Two of them are very strong reasons, one has a tiny loophole and one is very weak. In your message you should: (Points: 5) include all four reasons. omit the very weak reason and use the other three. use only the two
75. ## Maths
Jack spends 1 3/5 as long on his homework as Jill. Last week, Jill spent 7 1/4 hours doing homework. How long did Jack spend doing homework?
76. ## Chemistry
Given the equilibrium reaction N2(g) + 3 H2(g) ⇌ 2 NH3(g) + heat, an increase in temperature will: a) Increase the value of K b) Decrease the value of K c) Increase or decrease the value of K depending upon the concentration of reactants d) Will have no
77. ## Chemistry
DrBob222, I don't know how to solve this problem either. And yes, this is the question for this problem. I even double check it. Please help. Chloroform (CHCL3) has a normal boiling point of 61 'C and an enthalpy of vaporization of 29.24 kJ/mol. What is
78. ## physics
Find the magnitude and direction of the net electrostatic force exerted on the point charge q3 in the figure below, where q = +2.5 μC and d = 28 cm. The picture is of a square. q1=+q, q2=-2q, q3=-3q, q4=-4
79. ## Maths
Jack spends 1 3/5 as long on his homework as Jill. Last week, Jill spent 7 1/4 hours doing homework. How long did Jack spend doing homework?
80. ## PHYSICS
A train starts from rest and accelerates uniformly, until it has traveled 3 km and acquired a velocity of 20 m/s The train then moves at a constant velocity of 20 for 400 s. The train then slows down uniformly at 0.065 m/s2 until it is brought to a halt.
81. ## math
Complete the indirect proof. Given: Rectangle JKLM has an area of 36 square centimeters. Side is at least 4 centimeters long. Prove: KL ≤ 9 centimeters Assume that a. ____. Then the area of rectangle JKLM is greater than b. _____ , which contradicts the
82. ## Algebra 1
Suppose the growth of a population (in thousands) is represented by the expression 32*2.1^x, where x represents the number of years since 2010. Evaluate the expression for the year 2015. Round to the nearest whole number. a. 41 b. 1,307 c. 67 d. 622
83. ## Math / Statistics
Supposed that voting in municipal elections is being studied and four people are randomly selected. The accompanying table provides the probability distribution where x is the number of those people that voted in the last election x | P (x) 0 0.23 1 0.32 2
84. ## algera help plzzzz
In the diagram below, what is the relationship between the number of pentagons and the perimeter of the figure they form? Represent this relationship using a table, words, an equation, and a graph. Let x = the number of
85. ## English
In the story The Necklace written by Guy de Maupassant what are the examples of similes and metaphors?
86. ## math
The top horizontal edge of the rectangle measures 17 centimeters, the left vertical edge of the rectangle measures 22 centimeters, and the bottom horizontal edge measures 17 centimeters. The left vertical leg is on the right edge of the rectangle, with v
87. ## History
Why was Vietnam key to the US curbing communism in Asia? a. None of these answers. b. Vietnam and the US had strong economic ties. c. Vietnam was a major supplier of rice to the United States. d. Vietnam had a direct tie to Japan which the US had thus far
88. ## Chemistry
I really need help with this I am really confused with the law, I have looked at multiple examples and I am still so confused, I know you can't give me the answer, but is there anyway you could please take me through this question step by step? Use Hess's
89. ## Health
nutrition is the process of the body using foods that we eat in order to sustain life true or false
90. ## Math ~ Check Answers ~
A number cube with the numbers 1 through 6 is rolled. Find P (number is greater than or equal to 3). a.) 4/6 b.) 1/6 c.) 2/6********** d.) 5/6
91. ## Chemistry
Calculate the molar solubility of MgCO3 in a solution that already contains 1.7×10^-3 M sodium carbonate. MgCO3 Ksp = 6.82 × 10^-6. IMPORTANT NOTE: You must use the quadratic equation to solve this problem. (DrBob222- thank you so much for all your help!)
92. ## physics
Find the magnitude and direction of the net electrostatic force exerted on the point charge q3 in the figure below, where q = +2.5 μC and d = 28 cm. The picture is of a square. q1=+q, q2=-2q, q3=-3q, q4=-4
93. ## math
The vertical left edge of a trapezoid is 8 inches and meets the bottom edge of the trapezoid at a right angle. The bottom edge is 4 inches and meets the vertical right edge at a right angle. The right edge is 11 inches. The top slanted edge measures 5
94. ## math
how do you draw and label the ray HA. Then draw point T on it?
95. ## Civics
Davis in executive at a large company. He is concerned that other businesses in his industry have been moving some of their operations to foreign countries in order to cut down on labor costs. The CEO has asked David to make a recommendation on what the
96. ## chemistry
How many grams of MgO are produced during an enthalpy change of -199 kJ? 2 Mg(s) + O2(g) → 2 MgO(s) ΔH = -1204 kJ
97. ## ENGLISH HELP PLS
Identify the choice that best describes the sentence. 5. More than 1,000 years passed before Copernicus proposed that the earth orbited the sun. (1 point) a. simple*** b. compound c. complex d. compound-complex *** = my answer
98. ## Math
If a red light flashes every 3 seconds, a blue light flashes every 4 seconds and a blue light flashes every 5 seconds and they all flashed together at midnight when would be the next time they all flash together?
99. ## English
In the following sentence identify the appositive or appositive phrase and the noun or pronoun defined by the appositive. Mr. Murray, the Scoutmaster, likes to hike. Appositive? Noun or pronoun defined?
|
How to teach neural network a policy for a board game using reinforcement learning?
I need to use reinforcement learning to teach a neural net a policy for a board game. I chose Q-learning as the specific algorithm.
I'd like the neural net to have the following structure (a rough code sketch follows the list):
1. layer - rows * cols + 1 neurons - input - values of consecutive fields on the board (0 for empty, 1 or 2 representing a player), action (natural number) in that state
2. layer - (??) neurons - hidden
3. layer - 1 neuron - output - value of action in given state (float)
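For concreteness, a minimal PyTorch sketch of that structure might look as follows (PyTorch is just one possible choice, and the hidden width of 128 is an arbitrary placeholder I would tune):

```python
# Minimal sketch of the Q-network described above: input = board fields + action index,
# output = estimated Q-value of that (state, action) pair. Sizes are placeholders.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, rows: int, cols: int, hidden: int = 128):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(rows * cols + 1, hidden),  # board values (0/1/2) plus the action
            nn.ReLU(),
            nn.Linear(hidden, 1),                # single float: Q(state, action)
        )

    def forward(self, board_and_action: torch.Tensor) -> torch.Tensor:
        return self.model(board_and_action)

# Example: a 3x3 board, with the action encoded as one extra number appended to the state.
net = QNet(rows=3, cols=3)
state_action = torch.zeros(1, 3 * 3 + 1)
q_value = net(state_action)
```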
My first idea was to begin by creating a map of states, actions and values, and then try to teach the neural network from it. If the teaching process did not succeed, I could increase the number of neurons and start over.
However, I quickly ran into performance problems. First, I needed to switch from a simple in-memory Python dict to a database (not enough RAM). Now the database seems to be the bottleneck (simply speaking, there are so many possible states that retrieving the actions' values takes a noticeable amount of time). The calculations would take weeks.
I guess it would be possible to teach the neural network on the fly, without the map layer in the middle. But how would I choose the right number of neurons in the hidden layer? And how could I tell that I'm losing previously saved (learned) data?
• @NeilSlater Yes, that's my goal. – Luke Jan 5 '16 at 21:40
2 Answers
You need to use some function approximation scheme. In addition, experience replay would be useful for two reasons: (1) you want to keep past memories, and (2) you need to decorrelate the samples used to train your network.
Have a look at DeepMind's DQN on ATARI games. What you are describing is basically what they have solved. The paper is on their website: http://deepmind.com/dqn.html
Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, et al. Human-level control through deep reinforcement learning. Nature. 2015 Feb 25;518(7540):529–33.
With respect to the network architecture, it will definitely require some experimentation. As an alternative, you can also have a look at HyperNEAT, which evolves the network topology:
Hausknecht M, Khandelwal P, Miikkulainen R, Stone P. HyperNEAT-GGP: A HyperNEAT-based Atari general game player. In: Proceedings of the 14th annual conference on Genetic and evolutionary computation p. 217–24. Available from: http://dl.acm.org/citation.cfm?id=2330195
For more strategic games, you can also have a look at "Giraffe: Using Deep Reinforcement Learning to Play Chess": http://arxiv.org/abs/1509.01549
In order to train an agent to play a board game, the first important task is to create a Reinforcement Learning environment.
The four essential aspects of an environment are:
• Observation space: What is the input/state, as well as its shape and range?
• Action space: What is the possible action, as well as its shape and range?
• Reward function: What is the reward for a particular pair of state + action? Will it be immediate or sparse reward? The reward function will affect the difficulty of the environment a lot.
• When will an episode/a game end?
I highly suggest using the same API as OpenAI Gym because of its popularity and high quality. Then you can try applying these algorithms directly before implementing your own. They are state-of-the-art algorithms and the quality is guaranteed.
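For example, a board-game environment skeleton following the classic gym API could look like this (the class name, board encoding, and reward values are placeholders, not a finished design):

```python
# Rough Gym-style skeleton for a board-game environment (illustrative only;
# the game rules and reward scheme are placeholders you would fill in).
import gym
import numpy as np
from gym import spaces

class BoardGameEnv(gym.Env):
    def __init__(self, rows: int = 3, cols: int = 3):
        super().__init__()
        self.rows, self.cols = rows, cols
        # Observation: one integer per field (0 empty, 1 or 2 for the players).
        self.observation_space = spaces.Box(low=0, high=2, shape=(rows * cols,), dtype=np.int8)
        # Action: index of the field to play on.
        self.action_space = spaces.Discrete(rows * cols)
        self.board = np.zeros(rows * cols, dtype=np.int8)

    def reset(self):
        self.board[:] = 0
        return self.board.copy()

    def step(self, action):
        # Placeholder dynamics: an illegal move ends the episode with a penalty.
        if self.board[action] != 0:
            return self.board.copy(), -1.0, True, {}
        self.board[action] = 1
        done = bool((self.board != 0).all())   # real win/draw detection goes here
        reward = 0.0                           # sparse reward, e.g. +1 on a win
        return self.board.copy(), reward, done, {}
```

The choice of an immediate versus sparse reward here is exactly the "reward function" point above and strongly affects how hard the environment is to learn.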
About the neural network architecture, you can exploit the nature of the game for better result.
For example, in Go, the position is symmetric and the actions are simple and position-independent, which makes it very well suited to a convolutional neural network (AlphaGo Zero).
On the other hand, in Chess, because the actions are asymmetric and position-dependent, Google had to redesign the architecture to train an agent to play chess.
|
# How to account for a corner node with zero-flux condition at an extrapolated distance
I am trying to implement a numerical solver and am having trouble dealing with boundary conditions, especially in the corners.
I have a 2D mesh, and on the left I have a Dirichlet condition, on the bottom I have a Neumann condition, and on the top and on the right I have a zero-flux vacuum boundary condition.
That is:
On the left: $$u = B$$, a constant value
On the bottom: $$\frac{du}{dn} = 0$$, symmetry of the problem
On the right and top: $$\frac{1}{u} \frac{du}{dn} = \frac{-1}{d_n}$$, where $$d_n$$ is an extrapolation length. So we're saying that the flux will be technically zero at that distance from the boundary
I can set up my finite difference equations for those mostly, with my Laplacian depending on $$u_{i,k-1}$$, $$u_{i-1,k}$$, $$u_{i,k}$$, $$u_{i+1,k}$$ and $$u_{i,k+1}$$, with my boundary conditions looking like:
# Bottom left corner (i=0, k=0):
$$u_{i,k} = B$$
# Bottom right corner (i=0, k=L):
$$u_{1,k}$$ = $$u_{-1,k}$$ [ghost point]
$$\frac{u_{i,k+1} - u_{i,k-1}}{\Delta z + d} = -\frac{u_{i,k}}{d}$$ $$\rightarrow$$ $$u_{i,k} = \frac{d}{d+\Delta z} u_{i,k-1}$$
And I replace everything in my Laplacian so that it only depends on $$u_{i+1,k}$$ and $$u_{i,k-1}$$
# Rest of bottom points (i=0, k between 0 and L):
Use Neumann to replace the ghost point $$u_{i-1,k}$$ with $$u_{i+1,k}$$
# Top left corner (i=R, k=0)
$$u_{i,k} = B$$
# Top points (i=R, k between 0 and L):
$$\frac{u_{i-1,k} - u_{i+1,k}}{\Delta r + d} = -\frac{u_{i,k}}{d}$$ $$\rightarrow$$ $$u_{i,k} = \frac{d}{d+\Delta r} u_{i-1,k}$$
And I replace everything in my Laplacian so that it only depends on $$u_{i,k-1}$$, $$u_{i,k+1}$$ and $$u_{i-1,k}$$
# Right side points (i between 0 and R, k=L):
$$\frac{u_{i,k+1} - u_{i,k-1}}{\Delta z + d} = -\frac{u_{i,k}}{d}$$ $$\rightarrow$$ $$u_{i,k} = \frac{d}{d+\Delta z} u_{i,k-1}$$
And I replace everything in my Laplacian so that it only depends on $$u_{i-1,k}$$, $$u_{i+1,k}$$ and $$u_{i,k-1}$$
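For reference, this is roughly how I fold these edge relations into the linear system when I assemble it (a schematic sketch; the array names, grid sizes, and the value of d below are made up for illustration):

```python
# Schematic assembly of the boundary rows for the extrapolated-distance condition.
import numpy as np

R, L = 50, 80            # number of grid intervals in r and z (example values)
dr = dz = 0.1            # grid spacings (example values)
d = 0.7                  # extrapolation length (example value)
N = (R + 1) * (L + 1)
A = np.zeros((N, N))
rhs = np.zeros(N)

def idx(i, k):
    """Flatten the (i, k) grid index into a row/column number."""
    return i * (L + 1) + k

# Right edge (k = L, 0 < i < R): u[i,L] - d/(d + dz) * u[i,L-1] = 0
for i in range(1, R):
    row = idx(i, L)
    A[row, idx(i, L)] = 1.0
    A[row, idx(i, L - 1)] = -d / (d + dz)
    rhs[row] = 0.0

# Top edge (i = R, 0 < k < L): u[R,k] - d/(d + dr) * u[R-1,k] = 0
for k in range(1, L):
    row = idx(R, k)
    A[row, idx(R, k)] = 1.0
    A[row, idx(R - 1, k)] = -d / (d + dr)
    rhs[row] = 0.0

# Top-right corner (i = R, k = L): the two relations collide here -- this is the open question.
```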
# Top right corner (i=R, k=L):
This one has me stuck. Basically, I have two "concurrent" conditions applying there.
$$\frac{u_{i-1,k} - u_{i+1,k}}{\Delta r + d} = -\frac{u_{i,k}}{d}$$ $$\rightarrow$$ $$u_{i,k} = \frac{d}{d+\Delta r} u_{i-1,k}$$
$$\frac{u_{i,k+1} - u_{i,k-1}}{\Delta z + d} = -\frac{u_{i,k}}{d}$$ $$\rightarrow$$ $$u_{i,k} = \frac{d}{d+\Delta z} u_{i,k-1}$$
And I do not know how to apply them.
Any ideas? Or does something look off somehow?
Hopefully this is the right Stack Exchange place, if not please direct me to the correct one!
• Welcome to SciComp.SE, I think that this is the right site for your question. I think that your question is really close to an old one answered by WolfgangBangerth, but I don't see it right now. – nicoguaro Dec 31 '19 at 19:11
• Regarding your post, would you mind using MathJax for equations? – nicoguaro Dec 31 '19 at 19:11
• I'll improve the way the math is displayed asap. Couldn't figure it out easily while posting, sorry about that – William Abma Dec 31 '19 at 19:34
• @nicoguaro Is this the answer you're referring to? scicomp.stackexchange.com/a/23676/33528 What I'm seeing from looking around is that "it doesn't matter, just pick one direction" should work, though I'm not certain at all. – William Abma Jan 1 '20 at 18:23
• Yes, that is the one. – nicoguaro Jan 1 '20 at 18:24
|
Resource requirements for efficient quantum communication
using all-photonic graph states generated from a few matter
qubits
Paul Hilaire, Edwin Barnes, and Sophia E. Economou
Department of Physics, Virginia Tech, Blacksburg, Virginia 24061, USA
Quantum communication technologies show great promise for applications ranging from the secure transmission of secret messages to distributed quantum computing. Due to fiber losses, long-distance quantum communication requires the use of quantum repeaters, for which there exist quantum memory-based schemes and all-photonic schemes. While all-photonic approaches based on graph states generated from linear optics avoid coherence time issues associated with memories, they outperform repeater-less protocols only at the expense of a prohibitively large overhead in resources. Here, we consider using matter qubits to produce the photonic graph states and analyze in detail the trade-off between resources and performance, as characterized by the achievable secret key rate per matter qubit. We show that fast two-qubit entangling gates between matter qubits and high photon collection and detection efficiencies are the main ingredients needed for the all-photonic protocol to outperform both repeater-less and memory-based schemes.
The ability to share entangled states over long distances is a major milestone for the realization of a fully-functional quantum internet [1, 2]. Beyond secure communications via quantum key distribution (QKD) [3-5], the implementation of such a quantum internet would also have various applications, ranging from distributed quantum computing [6] and secure access to a remote quantum computer [7, 8] to accurate clock synchronization [9] and improved telescope observations [10].
However, enabling world-wide quantum communication requires addressing the major problem of photonic losses, which significantly reduces the range of quantum information transfer. Even though direct amplification of a quantum state is made impossible by the no-cloning theorem [11, 12], this exponential photon loss can still be overcome through the realization of quantum repeaters (QR) [13, 14].
Several QR approaches have been proposed and can be divided into two main categories depending on the method used to propagate quantum information between two adjacent repeater nodes [15]. The first approach proposed, referred to here as a "memory-based" approach, relies on heralded-entanglement generation [16-24] between two quantum memories (QM) situated at adjacent repeater nodes. Examples of memory-based schemes are depicted schematically in Figs. 1(a),(b). The heralded entanglement generally succeeds when a detector situated at an intermediate node measures photons emitted by the QMs [25-27]. A classical signal carrying the outcome of an entanglement generation attempt must be sent back to the QMs, thus limiting the repetition rate of these protocols. When successful, entanglement swapping transfers the entanglement through these repeater nodes.
The second category of repeater [28-34] relies on logically-encoded multi-photon states [32, 35], resistant both to photonic losses and errors, to transfer quantum information across a network. One particularly promising approach of this type was put forward in Ref. [31], which proposed an all-optical QR protocol based on repeater graph states (RGS). An example of an RGS with logical encoding is shown in Fig. 1(c), while the corresponding communication protocol is summarized in Fig. 1(d) and will be detailed later. Ref. [31] also proposed a method to construct these states probabilistically using single-photon sources, linear optics, and detectors [36].
Figure 1: (a) 2-quantum-memory QR protocol. (b) Multiplexed quantum repeater composed of 2N memories per
repeater with heralded entanglement generation (HEG) at measurement nodes. (c) Left side: Example of a repeater
graph state. Vertices represent qubits (blue: physical qubits, orange: logical qubits) while edges represent CZ gates.
Right side: Example of logical encoding using a depth-3 tree graph. (d) Quantum communication scheme using the
RGS protocol. Matter qubits are also included.
However, this generation procedure requires about $10^6$ single-photon sources per repeater node to just barely outperform direct fiber transmission [37]. These findings suggest that the RGS protocol may be well beyond the reach of current and near-future technological capabilities.
However, it remains unclear whether the RGS protocol can become more feasible if alternative methods to create the states are used. Ref. [38] proposed a deterministic generation procedure to produce a linear graph state using one quantum emitter which emits spin-entangled photons [39]; this procedure was later demonstrated experimentally [40]. Ref. [41] showed theoretically that emitters that undergo entangling gates can also be used to generate two-dimensional cluster states. Refs. [42, 43] built on this protocol and demonstrated that entanglement between emitters can be harnessed for the generation of more complex photonic graph states. These ideas have been extended further to general prescriptions and to protocols tailored to specific physical systems [44-47]. Ref. [42] introduced a protocol for producing an arbitrary-sized RGS using only a few matter qubits, thus significantly decreasing the resource overhead required for RGS generation, making it deterministic in principle. Using this generation technique, the RGS protocol only necessitates a few-qubit processor at each repeater node, which would ease its practical implementation compared to other error-correction-based proposals that generally require several hundreds of qubits per repeater node [29, 30] (with the notable exception of Ref. [48], which uses techniques introduced in Ref. [42] and requires deterministic spin-photon Bell measurements). Although the RGS protocol with deterministic state generation seems promising, a systematic and detailed evaluation of its performance and resource requirements has not been carried out.
In this paper, we compare the resource-efficiency of this protocol, characterized by the achievable secret key rate per matter qubit, to direct fiber transmission and to QR schemes based on memories and heralded entanglement generation. We first show that the rate per matter qubit has a fundamental upper bound in the case of memory-based QRs. We then review the RGS protocol and how RGSs can be generated using a few matter qubits. We evaluate the performance of this scheme, show that its rate per matter qubit does not have a theoretical upper bound, and find the conditions under which it outperforms both the repeater-less and the memory-based QR approaches. These conditions depend sensitively on the speed with which two-qubit gates between the matter qubits can be executed and on the collection and detection efficiencies of the photons emitted by these matter qubits.
1 Upper bound on rate for memory-based repeater schemes
In this section, we show that there is a theoretical upper bound, $R^{(QM)}_{\max}$, on the rate per matter qubit for protocols based on quantum memories and heralded entanglement generation. In such protocols, the total distance $L$ between Alice and Bob is divided into smaller distances $L_0$ by $N_{QR} = L/L_0 - 1$ repeater nodes. Quantum memories at adjacent repeater nodes are entangled via a heralded entanglement procedure (see Fig. 1(a)). When a repeater node shares entanglement connections with its two adjacent nodes, entanglement swapping is performed on the two memory qubits within that node to create a direct entanglement connection between memories on the adjacent nodes. This procedure can be repeated until Alice and Bob share an entangled qubit pair.
It is clear that creating an entangled pair between Alice and Bob requires generating an entanglement connection between each adjacent pair of repeaters. This means that the overall protocol rate $R^{(QM)}$ is limited by the entanglement generation rate $\langle T_{\rm ent}\rangle^{-1}$ between two adjacent repeaters. Here, we use this fact to derive an upper bound on the rate per matter qubit for QR protocols that use quantum memories and heralded entanglement generation.
To determine an upper bound on $\langle T_{\rm ent}\rangle^{-1}$, we focus for concreteness on the protocol presented in Fig. 1(a), which uses two quantum memories per node. A quantum memory should emit a photon that is maximally entangled with one of its degrees of freedom. Two photons generated at adjacent repeater nodes arrive at the same measurement node situated halfway between the two repeaters, where they are measured in a Bell state basis. Because a photon Bell state measurement using only linear optics succeeds with probability at best 1/2 (without ancillary qubits or QND measurements [49-54]), the overall success probability of the distant heralded entanglement generation is $P_{\rm ent} \leq 1/2$. It is worth mentioning that a method for achieving heralded entanglement generation with higher success probability has been proposed [55], but its efficacy is restricted to qubits separated by a short distance, so we exclude this from our analysis (see Supplementary Materials for more details). A classical signal must then inform the repeater nodes of the success or failure of the Bell state measurement. Because the distance from the measurement node to the repeater nodes is $L_0/2$, this means that the overall heralded entanglement generation attempt takes total time $T_{\rm trial} \geq L_0/c$.
Notation | Definition
L | Total distance between Alice and Bob.
L_0 | Distance between adjacent nodes.
N_QR | Number of repeater nodes.
2m | Number of arms of the RGS.
b | Branching vector of an error-correction tree (b = (b_0, b_1, ..., b_{n-1})).
T_CZ | CZ gate time.
η_t(l) | Transmission of a fiber of length l (η_t(l) = exp(-l/L_att)).
η_c, η_d | In-fiber collection and detection efficiencies of photons.
L_att, c | Attenuation distance of the fiber and speed of light in a fiber (L_att ≈ 20 km).
t_att | Average time of flight of photons in the fiber (t_att = L_att/c).
T_RGS | Generation time of an RGS.
R, R_m | Rate and rate per matter qubit of the protocol.
ε | Single-photon error rate.
Table 1: Table of notations
This includes the time $L_0/(2c)$ for the single-photon transfer from the repeater to the measurement node and also the time $L_0/(2c)$ for the classical signaling in the opposite direction. Here, we are neglecting the time it takes to prepare and pump the quantum memories. Throughout this protocol, a QM can be maximally entangled with at most one other qubit (either a photon or another QM). Therefore, it cannot emit another spin-entangled photon before receiving the classical signal carrying the information about the success or failure of the Bell measurement, hence limiting the repetition rate of the protocol. To generate an entanglement connection between the two adjacent nodes, the procedure must be repeated on average $P_{\rm ent}^{-1}$ times. The entanglement generation rate is therefore $\langle T_{\rm ent}\rangle^{-1} = P_{\rm ent}/T_{\rm trial} < c/(2L_0)$. We derived this result for a specific heralded entanglement generation protocol, but it also holds for all known protocols [25-27] (see Supplementary Materials).
From these results, we can show that the rate per matter qubit (where the number of matter qubits is $N_m = 2(N_{QR} + 1) = 2L/L_0$) has a theoretical upper bound, $R^{(QM)}_{\max}$:
$$\frac{R^{(QM)}}{N_m} \leq \frac{\langle T_{\rm ent}\rangle^{-1}}{N_m} \leq \frac{c}{4L} = R^{(QM)}_{\max}. \qquad (1)$$
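As a quick numerical check, taking a fiber speed of light of roughly $c \approx 2 \times 10^8$ m/s and a total distance $L = 1000$ km, Eq. (1) gives $R^{(QM)}_{\max} = c/4L \approx 50$ Hz per matter qubit, the value quoted for this distance in Sec. 3.1.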
This theoretical upper bound also holds if there are more than two QMs at each repeater node (see Fig. 1(b)), as the rate would increase linearly with the number of matter qubits. Therefore, we have derived a general theoretical upper bound for memory-based protocols based on heralded entanglement generation. It is worth noting that the fundamental reason for this upper bound comes from the need for classical signaling in these protocols. Such classical signaling is not required for RGS protocols, enabling them to surpass this limit, as we show below. In the Supplementary Materials, we also show that a tighter bound, $R^{(QM)}_{\max} = c/7L$, can be obtained for memory-based schemes in which there are two quantum memories per repeater node and heralded entanglement swapping is used.
We emphasize that the upper bound derived in this section holds for QR protocols that are based on quantum memories and distant heralded entanglement generation. This corresponds to the first and second generations of QRs, as categorized in Ref. [15]. Consequently, in the following, the RGS-based protocol will be compared only to these categories of QRs, for which the performance is limited by classical signaling.
2 Rate of the RGS protocol with deterministic graph state generation
In this section, we review the RGS protocol as introduced in Ref. [31] and the deterministic generation of RGSs using a few matter qubits as proposed in Ref. [42]. We show how the rate of the RGS protocol depends on various parameters in the case where deterministic state generation methods are used.
2.1 RGS protocol and rate
An RGS is a quantum state $|G\rangle$ that can conveniently be represented in the form of a graph $G = (V, E)$ with $V$ vertices and $E$ edges. Each vertex corresponds to a photonic qubit prepared in the $|+\rangle$ state, and each edge corresponds to the application of a CZ gate between the two qubits it connects:
$$|G\rangle = \prod_{(i,j)\in E} CZ_{ij}\, |+\rangle^{\otimes V}. \qquad (2)$$
An example of the graph representing an RGS is shown in Fig. 1(c). These states include $2m$ inner photonic qubits that are referred to as the first-leaf qubits. All the first-leaf qubits are fully connected to each other, and each of them is also connected to one additional qubit, referred to as a second-leaf qubit. The first-leaf qubits are logically-encoded using tree graph states; further details on this are given below.
In an RGS protocol (see Fig. 1(d)), the distance $L$ separating Alice and Bob is also divided into smaller steps $L_0$ by $N_{QR} = L/L_0 - 1$ source nodes where the RGSs are created. The RGS is divided into two equal parts, each containing $m$ arms; one part is sent to the left adjacent measurement node and the other to the right. Thus, half of one RGS meets half of another RGS at each measurement node, where each second-leaf qubit from one of the half-RGSs undergoes a Bell measurement with its counterpart from the other half-RGS. Details of the measurements and of the entanglement-swapping procedure at the measurement node are given later, but it is important to note that the RGS protocol does not use quantum memories at all and thus cannot store the information. This implies that an entanglement connection between Alice and Bob should be realized in only one trial with a probability $P_{AB}$ much higher than direct fiber transmission: $P_{AB} \gg \eta_t(L)$. The generation of an entanglement connection between Alice and Bob requires the realization of successful entanglement connections between all the adjacent RGSs. It is important to note that the measurements performed at each measurement node do not require information from other measurement nodes, so that, in contrast to memory-based approaches, the RGS protocol does not require any classical signaling while entanglement is being extended through the network. Classical signaling is needed only once at the end of the protocol to recover the Pauli frame of the Bell pair shared by Alice and Bob, i.e. to determine which local Pauli rotations Alice and Bob's qubits should undergo. This means that in the RGS protocol it is not necessary to wait for any classical signaling before proceeding to generate the next batch of RGSs needed to create the next Bell pair shared between Alice and Bob (see Supplementary Materials for more details). Consequently, unlike the memory-based scheme, the rate of the RGS protocol does not depend on the time it takes for the photon to get from one node to the next.
It is limited only by the generation time $T_{\rm RGS}$ of an RGS:
$$R^{(\rm RGS)} = \frac{P_{\rm RGS-RGS}^{\,L/L_0}}{T_{\rm RGS}}, \qquad (3)$$
with $P_{\rm RGS-RGS}$ the probability to generate an entanglement connection between two RGSs. The main notations used in this work are defined in Table 1.
A successful entanglement link can be generated if at least one of the Bell measurements at a measurement node succeeds. In that case, the two first-leaf qubits attached to the second-leaf qubits that underwent the successful Bell measurement are measured in the X basis, while the remaining $2m-2$ first-leaf qubits are measured in the Z basis. The X measurements transfer the entanglement connection to the next two adjacent measurement nodes, while the Z measurements disentangle all the excess qubits associated with failed Bell measurements. All these first-leaf qubit measurements must be successful in order to reliably create an entanglement link. Therefore, the probability to successfully create an entanglement link between two RGSs is given by:
$$P_{\rm RGS-RGS} = \left(1 - (1 - P_{\rm Bell})^m\right) \Pr(M_{X,L})^2 \Pr(M_{Z,L})^{2m-2}, \qquad (4)$$
where $P_{\rm Bell} = P_{\rm ph}^2/2$ is the probability of a successful Bell measurement. This depends on $P_{\rm ph} = \eta_c \eta_d \eta_t(L_0/2)$, which is the probability that a single photon is emitted and collected into the fiber ($\eta_c$), is transmitted to the measurement node ($\eta_t(L_0/2)$), and is detected ($\eta_d$). $\Pr(M_{X,L})$ and $\Pr(M_{Z,L})$ are the probabilities that the logical X and Z measurements on the first-leaf qubits succeed. Note that if the first-leaf qubits were not logically encoded, we would have $\Pr(M_X)^2 \Pr(M_Z)^{2m-2} = P_{\rm ph}^{2m} \leq \eta_t(L_0)$, and it would be impossible to have an advantage over direct fiber transmission. Therefore, the loss-tolerance of logically-encoded qubits is crucial for this protocol. Next, we review this encoding, which was introduced in Refs. [35] and [31].
2.2 Loss-tolerance with tree graph states
We review how the probabilities $\Pr(M_{X,L})$ and $\Pr(M_{Z,L})$ depend on the single-photon transfer probability $P_{\rm ph}$ and on the shape of the tree graph state used for the logical encoding. Ref. [35] demonstrated that this encoding remains loss-tolerant as long as $P_{\rm ph}$ is above 50%.
We consider the calculation of the probabilities of successful measurements of the logically encoded qubits in the presence of loss errors on a tree graph state. A tree is characterized by its branching vector $\vec b = (b_0, b_1, ..., b_{n-1})$ (see Fig. 1(c)), which describes the connectivity between the different levels of the tree. To perform a Z measurement $M_{Z,k}$ on a qubit at level $k$, it is possible to either perform a direct measurement on this qubit (with success probability $P_{\rm ph}$) or, if it fails (with probability $1 - P_{\rm ph}$), perform an indirect measurement (with probability $r_k$). Thus, the overall success probability of a Z measurement at level $k$ is:
$$\Pr(M_{Z,k}) = P_{\rm ph} + (1 - P_{\rm ph})\, r_k. \qquad (5)$$
To perform an indirect measurement on a qubit (call it A) at level $k$, one can use the stabilizing property of a graph state [56]. It is possible to deduce the outcome of the Z measurement on A by performing an X measurement on another qubit (B) at level $k+1$ and a Z measurement on all the qubits, $C_i$, that are in the neighborhood of B at level $k+2$ (see Fig. 2(a)). This works because of the invariance of graph states when they are acted upon by their stabilizers:
$$|G\rangle = X_B \prod_{j \in N(B)} Z_j \,|G\rangle = X_B\, Z_A \prod_{i \in \{1,\dots,b_{k+1}\}} Z_{C_i} \,|G\rangle, \qquad (6)$$
so we have:
$$Z_A |G\rangle = X_B \prod_{i \in \{1,\dots,b_{k+1}\}} Z_{C_i} \,|G\rangle. \qquad (7)$$
A single indirect measurement has a success probability $s_k$. Note, however, that the tree structure allows $b_k$ indirect measurement attempts, and only one needs to succeed to indirectly measure a qubit at level $k$. So the probability that at least one indirect measurement succeeds is
$$r_k = 1 - (1 - s_k)^{b_k}, \qquad (8)$$
with
$$s_k = P_{\rm ph} \Pr(M_{Z,k+2})^{b_{k+1}}. \qquad (9)$$
It is thus possible to derive the success probability of a measurement recursively, given that the qubits at the lowest level can only be measured directly: $\Pr(M_{Z,n}) = P_{\rm ph}$.
Figure 2: (a) Indirect measurement of qubit A at the level k using a stabilizer based on qubit B at level k + 1. (b)
Protocol for the deterministic generation of an RGS with logical encoding using matter qubits.
Logical measurements in the X or Z basis are given by [31]:
$$\Pr(M_{X,L}) = r_0, \qquad \Pr(M_{Z,L}) = \left(P_{\rm ph} + (1 - P_{\rm ph})\, r_1\right)^{b_0} = \Pr(M_{Z,1})^{b_0}. \qquad (10)$$
It is interesting to note that logical encoding with tree graph states can also correct single-qubit errors, as shown in Ref. [31] and described in the Supplementary Materials. This will be used later when we evaluate the sensitivity of the RGS protocol to errors.
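As an illustration, the recursion of Eqs. (5), (8), and (9), the logical measurement probabilities of Eq. (10), and the link probability of Eq. (4) can be evaluated numerically with a short script such as the following; the function names and example parameters are illustrative and not part of the protocol definition:

```python
import math

def tree_measure_probs(p_ph, b):
    """Return (Pr(M_X logical), Pr(M_Z logical)) for branching vector b, Eq. (10)."""
    n = len(b)
    pr_z = {n: p_ph}                              # lowest level: direct measurement only
    r = {}
    for k in range(n - 1, -1, -1):
        pr_z_k2 = pr_z.get(k + 2, 1.0)            # no qubits two levels below the bottom
        b_k1 = b[k + 1] if k + 1 < n else 0
        s_k = p_ph * pr_z_k2 ** b_k1              # Eq. (9)
        r[k] = 1.0 - (1.0 - s_k) ** b[k]          # Eq. (8)
        pr_z[k] = p_ph + (1.0 - p_ph) * r[k]      # Eq. (5)
    return r[0], pr_z[1] ** b[0]                  # Eq. (10)

def p_link(eta_cd, L0_over_Latt, m, b):
    """Probability of an entanglement link between two adjacent RGSs, Eq. (4)."""
    p_ph = eta_cd * math.exp(-0.5 * L0_over_Latt)  # eta_c * eta_d * eta_t(L0 / 2)
    p_bell = 0.5 * p_ph ** 2
    pr_x, pr_z = tree_measure_probs(p_ph, b)
    return (1.0 - (1.0 - p_bell) ** m) * pr_x ** 2 * pr_z ** (2 * m - 2)

# Example with the L = 50 L_att parameters of Table 2: m = 14, b = (10, 5), L0 = 0.19 L_att.
print(p_link(eta_cd=1.0, L0_over_Latt=0.19, m=14, b=(10, 5)))
```

For these Table 2 parameters with $\eta_c \eta_d = 1$, this evaluates to roughly 0.999 per elementary link.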
2.3 Generation of an RGS
The achievable rate between Alice and Bob also depends on the repetition rate of the protocol, which is given by the generation time of the RGS. Because it is impossible to realize deterministic two-qubit gates on photons with linear optics, such a graph state can either be generated probabilistically by the recursive fusion of smaller graphs using linear optics and Bell state measurements, as shown in Refs. [31] and [37], or deterministically using a few matter qubits, as shown in Ref. [42]. We now review the latter.
An arbitrary-sized RGS can be generated deterministically by following a given sequence based on four operations on matter qubits: the emission of a photon maximally entangled with the matter qubit $E_{\rm ph}$, the Hadamard gate $H$, measurements in the Pauli bases $M_X$, $M_Y$, $M_Z$, and the CZ gate. The generation of an RGS with $2m$ arms and a tree graph encoding with branching vector $\vec b = (b_0, b_1, ..., b_{n-1})$ requires $n+1$ matter qubits $Q_1, ..., Q_{n+1}$, and is given by the sequence:
$$M_{Y,Q_1} \left( M_{X,Q_3}\, M_{X,Q_2}\, CZ_{Q_1,Q_3}\, CZ_{Q_2,Q_3}\, E_{{\rm ph},Q_3}\, G_3^{\,b_0} \right)^{2m}, \quad \text{with } G_k = M_{Z,Q_k}\, H_{Q_k}\, E_{{\rm ph},Q_k}\, CZ_{Q_{k-1},Q_k}\, G_{k+1}^{\,b_{k-2}} \text{ and } G_{n+2} = E_{{\rm ph},Q_{n+1}}, \qquad (11)$$
where, for simplicity, we have omitted the single photonic qubit rotations.
The overall generation time of an RGS using this procedure is therefore
$$T_{\rm RGS} = 2m\left(1 + f(\vec b, n-1)\right) T_{E_{\rm ph}} + T_M + 2m\left(2 + f(\vec b, n-2)\right)\left(T_M + T_{CZ}\right) + 2m f(\vec b, n-2)\, T_H, \qquad (12)$$
with $f(\vec b, k) = \sum_{i=0}^{k} \prod_{j=0}^{i} b_j$ and with $T_{E_{\rm ph}}$, $T_M$, $T_H$ and $T_{CZ}$ the times for photon emission, matter qubit measurement, Hadamard and CZ gates, respectively.
In the following, we make the realistic assumption that the CZ gate time $T_{CZ}$ is much longer than the durations of the other operations, and so we set $T_H = T_M = T_{E_{\rm ph}} = 0$ for simplicity. With this assumption, the generation time $T_{\rm RGS}$ only depends on the number of CZ gates and their duration:
$$T_{\rm RGS} = 2m\left(2 + \sum_{k=0}^{n-2} \prod_{j=0}^{k} b_j\right) T_{CZ}. \qquad (13)$$
This is for an RGS with $2m$ arms and a logical tree encoding with branching vector $\vec b = (b_0, ..., b_{n-1})$. In the following, we will assume a depth-two tree graph state ($\vec b = (b_0, b_1)$), so that the full RGS can be generated with only three matter qubits ($n = 2$). The total number of matter qubits in the network is therefore $N_m = (L/L_0 - 1)(n+1) = 3(L/L_0 - 1)$.
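For completeness, a matching numerical sketch of the CZ-gate count of Eq. (13) and the resulting rate per matter qubit of Eq. (3) for the depth-two case; the inputs and the example values are illustrative only:

```python
def rate_per_matter_qubit(p_link_value, L_over_Latt, L0_over_Latt, m, b0, t_cz_seconds):
    """Rate per matter qubit from Eqs. (3) and (13) for a depth-2 tree (n = 2)."""
    n_cz = 2 * m * (2 + b0)                  # Eq. (13): f(b, n-2) = b0 when n = 2
    t_rgs = n_cz * t_cz_seconds              # CZ-dominated RGS generation time
    n_links = L_over_Latt / L0_over_Latt     # number of elementary links, L / L0
    rate = p_link_value ** n_links / t_rgs   # Eq. (3)
    n_matter = 3 * (n_links - 1)             # N_m = 3 (L/L0 - 1)
    return rate / n_matter

# Example: L = 50 L_att, L0 = 0.19 L_att, m = 14, b0 = 10, T_CZ = 10 ns,
# with p_link_value taken from the previous sketch (about 0.999).
print(rate_per_matter_qubit(0.9988, 50, 0.19, 14, 10, 10e-9))
```

With the link probability from the previous sketch, this gives a rate per matter qubit of a few hundred Hz at $T_{CZ} = 10$ ns, consistent with the numbers quoted in Sec. 3.1.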
3 Performance comparisons
We now evaluate the performance of the RGS protocol when it is generated deterministically by a few matter qubits. We compare our results to the direct fiber transmission limit derived by Refs. [57, 58] and to the memory-based upper bound $R^{(QM)}_{\max} = c/4L$ found in Sec. 1.
3.1 Optimizing the RGS protocol
Figure 3: Maximum achievable rate per matter qubit $R^{(RGS)}_m T_{CZ}$ (here normalized by the parameter $T_{CZ}^{-1}$) and optimal node separation $L_0$ for the RGS protocol with depth-2 logical tree encoding, with the total distance fixed at $L = 50 L_{\rm att}$, for a range of RGS parameters $b_0$, $b_1$, $m$. (a) Three-dimensional plot showing optimal rate and node separation as a function of $b_0$, $b_1$ and $m$. For each point in the plot, the value of $L_0$ is optimized to maximize $R^{(RGS)}_m T_{CZ}$. Point sizes represent maximized $R^{(RGS)}_m T_{CZ}$ values, while point colors represent the optimal values of $L_0$ (indicated with the color scale). The RGS parameters that achieve the largest value of $R^{(RGS)}_m T_{CZ}$ in this case are indicated with dashed lines. The corresponding maximal $R^{(RGS)}_{\max} T_{CZ}$ is given in Table 2. (b,c,d) Three different orthogonal two-dimensional slices of the plot shown in panel (a).
In the following, we show how the RGS protocol parameters can be optimized to maximize the overall rate per matter qubit, $R^{(RGS)}_m$, or, in the presence of errors, the secret key rate per matter qubit. For the moment, we assume perfect photon collection and detection efficiencies ($\eta_c \eta_d = 1$) and no single-photon errors ($\epsilon = 0$); we take these effects into account later on. From Eqs. (3), (4), and (13), the achievable rate per matter qubit $R^{(RGS)}_m$ of the RGS protocol for a total distance $L$ is inversely proportional to the CZ gate time $T_{CZ}$, but it depends non-linearly on the separation distance, $L_0$, between two RGS nodes and on the RGS shape (number of arms $2m$ and tree branching vector $\vec b = (b_0, b_1)$). Therefore, for each choice of the total distance $L$, there is a certain node separation and RGS shape that maximize the achievable rate.
L/L_att | L_0/L_att | m  | b_0 | b_1 | R^(RGS)_max × T_CZ
10      | 0.23      | 11 | 8   | 4   | 2.4 × 10^-5
25      | 0.21      | 13 | 10  | 5   | 6.8 × 10^-6
50      | 0.19      | 14 | 10  | 5   | 2.8 × 10^-6
100     | 0.17      | 15 | 10  | 5   | 1.1 × 10^-6
150     | 0.15      | 16 | 10  | 5   | 6.8 × 10^-7
Table 2: Optimal RGS parameters m, b_0, b_1 and node separation L_0 for several different total network distances L. Here, L_att is the attenuation length of optical fibers.
The optimization of the rate per matter qubit for a total distance $L = 50 L_{\rm att}$ ($\approx 1000$ km) is shown in Fig. 3. The position of each point corresponds to a specific RGS shape, the color of the point indicates the node separation $L_0$ that optimizes the rate for that shape, and the size of the point represents the maximal rate per matter qubit for these parameters. This optimization converges, allowing us to extract the optimal RGS shape and distance $L_0$ for this particular choice of the total distance $L$. The optimization can be repeated for various choices of $L$, and the extracted optimal parameters are recorded in Table 2.
We compare the RGS protocol to direct fiber transmission in Fig. 4(a). The maximum achievable rate per matter qubit for the RGS protocol, $R^{(RGS)}_{\max}$, is shown as a function of the total distance $L/L_{\rm att}$. In the case of direct transmission, we show seven different curves corresponding to the achievable rate for seven different values of the single-photon source repetition rate. We see that the RGS protocol outperforms direct transmission with the highest repetition rate for $L \gtrsim 30 L_{\rm att}$. In the same figure, we also show how well the protocol works if we keep the optimized parameters fixed and change the total distance $L$. To demonstrate this, we fix the RGS parameters and node separation $L_0$ to the values that optimize the rate for a total distance $L_{\rm tar}$. We then adjust $L$ away from $L_{\rm tar}$ without changing the RGS parameters or $L_0$, and we calculate the new rate for each value of $L$. In Fig. 4(a), we show the resulting rates as a function of $L/L_0$ for five different choices of $L_{\rm tar}$. We see that the RGS protocol continues to work well over a broad range of total distance $L$ when we use parameters that are optimized for a large total distance $L_{\rm tar}$.
For long distances $L$ in the range $50 L_{\rm att}$ to $200 L_{\rm att}$, $R^{(RGS)}_{\max}$ scales approximately as $L^{-1.27}$, and thus the scaling with distance is slightly worse than the upper bound scaling for memory-based schemes, $R^{(QM)}_{\max} \propto L^{-1}$, obtained in Sec. 1. However, contrary to the memory-based schemes, there are no fundamental upper limits imposed by classical signaling on the maximal achievable rate $R^{(RGS)}_{\max}$, as the latter is inversely proportional to the CZ gate time, $T_{CZ}$. If $T_{CZ}$ is sufficiently small, then the RGS protocol should surpass the upper bound of memory-based schemes. This is illustrated in Fig. 4(b), where the red region of the plot indicates the regime where the RGS protocol outperforms memory-based schemes.
From our calculations, it seems that the maximum achievable rate per matter qubit, $R^{(RGS)}_{\max}$, is in principle unbounded, since it is inversely proportional to the CZ gate time. We should recall, however, that these results are based on the assumption that the CZ gate takes much longer than the other operations $O$ made on the matter qubits: $T_{CZ} \gg T_O$. Decreasing $T_{CZ}$ initially increases the rate, but if $T_{CZ}$ becomes small enough, neglecting the durations of other operations eventually becomes invalid. Consequently, to increase the rate further, not only the CZ gate time but all the operation times should be reduced simultaneously. If the durations of all these operations could be made arbitrarily small, then the rate per matter qubit would increase to infinity. In practice, however, the operation times could also have an intrinsic lower bound that limits the performance of the RGS protocol, but these lower bounds would depend on the specific system on which the protocol is implemented, while the limit imposed by classical signaling is much more stringent and general.
For distances below $200L_{\mathrm{att}} \approx 4000$ km, a gate time $T_{CZ}$ below $6 \times 10^{-4}\,t_{\mathrm{att}} \approx 60$ ns is sufficiently short to outperform any memory-based scheme. For this range of total distance, the optimal value of the node separation $L_0$ ranges from $0.15$–$0.19\,L_{\mathrm{att}} \approx 3$–$3.8$ km. As an illustration of the RGS performance, for $L = 1000$ km and $T_{CZ} = 10$ ns, the total rate is $R = 220$ kHz for $N_m = 786$ matter qubits used, corresponding to $R^{(\mathrm{RGS})}_{\max} = 276$ Hz per matter qubit, while $R^{(\mathrm{QM})}_{\max} = 50$ Hz per matter qubit for this distance.
3.2 Sensitivity to errors
So far, we have considered the generation of a perfect RGS, which is a pure entangled state of many photons. Generating such a perfect state is not feasible experimentally, and so we need to evaluate the sensitivity of the RGS protocol to errors. To do so, we now consider single-photon loss and single-photon errors in our optimization process.
The single-photon loss (other than the fiber losses) depends on the probability that an emitted photon is neither collected into the fiber nor detected; this corresponds to the case where $\eta_c \eta_d \neq 1$. Fig. 4(c) shows the optimized rate per matter qubit for different values of $\eta_c \eta_d$. Similarly, Fig. 4(d) compares the performance of the RGS protocol with the upper bound for memory-based protocols, $R^{(\mathrm{QM})}_{\max}$, as a function of $T_{CZ}$ and $\eta_c \eta_d$ for a total distance $L = 50L_{\mathrm{att}}$. These results show that the photon losses must be below 15% ($\eta_c \eta_d > 85\%$) in order for the RGS protocol to outperform both the direct fiber transmission and memory-based protocols. This rather stringent requirement is mainly due to the loss-tolerance of the tree graph state, which only works when the total photon loss (including fiber losses) is below 50%.
Apart from the photon losses, the RGS protocol also depends on other kinds of errors that reduce the fidelity $F_{AB}$ of the final entangled photon pair shared by Alice and Bob. These errors include, but are not limited to, single-photon measurement errors, photon Bell-measurement errors, and depolarization errors. In the present case where the RGS is generated using matter qubits, the limited coherence time of the matter qubits should also limit the fidelity of the entangled photons. For QKD applications, the final secret key rate depends on the total rate $R$ and the fidelity $F_{AB}$ [59, 60]:
$$R_{\mathrm{skr}} = R\left(1 - 2h(F_{AB})\right), \tag{14}$$
where $h$ is the binary entropy function:
$$h(F) = -F \log_2(F) - (1 - F)\log_2(1 - F). \tag{15}$$
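As a concrete illustration of Eqs. (14)–(15), the short script below evaluates the secret key rate for a given raw rate and fidelity; the numerical inputs are arbitrary examples, not values taken from the analysis above.

```python
import math

def binary_entropy(F):
    # h(F) = -F log2(F) - (1 - F) log2(1 - F), Eq. (15)
    if F in (0.0, 1.0):
        return 0.0
    return -F * math.log2(F) - (1 - F) * math.log2(1 - F)

def secret_key_rate(R, F_AB):
    # R_skr = R (1 - 2 h(F_AB)), Eq. (14)
    return R * (1 - 2 * binary_entropy(F_AB))

# Illustrative numbers only: a 220 kHz raw rate and a fidelity of 0.98
print(secret_key_rate(R=220e3, F_AB=0.98))  # ≈ 158 kHz
```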
|
# I Force and rate of change of momentum
1. Feb 24, 2016
### alkaspeltzar
My question is simply: if force does equate to the rate of change of momentum, then why is it not taught as this rate rather than simply as a push or pull?
Is it because really they are the same thing and a push or pull is much easier to explain/work with? Just curious
Guess up until now I didn't even think of force as a rate of change of momentum...maybe I'm old school
Thanks
2. Feb 24, 2016
### Staff: Mentor
Force is still push or pull. Only NET force (vector sum of pushes and pulls) is rate of change of momentum.
3. Feb 24, 2016
### alkaspeltzar
Okay, so it is only the NET force which can be considered equal to the rate of change of momentum. Got it!
Is it probably because in most applications (or at least what I work with in school and engineering) we only worry about the simple forces, so this relationship doesn't come up, force is simply the push/pull, and we leave the rate of change of momentum out of it?
I do understand the relationship between force and momentum now.....I guess what is bothering me is that for years, I have been able to think and do my calcs and never needed to use the relationship until now. Guess it has me confused if I should be thinking of force differently than classic F=ma, push pull that has been drilled into my head.
Thank you
4. Feb 24, 2016
### Staff: Mentor
You can, of course, leave out ma if it is a static equilibrium problem. But, in dynamic situations, ma needs to be included.
5. Feb 24, 2016
### A.T.
An individual force is the rate of momentum transfer.
Net force is the rate of total momentum change.
The F in F=ma stands for net force, which is the rate of total momentum change.
6. Feb 24, 2016
### alkaspeltzar
Okay, but my main question is: is a physical force the same as a rate of change in momentum? Like if I hit a wall with a force, is that the same as saying I hit the wall with a rate of change of momentum?
Or is simply rate of change of momentum related to force therefore we can calculate with one or the other?
7. Feb 24, 2016
### Staff: Mentor
A push force or a pull force is sometimes (infrequently) referred to as a rate of momentum transfer. But you can't represent it as mass times acceleration unless it is the only (net) force acting.
8. Feb 24, 2016
### HallsofIvy
"Force equals the rate of change of momentum" means that $F= \frac{d(mv)}{dt}$. In the special (but important) case that mass, m, is constant, that is the same as $F= m\frac{dv}{dt}= ma$.
9. Feb 25, 2016
### A.T.
10. Feb 25, 2016
### alkaspeltzar
Sorry bear with me but I am just not getting it.
So are you saying that the physical force (what we long ago defined as a push or pull) is exactly the same as the rate of change in momentum, aka the rate of momentum transfer?
Don't you have to have a force to have a rate of change in momentum? Part of me thinks if a body has acceleration, then there must be a force. Likewise, if a body has a rate of change in momentum, it must have a force causing it... so aren't they two separate things, just related, since one can't exist without the other?
And if they are the same, why don't we use one name... why say force if it is really a rate of momentum transfer, or vice versa?
I guess I'm looking for a simple explanation, please no math at this point.
11. Feb 25, 2016
### Staff: Mentor
Historically, using terms like "rate of momentum transfer" to represent a contact force (push or pull) came about when people began realizing the analogy between the differential equations for the force balances in continua, and the differential equations for heat- and mass transfer. In these sets of equations, the forces per unit area appear in the same locations in the equations as the rate of heat flow per unit area and the rate of mass flow per unit area. So, it became natural for them to start referring to the force per unit area as the rate of momentum flow per unit area (or the rate of momentum transfer per unit area).
12. Feb 25, 2016
### alkaspeltzar
That is not helping me, I am sorry, that is more advanced than I can understand. Can you or someone please explain what I have asked above? I just want to know if really they are the same thing, regardless of name or the math. Looking to have someone explain how a rate of change in momentum really is a contact force.
13. Feb 25, 2016
### A.T.
Change and transfer are not the same. Change is the sum of all transfers: the net effect.
For the same reason we use the word "velocity" instead of "rate of position change".
14. Feb 25, 2016
### Staff: Mentor
In the normal context of applying physics at the level you are asking about, I would never refer to an individual force as a rate of change of momentum, unless it was the only force acting on a body, in which case it would also then be equal to the rate of change of momentum of the body. But, if there are multiple forces acting on a body, I would only consider the resultant of these multiple forces (i.e., the net force) as being equal to the rate of change of momentum of the body, and I would not consider each one individually as being the same thing as a rate of change of momentum.
Now this may differ from how A.T. looks at it, but that is just a matter of personal taste and preference.
15. Feb 25, 2016
### alkaspeltzar
A.T., so are you saying that rate of change of momentum is not the same as rate of momentum transfer?
Assuming net forces: "Force equals the rate of change of momentum" means that F = d(mv)/dt. Does that mean the FORCE IS PHYSICALLY the rate of change of momentum, or is this just a math relation?
16. Feb 25, 2016
### alkaspeltzar
So Chestermiller, what you are saying is that yes, the force and rate of change of momentum are the same, assuming it is the only force(or total net force)?
Doesn't the force CAUSE the rate of change of momentum, so they are different entities still?
17. Feb 25, 2016
### Staff: Mentor
That's how I view it.
I would agree with this interpretation also.
18. Feb 25, 2016
### A.T.
What is the difference and why should we care?
Since both happen simultaneously, how would you know which is the cause, and why should we care?
19. Feb 25, 2016
### alkaspeltzar
If they are different entities though, won't they have different units? The units for rate of change of momentum are the same as for force; how come?
20. Feb 25, 2016
### alkaspeltzar
Well, since high school, college and now engineering, force has always been a push or a pull, calculated by F=ma, with the understanding that a change in acceleration or momentum is caused by a force. Hearing that a force is a rate of change of momentum seems foreign and is outside my understanding, so I am asking, trying to understand how or why it is or isn't the same thing.
|
## What problem are we solving?
For a large system consisting of small entities, the phase space can be extremely large. In this problem, we assume that only one quantity is measured on a macroscopic level. We then have to find the values of the other measurable quantities.
## Line of Reasoning
Jaynes pointed out in this paper that we are solving an insufficient-reason problem in statistical physics. What we can measure is some macroscopic quantity, from which we derive other macroscopic quantities. That is, we know a system with a lot of possible microscopic states ${ s_i }$, while the probabilities ${ p_i }$ of the microscopic states are not known at all. We also know a macroscopic quantity $\langle f(s_i) \rangle$, which is defined as $\langle f(s_i) \rangle = \sum_i p_i f(s_i)$.
The question that Jaynes asked was the following.
How are we supposed to find another macroscopic quantity that also depends on the microscopic state of the system, say $g(s_i)$?
It is a quite interesting question for stat mech. In my opinion, it can be generalized. To visualize this problem, we now think of a landscape of the states. Instead of using the states as the dimensions, we use the probabilities as the dimensions, since they are unknown. In the end, we have a coordinate system with one dimension for each of the probabilities ${p_i}$ and one more dimension for the value of $\langle g \rangle (p_i)$, which depends on ${p_i}$. Now we have constructed a landscape of $\langle g \rangle (p_i)$. The question is, how does our universe arrange the landscape? Where are we in this landscape if we are at equilibrium?
In Jaynes’ paper, he mentioned several crucial problems.
1. Do we need to know the landscape formed by ${p_i}$ and $\langle f(s_i)\rangle$? No.
2. Do we need to find the exact location of our system? We have to.
3. How? Using the max entropy principle (see the sketch after this list).
4. How do we calculate another macroscopic quantity based on one observed macroscopic quantity and the conservation of probability density? Using the probabilities found in the previous step.
5. Why does max entropy work from the information point of view? It assumes less about the system.
6. Why does the result actually predict measurements even though the theory is purely objective? This shows how simple nature is. Going philosophical.
7. With the generality of the formalism, what else can we do with it to improve the power of our statistical mechanics? Based on the information we know, we have different formalisms of statistical physics.
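Here is a minimal sketch of what points 3 and 4 look like for a toy three-state system (the states, the functions $f$ and $g$, and the measured value are all made up for illustration). The constrained maximum-entropy distribution has the exponential form $p_i \propto e^{-\lambda f(s_i)}$, with $\lambda$ fixed by the measured $\langle f \rangle$; the second quantity $\langle g \rangle$ is then predicted from the same ${p_i}$.

```python
import numpy as np
from scipy.optimize import brentq

# Toy example: three microstates with values of f and g (arbitrary numbers)
f = np.array([1.0, 2.0, 4.0])
g = np.array([0.5, 1.0, 3.0])
f_measured = 2.2  # the single macroscopic observation <f>

def probs(lam):
    # Maximum-entropy distribution subject to a constraint on <f>
    w = np.exp(-lam * f)
    return w / w.sum()

# Fix lambda so that <f> under p_i ∝ exp(-lam f_i) matches the measurement
lam = brentq(lambda l: probs(l) @ f - f_measured, -50.0, 50.0)
p = probs(lam)

print(p)      # the maximum-entropy probabilities {p_i}
print(p @ g)  # the predicted <g>, computed from the same {p_i}
```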
The idea is great.
However, I think we have to consider another principle. I call this max mutual information principle. Suppose we do not have this max entropy principle. Somehow, we come up with an expression of the average of another quantity. What will happen from a landscape point of view?
If we construct these two landscapes:
1. $\langle f(x_i)\rangle$ as a function of ${p_i}$,
2. and $\langle g(x_i)\rangle$ as a function of ${p_i}$.
Max mutual information indicates that the ${p_i}$ inferred from the two quantities should be the same. Huh, trivial.
|
# The Sorting Order on a Coxeter Group
Abstract: Let $(W,S)$ be an arbitrary Coxeter system. For each sequence $\omega =(\omega_1,\omega_2,\ldots) \in S^{\ast}$ in the generators we define a partial order, called the $\omega$-sorting order, on the set of group elements $W_{\omega} \subseteq W$ that occur as finite subwords of $\omega$. We show that the $\omega$-sorting order is a supersolvable join-distributive lattice and that it is strictly between the weak and strong Bruhat orders on the group. Moreover, the $\omega$-sorting order is a "maximal lattice" in the sense that the addition of any collection of edges from the Bruhat order results in a nonlattice. Along the way we define a class of structures called supersolvable antimatroids and we show that these are equivalent to the class of supersolvable join-distributive lattices.
### Citation
Drew Armstrong. The Sorting Order on a Coxeter Group. 20th Annual International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2008), 2008, Viña del Mar, Chile. pp.411-416, ⟨10.46298/dmtcs.3602⟩. ⟨hal-01185135⟩
|
## Machine Learning Tutorial
# Hyperparameter Tuning
Whenever a machine learning algorithm is implemented on a specific dataset, the performance is judged based on how well it generalizes, i.e., how it reacts to new, never-before-seen data. In case the performance of the learning algorithm is not satisfactory or there is room for improvement, certain parameters in the algorithm need to be changed/tuned/tweaked. These parameters are known as ‘hyperparameters’ and the process of varying these hyperparameters to better the learning algorithm’s performance is known as ‘hyperparameter tuning’.
These hyperparameters are not learnt directly through the training of algorithms. Their values are fixed before the training on the data begins. They deal with settings such as learning_rate, i.e., how quickly the model should be able to learn, how complicated the model is, and so on.
There can be a wide variety of hyperparameters for every learning algorithm. Selecting the right set of hyperparameters so as to gain good performance is an important aspect of machine learning.
In this post, we will look at the below-mentioned hyperparameter tuning strategies:
• RandomizedSearchCV
• GridSearchCV
Before jumping into understanding how these two strategies work, let us assume that we will perform hyperparameter tuning on logistic regression algorithm and stochastic gradient descent algorithm.
## RandomizedSearchCV
RandomizedSearch samples hyperparameter combinations at random, instead of searching exhaustively over every combination (like GridSearch does). This reduces the processing time of the search. The scikit-learn module has the RandomizedSearchCV function that can be used to implement random search. The hyperparameter ranges are defined before searching over them.
A parameter called ‘n_iter’ is used to specify the number of combinations that are randomly tried. If ‘n_iter’ is too small, finding the best combination is difficult, and if ‘n_iter’ is too large, the processing time increases. It is important to find a balanced value for ‘n_iter’:
import pandas as pd
train = pd.read_csv("C:\\Users\\Vishal\\Desktop\\train.csv")
test = pd.read_csv("C:\\Users\\Vishal\\Desktop\\test_1.csv")
X_train = train.drop(['id', 'target'], axis=1)
y_train = train['target']
X_test = test.drop(['id'], axis=1)
loss = ['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron']
penalty = ['l1', 'l2', 'elasticnet']
alpha = [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000]
learning_rate = ['constant', 'optimal', 'invscaling', 'adaptive']
class_weight = [{1:0.5, 0:0.5}, {1:0.4, 0:0.6}, {1:0.6, 0:0.4}, {1:0.7, 0:0.3}]
eta0 = [1, 10, 100]
param_distributions = dict(loss=loss,
penalty=penalty,
alpha=alpha,
learning_rate=learning_rate,
class_weight=class_weight,
eta0=eta0)
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import RandomizedSearchCV
sgd = SGDClassifier(loss="hinge", penalty="l2", max_iter=5)
random = RandomizedSearchCV(estimator=sgd,
param_distributions=param_distributions,
scoring='roc_auc',
verbose=1, n_jobs=-1,
n_iter=1000)
random_result = random.fit(X_train, y_train)
print('Best Score: ', random_result.best_score_)
print('Best Params: ', random_result.best_params_)
Output:
Best Score: 0.7981584905660377
Best Params: {'penalty': 'elasticnet', 'loss': 'log', 'learning_rate': 'optimal', 'eta0': 1, 'class_weight':
{1: 0.5, 0: 0.5}, 'alpha': 0.1}
## GridSearchCV
This approach is considered to be the traditional way of performing hyperparameter optimization. It searches exhaustively over the specified grid of hyperparameters until every combination has been tried. The scikit-learn module has GridSearchCV, which can be used to implement this approach.
Our set of parameters is defined before searching over it.
import pandas as pd
train = pd.read_csv("path to train.csv")
test = pd.read_csv("path to test_1.csv")
X_train = train.drop(['id', 'target'], axis=1)
y_train = train['target']
X_test = test.drop(['id'], axis=1)
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
penalty = ['l1', 'l2']
C = [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000]
class_weight = [{1:0.5, 0:0.5}, {1:0.4, 0:0.6}, {1:0.6, 0:0.4}, {1:0.7, 0:0.3}]
solver = ['liblinear', 'saga']
param_grid = dict(penalty=penalty,
C=C,
class_weight=class_weight,
solver=solver)
logistic = LogisticRegression()
grid = GridSearchCV(estimator=logistic,
param_grid=param_grid,
scoring='roc_auc',
verbose=1,
n_jobs=-1)
grid_result = grid.fit(X_train, y_train)
print('Best Score: ', grid_result.best_score_)
print('Best Params: ', grid_result.best_params_)
Output:
Best Score: 0.7860030747728861
Best Params: {'C': 1, 'class_weight': {1: 0.7, 0: 0.3}, 'penalty': 'l1', 'solver': 'liblinear'}
The advantage of using grid search is that it guarantees finding the best combination among the parameter values supplied to it.
The disadvantage is that it is time-consuming and computationally expensive when the parameter grid or the input dataset is large. This can be overcome with the help of RandomizedSearch.
#### Conclusion
In this post, we understood the usage and significance of hyperparameter tuning, along with 2 important strategies which are used to tune the hyperparameters.
|
# Math Help - Laplace transform of delta(t - a)
1. ## Laplace transform of delta(t - a)
Hi all,
I am trying to determine the Laplace transform of the Dirac delta function,
delta(t - a).
I end up with:
lim as k ---> 0 of [e^(-as) - e^(-as)*e^(-sk)] / (s*k)
I then use L'Hopital's rule... but I can't seem to get
lim as k ---> 0 of [s*e^(-as)*e^(-sk)] / s
Could someone please show me how to get the above expression using L'Hopital's rule... for some reason I get 2 other terms which I can't seem to cancel.
ArTiCK
2. Originally Posted by ArTiCK
Hi all,
I am trying to determine the laplace transform of dirac delta function,
delta(t - a).
I end up with:
lim as k ---> 0 [e^(-as) -e^(-as)*e^(-sk)]/ (s*k)
I then use L'Hopitals rule... but i can't seem to get
lim as k--> [s*e^(-as)*e^(-sk)]/ s
Could someone please show me how to get the above equation using L'Hopitals rule... some reason i get 2 other terms which i can't seem to cancel.
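For what it's worth, the limit can also be checked symbolically; here is a minimal sympy sketch (assuming s, a, k > 0):

```python
import sympy as sp

s, k, a = sp.symbols('s k a', positive=True)
expr = (sp.exp(-a*s) - sp.exp(-a*s)*sp.exp(-s*k)) / (s*k)

# L'Hopital in k (0/0 form): differentiate numerator and denominator w.r.t. k
num, den = sp.fraction(sp.together(expr))
ratio = sp.diff(num, k) / sp.diff(den, k)
print(sp.simplify(ratio))    # simplifies to exp(-s*(a + k)), i.e. [s*e^(-as)*e^(-sk)] / s

# The k -> 0 limit gives the Laplace transform of delta(t - a)
print(sp.limit(expr, k, 0))  # exp(-a*s)
```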
|
# Maths Notation for Consecutive height differences. Halp!
## 18 Replies - 1673 Views - Last Post: 01 August 2016 - 02:41 PM
### #1 cfoley
• Cabbage
Reputation: 2388
• Posts: 5,013
• Joined: 11-December 07
# Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 12:30 PM
Let's say I have a list of heights in order: (H1, H2...Hn). If I want to find the maximum height difference between two consecutive elements, how do I write that? If I want the sum of the differences, it's easy:
∑i=1n-1(Hi+1 - Hi)
(of course the "n-1" and "i=1") should be above and below each other.
I'm pretty sure I can't do this:
maxi=1n-1(Hi+1 - Hi)
Relatedly, how would I write the set of height differences between consecutive heights?
Reason for asking: I've been asked to retrospectively write formal specifications for code that already exists. Fun times!
## Replies To: Maths Notation for Consecutive height differences. Halp!
### #2 modi123_1
• Suitor #2
Reputation: 13848
• Posts: 55,286
• Joined: 12-June 08
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 12:44 PM
When you say sum do you mean consecutive sum, or just sum of two elements?
As in items in a list are:
[0] = 5
[1] = 6
[2] = 1
[3] = 7
.. and if you wanted to find [4] it would be 7 + 1... and not 7 + 1 + 6 + 5?
### #3 macosxnerd101
• Games, Graphs, and Auctions
Reputation: 12242
• Posts: 45,328
• Joined: 27-December 08
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 12:51 PM
The notation would be:
$$\max_{i \in [n-1]} H_{i+1} - H_{i}$$
Where $[k] = \{1, 2, \ldots, k\}$.
(I am assuming you can parse the LaTeX )
### #4 cfoley
• Cabbage
Reputation: 2388
• Posts: 5,013
• Joined: 11-December 07
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 01:03 PM
Oh cool. Thanks!
Assuming n = 4, does [n-1] expand to [1, 2, 3]?
Could I also write [1, n-1]?
I think I'm missing something with the where part. I don't understand how it relates to the rest of it, particularly where k comes in.
modi123_1, say I have the heights:
(10, 20, 20, 45, 47)
The differences would be:
(10, 0, 25, 2)
and the maximum difference would be
25
Triple post:
Is this correct?
$$\label{max height diff} \mathit{answer} = \max_{i = 1}^{n - 1}(H_{i+1} - H_{i})$$
I like it because it's closer to my original thoughts.
But I only like it if it is also correct.
This post has been edited by cfoley: 01 August 2016 - 01:12 PM
### #5 macosxnerd101
• Games, Graphs, and Auctions
Reputation: 12242
• Posts: 45,328
• Joined: 27-December 08
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 01:14 PM
We have [3] = \{1, 2, 3\}, which is a set. Note that [3] is not a list/tuple.
Quote
Could I also write [1, n-1]?
I would avoid introducing new notation. The bracket notation [n] = \{1, 2, ..., n\} is fairly standard. You see it in texts like Diestel's Graph Theory text and research papers. [1, n] is not standard notation.
### #6 cfoley
• Cabbage
Reputation: 2388
• Posts: 5,013
• Joined: 11-December 07
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 01:16 PM
$$\label{max height diff} \mathit{answer} = \max_{i = 1}^{n - 1}(H_{i+1} - H_{i})$$
### #7 macosxnerd101
• Games, Graphs, and Auctions
Reputation: 12242
• Posts: 45,328
• Joined: 27-December 08
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 01:28 PM
It's not an iteration. So \max_{i=1}^{n-1} isn't formal notation. The mathematical programming formulation deals with constraints and sets. You have an objective function you want to maximize or minimize with respect to constraints. You should really write:
\max_{i \in [n-1]} H_{i+1} - H_{i}
As an afterthought- are you only interested in magnitude or sign as well? I.e., should your objective function be H_{i+1} - H_{i} or |H_{i+1} - H_{i}| ?
### #8 cfoley
• Cabbage
Reputation: 2388
• Posts: 5,013
• Joined: 11-December 07
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 01:38 PM
Thanks for the explanation. I'm much happier with that now.
The input is constrained to be sorted, so the result will always be positive. The ABS notation isn't necessary, but I'll have a think about whether it adds clarity.
However, that leads to another question. How should I go about describing this "list" of heights? It's not a set, as there might be duplicates, and as I described, the ordering is important.
### #9 macosxnerd101
• Games, Graphs, and Auctions
Reputation: 12242
• Posts: 45,328
• Joined: 27-December 08
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 01:40 PM
It's a tuple. You can supply the tuple upfront. Note that a tuple is an element from the Cartesian product of sets. So if we look at (x, y) in the Cartesian plane, (x, y) \in \mathbb{R}^{2}.
I'll also comment that the mathematical programming formulation is useful to know. In operations research, it is common to cope with hard problems by using a linear programming relaxation. That is, we construct an Integer-Linear Programming formulation (note that ILP is NP-Hard) and relax the integer variables to be real-variables. There are algorithmic techniques like branch-and-bound or cutting planes for polytopes to obtain integer solutions (which may not be optimal).
### #10 cfoley
• Cabbage
Reputation: 2388
• Posts: 5,013
• Joined: 11-December 07
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 01:56 PM
All right, so if I want to describe the tuple of height differences, could I write something like this?
\left\{ {(H_{i+1}-H_{i}) \mid i \in [n-1]} \right\}
Mathematical notation has been one of my weaker areas for a while. I can generally read it OK, but writing it isn't something I have practised.
### #11 macosxnerd101
• Games, Graphs, and Auctions
Reputation: 12242
• Posts: 45,328
• Joined: 27-December 08
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 01:57 PM
I would just stick with the tuple of heights, rather than their differences. It's easier to work with it, and you don't get bogged down in notation.
\max_{i \in [n-1]} H_{i+1} - H_{i} s.t.
H = (H_{1}, ..., H_{n})
Where you replace H_{1}, H_{2}, ..., H_{n} with your fixed values.
### #12 cfoley
• Cabbage
Reputation: 2388
• Posts: 5,013
• Joined: 11-December 07
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 02:04 PM
That's not really an option for me.
The thing I posted above would look a bit like this:
{(Hi+1 - Hi) | i ∈ [n-1]}
This post has been edited by cfoley: 01 August 2016 - 02:05 PM
### #13 macosxnerd101
• Games, Graphs, and Auctions
Reputation: 12242
• Posts: 45,328
• Joined: 27-December 08
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 02:22 PM
So this: {(H_{i+1} - H_{i}) | i ∈ [n-1]} is a set of ordered pairs.
Quote
That's not really an option for me.
Can you clarify this? I was under the impression that:
Quote
Let's say I have a list of heights in order: (H1, H2...Hn). If I want to find the maximum height difference between two consecutive elements, how do I write that?
So we have:
INSTANCE: A sequence H of non-negative integers ordered in non-decreasing order.
OUTPUT: The maximum difference of consecutive terms.
So if we let n := |H|, then the mathematical program would be:
\max_{i \in [n-1]} H_{i+1} - H_{i}
Does this sound correct? If not, what details am I missing?
### #14 cfoley
• Cabbage
Reputation: 2388
• Posts: 5,013
• Joined: 11-December 07
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 02:30 PM
Sorry for the confusion. You have understood (and answered) my original question perfectly. Thanks!
I'm now asking a new question where I want to describe a tuple containing the differences.
Quote
So this: {(H_{i+1} - H_{i}) | i ∈ [n-1]} is a set of ordered pairs.
I don't understand how I have accidentally turned it into pairs. Does this bit (H_{i+1} - H_{i}) not find the difference between the elements in the pairs?
Is it straightforward to make it describe a tuple rather than a set?
### #15 macosxnerd101
• Games, Graphs, and Auctions
Reputation: 12242
• Posts: 45,328
• Joined: 27-December 08
## Re: Maths Notation for Consecutive height differences. Halp!
Posted 01 August 2016 - 02:31 PM
I would just say: Let D be a tuple of length n-1, where D_{i} = H_{i+1} - H_{i}.
Sometimes English helps.
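For what it's worth, the same thing in code is a one-liner; a small Python sketch using the example heights from post #4:

```python
H = (10, 20, 20, 45, 47)  # the example heights, sorted

# D_i = H_{i+1} - H_i, a tuple of length n-1
D = tuple(H[i + 1] - H[i] for i in range(len(H) - 1))
print(D)       # (10, 0, 25, 2)
print(max(D))  # 25, i.e. max over i in [n-1] of H_{i+1} - H_i
```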
|
# In a low-energy transmission electron microscope (TEM), the typical average beam current is about -1.00 mu...
###### Question:
In a low-energy transmission electron microscope (TEM), the typical average beam current is about −1.00 μA. If a material sample is exposed to this electron current for 17.8 min, how many electrons impact the material during that time?
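The number follows from charge = current × time, divided by the elementary charge; a short sketch of that arithmetic:

```python
I = 1.00e-6      # beam current magnitude, A
t = 17.8 * 60.0  # exposure time, s
e = 1.602e-19    # elementary charge, C

n_electrons = I * t / e
print(f"{n_electrons:.2e} electrons")  # ≈ 6.67e+15
```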
# Why do environmental problems exist? Think of, for instance, the pollutants that come out of...
Why do environmental problems exist? Think of, for instance, the pollutants that come out of coal-fired power plants that generate electricity. These include both conventional pollutants that cause smog and can kill people, and greenhouse gases. Why do electric utilities produce electricity using coal? If they switched completely to renewable sources, such as wind or solar energy or hydroelectricity, what would be the effects on consumers?
|
Voltage Drop (Three phase)
1. Jul 16, 2008
lukas86
I was just looking for some insight on this problem. It is not a homework problem, my boss just asked me to figure it out.
Now, there is a 120V supply, with 600 feet of cable (copper) running to a load, and a 600 ft neutral coming back to the supply. Ignoring the resistance of the load, he asked what the voltage at the load was, due to the voltage drop in the cables, with a current of 12A.
Now, he wanted to know the values for #12 wire and #8 wire. Below is what I came to...
VD = 1.732*k*Q*I*D/CM (equation from http://ecmweb.com/mag/electric_code_calculations_17/)
VD: Voltage drop
k: Direct current constant (12.9 for copper)
Q: Alternating current adjustment factor (skin effect), I thought it doesn't apply here...
I: Current (I think, it said "The load in amperes at 100%, not 125% for motors or continuous loads." I just put the 12A here)
D: Distance in ft
CM: Cable in Circ-Mils
#12 Wire
VD = 1.732*k*Q*I*D/CM
= 1.732*12.9 * 12A * 600ft / 6529.8CM
= 24.636V
Now, that is what I thought the voltage drop in the 600ft cable to the load was. Now I am confused because at first I thought that since there is also 600ft of cable back to the source, the voltage across the load would be 70.727V seeing as 120V - 24.636V (cable to load) - 24.636V (cable back to source) = 70.727V across the load.
Then I thought again, well... in 3 phase, if perfectly balanced, there is no current in the neutral lead back to the source. So the voltage across the load would be around 95V.
-----------------------------------------------------------------------------------------
For #8 Wire
VD = 1.732*k*Q*I*D/CM
= 1.732*12.9 * 12A * 600ft / 16509CM
= 9.744V
Now then the same situation as for the #12 wire, if I am doing any of this correctly... should be something like 120V - 9.744V *2 = 100.511V across the load or... 120V - 9.744V = 110.255V.
Any help or requests for further understanding so I could respond to get more help would be greatly appreciated. Thanks in advance.
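For what it's worth, the arithmetic above is easy to script; a small sketch reproducing the same formula and numbers (whether to subtract the drop once or twice is the balanced/unbalanced question discussed in the replies below):

```python
# Reproduces the numbers in the post above (Q taken as 1)
k = 12.9   # DC constant for copper
I = 12.0   # load current, A
D = 600.0  # one-way distance, ft
CM = {"#12": 6529.8, "#8": 16509.0}  # circular mils

for gauge, cm in CM.items():
    vd = 1.732 * k * I * D / cm
    print(gauge, round(vd, 3), "V drop,", round(120.0 - vd, 3), "V at the load")
```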
2. Jul 16, 2008
lukas86
Code (Text):
AWG Dia-mils TPI Dia-mm Circ-mils Ohms/Kft Ft/Ohm Ft/Lb Ohms/Lb Lb/Kft *Amps MaxAmps
8 128.49 7.7828 3.2636 16509 0.6282 1591.8 20.011 0.0126 49.973 22.012 33.018
12 80.807 12.375 2.0525 6529.8 1.5883 629.61 50.593 0.0804 19.765 8.7064 13.060
3. Jul 17, 2008
stewartcs
That equation should work just fine.
In a balanced three-phase system the current in the neutral is zero. If it is unbalanced, the current in the neutral will be the phasor sum of the phases. Hence, the total cable drop for any phase would be the algebraic sum of the voltage drop in the hot wire plus the voltage drop in the neutral wire (keeping in mind this is phasor addition).
In a single-phase system you have to include the neutral wire resistance since there is current flowing in it and thus drops some voltage due to its resistance.
CS
4. Jul 17, 2008
lukas86
Alright thanks.
If I took a different approach and took the Ohms/(1000feet) value of the cable, and multiplied that by the length (600 ft) then multiply that by the current... would that be another method?
= .6282 ohms/Kft * 600ft
= .37692 ohms
V = IR
= 12A * .37692 ohms
= 4.52304 V....
So that 4.52 V would be the drop over 600 feet of #8 wire. Is this another way? It gives less voltage drop, though. It's just that I am not all that clear on these two approaches.
5. Jul 18, 2008
stewartcs
That would give you the single phase voltage drop in a three-phase system. Since you have a balanced 3-phase system, the total drop would be multiplied by $$\sqrt{3}$$ to account for the phase difference (remember I said you have to perform phasor addition). That is just the equivalent for a balanced 3-phase system.
The approach I gave you the first time and the equation you posted are identical except that the approach I gave you will allow you to calculate an unbalanced load's drop also. Whereas the equation is for a balanced load.
Good luck.
CS
6. Jul 18, 2008
stewartcs
BTW, I forgot to mention that those equations neglect the reactance in the cable. However, they are of course exact for DC.
If you want the to include the reactance, the approximate voltage drop for AC then use the general equation for line-to-neutral:
$$V = IR \cos{\phi} + IX \sin{\phi}$$
Then multiply by 2 for single-phase or $$\sqrt{3}$$ for three-phase to get the line-to-line drop.
Hope that helps.
CS
EDIT: If you need the exact voltage drop formula, let me know (it's longer to type!).
Last edited: Jul 18, 2008
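A small sketch of that line-to-neutral formula in code (the R, X and power-factor values below are made up purely for illustration):

```python
import math

I = 12.0    # A
R = 0.377   # conductor resistance, ohm (example value)
X = 0.030   # conductor reactance, ohm (example value)
pf = 0.85   # load power factor, cos(phi) (example value)
phi = math.acos(pf)

v_ln = I * R * math.cos(phi) + I * X * math.sin(phi)  # line-to-neutral drop
print(math.sqrt(3) * v_ln)  # line-to-line drop for a balanced three-phase load
```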
7. Jul 18, 2008
lukas86
Thanks again, in the 2nd part of that eqn. what is the X?
8. Jul 18, 2008
stewartcs
The line reactance for one conductor (in Ohms).
CS
9. Jul 22, 2008
lukas86
This relates to this problem at the mill. There were readings taken from around the mill, on the PDCs and the breakers/equipment relating to each PDC. Now what I was looking for by posting this PDF attachment was anything odd from the readings.
The only thing I really noticed was that in some cases, the equipment had higher readings than the PDC-MAIN voltages. Now the transformer feeds the PDCs, then the PDCs feed the power to the equipment, so I thought the PDCs should have the highest voltages.
Just looking for thoughts though. Thanks.
|
Go to start of banner
# Introduction
This technical note explains how to implement a speed control for a motor.
First, the note introduces the general operating principles of a speed control, regardless of which type of machine is used. Then, the speed controller is tuned using the symmetrical optimum method.
Finally, a practical control implementation is introduced, targeting the B-Box RCP or B-Board PRO with the ACG SDK for Simulink.
# General principles
An electrical machine can be modeled mechanically as a rotating mass. Its inertia $J$ can be set in rotation by applying a torque on it, which is the difference between the electromagnetic torque $T_{em}$ and a load torque $T_{load}$. The load is external to the motor and cannot be controlled. However, the speed $\Omega$ of the machine can be changed by controlling the electromagnetic torque.
The relation between the speed, the inertia and the torque is formalized by the second law of Newton for rotational movement [1]:
$J \dfrac{d\Omega}{dt} = T_{em} - T_{load} - T_{fr}$  (1)
where $T_{load}$ is the external load torque applied to the shaft and $T_{fr}$ is the torque due to the frictions.
The transfer function linking the torque to the speed is then (modeling the frictions as a viscous term $T_{fr} = f\,\Omega$):
$\dfrac{\Omega(s)}{T_{em}(s) - T_{load}(s)} = \dfrac{1}{Js + f}$  (2)
In most applications, the effect of frictions is negligible in comparison to the load and motor torque. In this case, the transfer function simplifies to:
$\dfrac{\Omega(s)}{T_{em}(s) - T_{load}(s)} = \dfrac{1}{Js}$  (3)
## Tuning of the digital control
According to the transfer function derived above, the plant is a 1st order system. Therefore, a PI controller is sufficient to follow a constant reference with no permanent error. Since the simplified plant from (3) contains a pure integrator, the proportional and integral terms are tuned using the symmetrical optimum criterion [1][2].
The parameter $T_d$ represents the total delay of the discrete control, which can be computed using PN142. A numerical example is presented below.
The symmetrical optimum criterion is sensitive to abrupt changes of the reference value. In this case, a setpoint corrector can be used in order to reduce the overshoot [2].
Another alternative is to use a rate limiter to limit the variation rate of the speed reference.
The speed controller generates only a torque reference and, therefore, must be cascaded with a torque controller in order to produce an effect on the machine. The torque controller can be implemented using, for instance, Field-Oriented Control (TN111) or Direct Torque Control (AN004).
As developed above, the tuning of the speed controller depends on the total control delay, which can be computed as in PN142. In the case of the speed control, the speed controller is the outer control loop and the torque controller is the inner loop.
The overall cascaded control diagram is shown below.
# B-Box / B-Board implementation
The figure below shows the implementation of the cascaded control structure with Field-Oriented Control as the torque controller. A Finite State Machine (FSM) oversees the operation of the machine. In particular, the FSM receives a desired speed (in rpm) from the user and limits its variation rate to reduce the overshoot of the speed controller.
### Speed controller
In this application example, the speed controller is executed 100 times slower than the torque control. This way, the torque controller has enough time at disposal to regulate the stator currents before the reference from the speed control is updated. Notice that with FOC, the torque reference must be translated into a current reference, using the equation developed in TN111.
### Tuning of the speed controller
Here is a complete numerical example of how to tune the PI of the speed control. The machine parameters are presented in the Experimental results section.
The tuning of the inner loop of the cascaded control is detailed in TN111.
The first step is to identify the transfer function that the speed controller acts on, namely the $1/(Js)$ plant derived above.
The determination of the delays along the control chain of a cascaded control is explained in Draft PN142 v2. In this Simulink implementation, the outer loop was chosen to be 100 times slower than the inner loop. The total delay of the outer loop, $T_d$, is then the sum of the small time constants along the chain.
According to the symmetrical optimum criterion, the parameters of the PI are computed from $J$ and $T_d$.
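As a reference point, here is a sketch of one common form of the symmetrical-optimum tuning for a $1/(Js)$ plant with an aggregated delay $T_d$; the exact coefficients and the numerical delay used in the original note are assumptions here.

```python
def symmetrical_optimum_pi(J, T_d, a=2.0):
    """PI gains for a 1/(Js) plant with aggregated small time constant T_d.

    Common textbook form: Kp = J / (a * T_d), Ti = a**2 * T_d, with a = 2
    as the usual choice. Other conventions exist, so treat this as a sketch.
    """
    Kp = J / (a * T_d)
    Ti = a**2 * T_d
    Ki = Kp / Ti
    return Kp, Ti, Ki

# Example: PMSM inertia from the parameters table below (2.9 kg cm^2 = 2.9e-4 kg m^2)
# and an assumed total outer-loop delay of 7.5 ms.
print(symmetrical_optimum_pi(J=2.9e-4, T_d=7.5e-3))
```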
# Experimental results
The experimental setup consists of a PMSM supplied by a voltage source inverter controlled by a B-Box prototyping controller. The FOC control is implemented using the graphical programming of the ACG SDK library for Simulink. The power converter is built from 4x PEB 8032 phase-leg modules (3 phases and 1 braking chopper leg). Another PMSM connected to 3 power resistors is used as a brake to generate a load torque.
# Machine parameters
The speed controller was validated on a Permanent Magnets Synchronous Machine (PMSM).
Model: Control Techniques 095U2B300BACAA100190

| Parameter | Value | Unit |
| --- | --- | --- |
| Rated power | 1.23 | kW |
| Pole pairs | 3 | - |
| Rated phase voltage | 460 | V |
| Rated phase current | 2.7 | A |
| Rated mechanical speed | 314 | rad/s |
| Rated torque | 3.9 | Nm |
| Stator resistance | 3.4 | Ohm |
| Stator inductance (d and q axis) | 12.15 | mH |
| Permanent magnet flux that encircles the stator winding | 0.25 | Wb |
| Moment of inertia (PMSM only) | 2.9 | kg cm² |
### Test conditions
• Load torque: 2 Nm (PMSM with resistors as load)
• DC link voltage: 500 V
• Inverter: 4x PEB8032 (one leg for braking chopper)
• Interrupt and sampling frequency: 20 kHz
• Sampling phase: 0.5
• PWM outputs: carrier-based
• Torque control technique: Field-Oriented Control (FOC)
### Results
The tracking performance of the speed controller was validated experimentally by applying a speed reference step from 0 to 1500 rpm. The experiment was performed twice, with two different rate limits on the speed reference.
At first, the rate limiter was set to 100000 rpm/s. As shown below, the PMSM is unable to accelerate at this rate because the speed controller hits its upper saturation limit (1.1 pu of the nominal torque). Additionally, the tuning from the symmetrical optimum criterion leads to an overshoot of 21%.
For the second run, the reference variation was limited to 5000 rpm/s. In this case, the speed control is able to follow the acceleration of the speed reference. Since the control uses a 1st order PI controller, it is not able to completely eliminate the tracking error during the ramp. However, this has the advantage that the mechanical stress on the machine is also reduced since the control does not abruptly apply a high torque at startup. What's more, the overshoot is also reduced to only 4.7%. The only drawback compared to the first experiment is the prolonged settling time of 0.4 s, compared to 0.3 s.
The choice of the speed reference rate limitation is a trade-off between the settling time and the mechanical stress induced by a sudden change of the torque.
|
# Partition Function p(n)
1. Nov 6, 2008
### Izzhov
I am aware that there are several generating functions for the partition function p(n), but does anyone know if a *closed-form* formula exists for this function?
2. Nov 7, 2008
### CRGreathouse
http://www.research.att.com/~njas/sequences/A000041 [Broken] has a recursive formula:
$$p(n) = \frac1n\cdot\sum_{k=0}^{n-1}\sigma(n-k)\cdot p(k)$$
Last edited by a moderator: May 3, 2017
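That recurrence is easy to check numerically; a short Python sketch (here σ is the sum-of-divisors function, and p(0) = 1):

```python
def sigma(m):
    # Sum of the divisors of m
    return sum(d for d in range(1, m + 1) if m % d == 0)

def partitions(n_max):
    # p(n) = (1/n) * sum_{k=0}^{n-1} sigma(n - k) * p(k), with p(0) = 1
    p = [1]
    for n in range(1, n_max + 1):
        p.append(sum(sigma(n - k) * p[k] for k in range(n)) // n)
    return p

print(partitions(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```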
3. Nov 7, 2008
### Izzhov
Thanks, but I put the words "closed-form" in italics for a reason.
...unless I'm mistaken on the definition of "closed-form." Which is quite possible.
4. Nov 7, 2008
### CRGreathouse
So tell us what you mean by closed form. (No sigma, no recursion, only +-*/?)
5. Nov 7, 2008
6. Nov 7, 2008
### CRGreathouse
I read the page you linked to, but I didn't see a definition.
7. Nov 7, 2008
### Izzhov
"[A closed-form formula is] a single arithmetic formula obtained to simplify an infinite sum in a general formula." -Wikipedia
Basically, what you said. No infinite sums, recursion, etcetera.
8. Nov 8, 2008
### CRGreathouse
That doesn't tell me what it is at all. Further, I don't see what closed form formulas have to do with infinite sums.
9. Nov 8, 2008
### Izzhov
A closed-form formula is when you take a formula with an infinite sum, such as
$$s = \sum_{k=0}^\infty ar^k$$
and simplify it to an algebraic formula, which in this case would be
$$s = \frac{a}{1 - r}$$
(Assuming, in this case, that r < 1.)
Understand?
10. Dec 21, 2008
### lrs5
11. Dec 21, 2008
### Izzhov
Wait, but that has an infinite sum in it! That doesn't really count. :/
12. Dec 22, 2008
### lrs5
Sure it counts. Just as the closed form for, say, sin(x) = x - x^3/3! + x^5/5! - ...
Perhaps the real question is, what do you want the closed form for? If you want to calculate values exactly, use the recurrence formula. To approximate it, use the asymptotic formula. But I assume you need the closed form for some other reason.
13. Dec 22, 2008
This Wikipedia article on closed-form expressions is probably the one you're looking for, rather than the one Izzhov linked.
I wouldn't consider either of lrs5's examples to be in closed form, because both contain infinite sums.
14. Dec 25, 2008
### Izzhov
So I take it there actually is no closed-form expression then?
15. Jan 7, 2009
### flouran
In other words, you mean that the infinite series can/will converge.
16. Jan 7, 2009
Not just converge, but converge to something that can be expressed with finitely many elementary operations (whatever they may be).
17. Jan 7, 2009
### Kurret
Couldn't we define a closed formula for p(n) over a set of functions (for example the elementary functions, or maybe only the functions f(x)=x, f(x)=c) combined with a certain set of operations (for example ^ * / - +) as a formula whose number of terms (functions):
1) is not infinite
2) does not depend on n.
18. Jan 7, 2009
### CRGreathouse
I'd prefer to define it symbolically, perhaps with a context-free grammar:
F -> 0
F -> 1
F -> n
F -> -F
F -> F$$^{-1}$$
F -> (F)+(F)
F -> (F)*(F)
F -> (F)^(F)
19. Mar 14, 2011
### jmal
20. Mar 14, 2011
|
Lemma 69.4.4. Let $S$ be a scheme. Let $I$ be a directed set. Let $(X_ i, f_{i'i})$ be an inverse system over $I$ of algebraic spaces over $S$. If $X_ i$ is reduced for all $i$, then $X = \mathop{\mathrm{lim}}\nolimits X_ i$ is reduced.
Proof. Observe that $\mathop{\mathrm{lim}}\nolimits X_ i$ exists by Lemma 69.4.1. Pick $0 \in I$ and choose an affine scheme $U_0$ and an étale morphism $U_0 \to X_0$. Then the affine schemes $U_ i = X_ i \times _{X_0} U_0$ are reduced. Hence $U = X \times _{X_0} U_0$ is a reduced affine scheme as a limit of reduced affine schemes: a filtered colimit of reduced rings is reduced. Since the étale morphisms $U \to X$ form an étale covering of $X$ as we vary our choice of $U_0 \to X_0$, we see that the lemma is true. $\square$
|
## Preface
As I stated in the previous page, the SEM method that was implemented showed some convergence problems, mainly with the 3rd and 4th order averaged terms. Hence the need to run many other simulations to check how SEM behaves with different parameters.
We focused on some main parameters and investigated them. These parameters are:
1. Time
2. Position over the grid
3. Eddies number (--> eddies density)
4. Box volume (x-dimension development)
I believe that, even if it might not be as important as the time and the number of eddies, the position needs to be investigated, as well as the box volume, for completeness.
Another important task carried out this week was the rearrangement of the SEM code. In fact, the high number of simulations that had to be done required a lot of CPU time. So I modified my Fortran code, deleting some useless functions and restructuring it with the purpose of reaching a higher computational speed. For this task I also used N. Jarrin's Fortran codes as inspiration to improve my own code. As a result of this work, the simulation speed increased by 20-25% compared to the previous version of my SEM Fortran code.
## Simulation time interval
First we proceed, we need to take a look at the simulation time interval. In fact we must be sure that the simulation reaches its convergence, otherwise any further
discussion would be useless. To check this particular simulation feature I run several long time interval simulation, and check how the main variables behave.
At first sight, the highest averaged order have the lowest convergence speed. This can be easily checked by having a look at the last graphs of my previous result page. So I focused my attenction to these parameters and their output graphs. A 10,000 time-steps simulation long (every time step is 0.005 s, so overall is 50 s) showed that a kind of convergence is reached after 6,000 - 7,000 time-steps (30-35 s). There results came from a 2000 eddies simulations, as the one described in the previous page. Taking a look to another simulation (1000 eddies), to check the results validity, we can confirm (see Fig. 2) the previous hypothesis.
To conclude this first discussion, the minimum number of time steps needed to be sure we reach a converged situation is around 6,000-7,000, although a higher number is strongly suggested.
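In practice the check amounts to watching when the running time-averages stop drifting. The following is a generic sketch of such a test; the tolerance, names and the stand-in signal are illustrative and not taken from the SEM code.
#include <cmath>
#include <cstdio>
#include <vector>

// First step at which the running mean of 'signal' stays within a relative
// tolerance 'tol' of its final value -- a crude convergence indicator.
std::size_t convergence_step(const std::vector<double>& signal, double tol) {
    std::vector<double> running(signal.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < signal.size(); ++i) {
        sum += signal[i];
        running[i] = sum / static_cast<double>(i + 1);
    }
    const double target = running.back();
    for (std::size_t i = 0; i < running.size(); ++i) {
        bool settled = true;
        for (std::size_t j = i; j < running.size(); ++j) {
            if (std::fabs(running[j] - target) > tol * std::fabs(target)) {
                settled = false;
                break;
            }
        }
        if (settled) return i;   // converged from this time step onwards
    }
    return signal.size();
}

int main() {
    std::vector<double> u2(10000);
    for (std::size_t i = 0; i < u2.size(); ++i)
        u2[i] = 1.0 + 0.5 * std::cos(0.01 * static_cast<double>(i));   // stand-in signal
    std::printf("converged from step %zu\n", convergence_step(u2, 0.02));
}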
## Different points results in the same grid
The first obvious analysis is a comparison between the results at two or more points in the same grid. We expect very similar results when moving around the grid, because of the random nature of the SEM method, which should therefore not be space-dependent.
The graph shows %$<u^2>$% plotted at different points over the grid (the point coordinates are given in the legend). As we can see, the central point is the one that best fits the theoretical result, while the lateral points lie within a 20% range around the theoretical result, even though they are not right at the grid boundary.
Similar results can be seen when considering other parameters, such as %$<S_i>$% and %$<F_i>$% (as you can see below).
In both cases, in fact, the result at the central point is the one that best fits the theoretical result, and it lies in between all the other results.
All these results lead us to two different conclusions:
• there is some spatial dependency in the SEM method (this dependency may be related to the ratio between the grid dimension and the eddy length-scale)
• the results averaged over the whole grid fit quite well the theoretical result we are expecting.
## Eddies number dependency
Another very important parameter is the eddy number or, better, the eddy density inside the box. For our simulation, since the volume is %$( 2 \pi )^2 * 4 = 25.13$%, the density is defined as:
%$\rho_{eddy} = \frac{N_{eddies}}{Vol} = \frac{N_{eddies}}{25.13}$%
In the following pictures some results can be compared as the density increases. The values considered are:
%$\rho_1 = \frac{500}{25.13} = 19.90 m^{-3}$% %$\rho_2 = \frac{1000}{25.13} = 39.79 m^{-3}$% %$\rho_3 = \frac{2000}{25.13} = 79.59 m^{-3}$% %$\rho_4 = \frac{3000}{25.13} = 119.38 m^{-3}$% %$\rho_5 = \frac{4000}{25.13} = 159.17 m^{-3}$%
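These values are simply the number of eddies divided by the 25.13 m^3 box volume; a trivial check (my snippet, not part of the SEM code):
#include <cstdio>

int main() {
    const double volume = 25.13;                          // box volume used above, in m^3
    const int eddies[] = {500, 1000, 2000, 3000, 4000};
    for (int n : eddies)
        std::printf("N = %4d  ->  rho_eddy = %6.2f m^-3\n", n, n / volume);
}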
And here are some graphs.
FIG 1 FIG 2 FIG 3 FIG 4
From these graphs, mainly from Fig. 2 and Fig. 4 but also from the others, we can see the strong relationship with the eddy density, at least until we reach a density of about 80 eddies per cubic metre. In short, all parameters show better behaviour as the number of eddies increases. This must be taken into consideration during the simulation: we have to ensure a minimum density to be sure that a converged situation has been reached.
We were expecting such behaviour for %$<F_u>$%, as we demonstrated from theory that it has a particular relationship with the eddy number and the box volume, and so with the eddy density.
## Box volume
All the simulations in this section were run with 2000 eddies, which is defined as the standard case.
FIG 1 FIG 2 FIG 3 FIG 4
Here I plotted the usual 4 main variables that have to be checked: %$<u^2>, <uv>, <S_u>, <F_u>$%. It is more difficult to find a relationship here, since the variables seem to have a kind of random behaviour. For example, %$<F_u>$% in this case varies because the eddy density varies with the volume, and the same holds for all the other parameters. To get some useful results it is therefore necessary to complete some simulations with a fixed eddy density, like the ones that are now running.
## Future issues
Understand better why the results "jump", and in particular why a non-uniform convergence is reached (e.g. %$<u^2>$% never shows a common behaviour, as it should).
|
# Use random-shift Halton sequence to obtain 40 independent estimates for the price of a European call
Background Information:
Random-shift Halton sequence: Consider the first six Halton vectors in dimension $2$, using base $2$ and $3$:
$$\begin{bmatrix} 1/2\\ 1/3 \end{bmatrix}, \begin{bmatrix} 1/4\\ 2/3 \end{bmatrix},\begin{bmatrix} 3/4\\ 1/9 \end{bmatrix},\begin{bmatrix} 1/8\\ 4/9 \end{bmatrix},\begin{bmatrix} 5/8\\ 7/9 \end{bmatrix}, \begin{bmatrix} 3/8\\ 2/9 \end{bmatrix}$$
To obtain independent randomization of this set, generate two random numbers $u_1$,$u_2$ from $U(0,1)$, say, $u_1 = 0.7$ and $u_2 = 0.1$. Add the vector $u = \begin{bmatrix} u_1\\ u_2 \end{bmatrix}$ to each vector above, componentwise, and take the fractional part of the sum. For example, the first vector becomes $$\begin{bmatrix} \{1/2 + 0.7\}\\ \{1/3 + 0.1\} \end{bmatrix}= \begin{bmatrix} 0.2\\ 0.4\overline{3} \end{bmatrix}$$ (here $\{x\}$ denotes the fractional part of $x$). Similarly, add $u$ to the rest of the vectors. Here is the Pseudocode for simulating the geometric Brownian motion model:
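The randomization step described above can be expressed compactly in code. This is my sketch (separate from the geometric Brownian motion pseudocode the excerpt refers to); the bases 2 and 3 match the worked example, while the shift is drawn at random rather than fixed to 0.7 and 0.1.
#include <cmath>
#include <cstdio>
#include <random>

// Radical-inverse (Halton) value of 'index' in the given base.
double halton(int index, int base) {
    double f = 1.0, r = 0.0;
    while (index > 0) {
        f /= base;
        r += f * (index % base);
        index /= base;
    }
    return r;
}

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    const double u1 = unif(gen), u2 = unif(gen);   // one shift per coordinate, shared by all points

    for (int i = 1; i <= 6; ++i) {
        double x = std::fmod(halton(i, 2) + u1, 1.0);   // fractional part of the shifted coordinate
        double y = std::fmod(halton(i, 3) + u2, 1.0);
        std::printf("shifted point %d: (%.4f, %.4f)\n", i, x, y);
    }
}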
Question:
Consider a European call option with parameters: $T = 1$, $K = 50$, $r = 0.1$, $\sigma = 0.3$, and $S_0 = 50$
Use random-shift Halton sequences to obtain $40$ "independent" estimates for the price of the option. For each estimate, use $N = 10,000$ price paths. To simulate a path, you will simulate the geometric Brownian motion model with $\mu = r$, and using $10$ time steps $t_0 = 0$, $t_1 = \Delta t$, $t_2 = 2\Delta t,\ldots,t_{10} = 10\Delta t = T$. Use the Box-Muller method to generate the standard normal numbers.
I have been able to successfully use the Box-Muller method to generate the standard normal numbers, which I checked are correct. I have also simulated the geometric Brownian motion model with $\mu = r$, but I am not completely sure it is correct.
The part that I am confused about is assuming the geometric Brownian motion model is correct how I apply the random-shift Halton sequences to obtain the $40$ "independent" estimates for the price of the option.
I programmed the latter in C++. Here is the code which should be pretty easy to read:
#include <iostream>
#include <cmath>
#include <math.h>
#include <fstream>
#include <random>
using namespace std;
double phi(long double x);
double Black_Scholes(double T, double K, double S0, double r, double sigma);
double Halton_Seq(int index, double base);
double Box_MullerX(int n, int N,double u_1, double u_2);
double Box_MullerY(int n, int N,double u_1, double u_2);
double Random_Shift_Halton(int n,int N,double mu, double sigma, double T, double S0);
int main(){
int n = 10;
int N = 10000;
// Calculate BS
double T = 1.0;
double K = 50.0;
double r = 0.1;
double sigma = 0.3;
double S0 = 50.0;
double price = Black_Scholes(T,K,S0,r,sigma);
//cout << price << endl;
// Random-shift Halton Sequence
Random_Shift_Halton(n,N,r,sigma,T, S0);
}
double phi(double x){
return 0.5 * erfc(-x * M_SQRT1_2);
}
double Black_Scholes(double T, double K, double S0, double r, double sigma){
double d_1 = (log(S0/K) + (r+sigma*sigma/2.)*(T))/(sigma*sqrt(T));
double d_2 = (log(S0/K) + (r-sigma*sigma/2.)*(T))/(sigma*sqrt(T));
double C = S0*phi(d_1) - phi(d_2)*K*exp(-r*T);
return C;
}
double Halton_Seq(int index, double base){
double f = 1.0, r = 0.0;
while(index > 0){
f = f/base;
r = r + f*(fmod(index,base));
index = index/base;
}
return r;
}
double Box_MullerX(int n, int N,double u_1, double u_2){
double R = -2.0*log(u_1);
double theta = 2*M_PI*u_2;
double X = sqrt(R)*cos(theta);
return X;
}
double Box_MullerY(int n, int N,double u_1, double u_2){
double R = -2.0*log(u_1);
double theta = 2*M_PI*u_2;
double Y = sqrt(R)*sin(theta);
return Y;
}
double Random_Shift_Halton(int n,int N,double mu, double sigma, double T, double S0){
// Generate the Halton_Sequence
double hs[N + 1];
for(int i = 0; i <= N; i++){
hs[i] = Halton_Seq(i,2.0);
}
// Generate two random numbers from U(0,1)
double u[N+1];
random_device rd;
mt19937 e2(rd());
uniform_real_distribution<double> dist(0, 1);
for(int i = 0; i <= N; i++){
u[i] = dist(e2);
}
// Add the vector u to each vector hs and take fraction part of the sum
double shifted_hs[N+1];
for(int i = 0; i <= N; i++){
shifted_hs[i] = hs[i] + u[i];
if(shifted_hs[i] > 1){
shifted_hs[i] = shifted_hs[i] - 1;
}
}
// Use Geometric Brownian Motion
double Z[N];
for(int i = 0; i < N; i++){
if(i % 2 == 0){
Z[i] = Box_MullerX(n,N,shifted_hs[i+1],shifted_hs[i+2]);
}else{
Z[i] = Box_MullerY(n,N,shifted_hs[i],shifted_hs[i+1]);
}
}
double ZZ[N + 1];
ZZ[0] = 0.0;
for(int i = 1; i <= N; i++){
ZZ[i] = Z[i-1];
}
for(int i = 1; i <= N; i++){
//cout << ZZ[i] << endl;
}
double S[N + 1];
S[0] = S0;
double t[n+1];
for(int i = 0; i <= n; i++){
t[i] = (double)i*T/(n-1);
}
for(int j = 1; j <= N; j++){
for(int i = 1; i <= n; i++){
S[i] = S[i-1]*exp((mu - sigma/2)*(t[i] - t[i-1]) + sqrt(sigma)*sqrt(t[i] - t[i-1])*ZZ[i]);
}
}
for(int i = 1; i <= n; i++){
//cout << S[i] << endl;
}
// Use random-shift halton sequence to obtain 40 independent estimates for the price of the option
}
If anyone has any suggestions on what I should do from here I would greatly appreciate it.
• Is this homework/an assignment? I am asking because apart from your question, there are multiple things wrong with your other code. – LocalVolatility Mar 11 '17 at 4:52
Actual Question
I suppose this is homework, so I will only outline the steps. The way I understand this question is as follows:
1. Build a function that simulates different 10,000 sample paths of your underlying asset with 10 equidistant time steps each.
2. For each of these paths, compute the terminal European call option payoff. Their discounted sample mean is your estimate for the European call price.
3. Repeat steps 1 and 2 for a total of 40 times with different randomized offsets for the Halton sequences to get 40 different estimates and record the results. (A rough sketch of how these pieces can fit together is given below.)
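A rough, self-contained sketch of that structure follows. It is not claimed to be the construction required by the assignment: the choice of the first ten primes as Halton bases, the pairing of consecutive coordinates in Box-Muller, and the seeding are my assumptions.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

// Radical-inverse (Halton) value of 'index' in the given base.
double halton(int index, int base) {
    double f = 1.0, r = 0.0;
    while (index > 0) { f /= base; r += f * (index % base); index /= base; }
    return r;
}

int main() {
    const double T = 1.0, K = 50.0, r = 0.1, sigma = 0.3, S0 = 50.0;
    const int steps = 10, paths = 10000, batches = 40;   // steps must be even for the pairing below
    const int bases[steps] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29};
    const double dt = T / steps;
    const double two_pi = 2.0 * std::acos(-1.0);

    std::mt19937 gen(12345);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    for (int b = 0; b < batches; ++b) {
        // One random shift per dimension -> one "independent" randomization.
        double shift[steps];
        for (int d = 0; d < steps; ++d) shift[d] = unif(gen);

        double payoff_sum = 0.0;
        for (int p = 1; p <= paths; ++p) {
            // 10-dimensional randomized Halton point for this path.
            double u[steps];
            for (int d = 0; d < steps; ++d)
                u[d] = std::fmod(halton(p, bases[d]) + shift[d], 1.0);

            // Box-Muller on consecutive coordinate pairs -> 10 standard normals.
            double z[steps];
            for (int d = 0; d < steps; d += 2) {
                double rad = std::sqrt(-2.0 * std::log(u[d] + 1e-12));   // guard against log(0)
                double ang = two_pi * u[d + 1];
                z[d]     = rad * std::cos(ang);
                z[d + 1] = rad * std::sin(ang);
            }

            // Geometric Brownian motion with mu = r over the 10 time steps.
            double S = S0;
            for (int d = 0; d < steps; ++d)
                S *= std::exp((r - 0.5 * sigma * sigma) * dt + sigma * std::sqrt(dt) * z[d]);

            payoff_sum += std::max(S - K, 0.0);   // terminal call payoff
        }
        std::printf("estimate %2d: %.4f\n", b + 1, std::exp(-r * T) * payoff_sum / paths);
    }
}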
Other Code
Looking at your code, I can see quite a few issues. Here are some hints to get you started:
• You don't carry out the randomization of the Halton sequence at all. This should happen between generating standard Halton sequences and using them in the Box-Muller transform to generate random normals. The excerpt you posted is very clear on how it is done.
• This code fragment
for(int i = 0; i < N; i++){
if(i % 2 == 0){
Z[i] = Box_MullerX(n,N,hs[i+1],hs[i+2]);
}else{
Z[i] = Box_MullerY(n,N,hs[i],hs[i+1]);
}
}
only happens to not go out-of-bounds because N is even in your main.
• If you want to generate N sample paths, then you'd want S to have dimension N if you just want to keep track of the current spot of each path (which is enough). I.e. S[j] is the current spot of the $j$-th path and you'd have to initialize all entries in S to S0.
for(int j = 1; j <= N; j++){
for(int i = 1; i <= n; i++){
S[i] = S[i-1]*exp((mu - sigma/2)*(t[i] - t[i-1]) + sqrt(sigma)*sqrt(t[i] - t[i-1])*ZZ[i]);
}
}
The outer loop does nothing except for heating up your CPU. You never use the loop variable j but simply run the exact same inner loop N times. Put differently, you simulate a single path only instead of N.
• The Box-Muller transform needs independent uniforms on the unit square. You use two successive items from the same Halton sequence. If you create a scatterplot of the supposedly normal variates, you get something like the first of the below plots. The second was produced using two Halton sequences with different bases. See also the corresponding discussion in Chapter 9.3.1 of Jaeckel (2002).
References
Jaeckel, Peter (2002) Monte Carlo Methods in Finance. John Wiley & Sons.
• Thank you for outlining it for me. Yes, it is homework. I have some questions on your comments. You stated that I don't do the randomization of the Halton sequence. Are you saying that when I generate the Halton sequence I need to store them in a randomized way? Could you clarify what you mean by that? – Wolfy Mar 11 '17 at 6:10
• ** Could you clarify what you mean by the first code block you mentioned? I was just trying to store the $X$'s and $Y$'s in order into the array $Z$. – Wolfy Mar 11 '17 at 6:17
• 1) In your own description it says "add the vector $u$ to each vector above" - you simply do not do this. This has to happen before the Box-Muller transform. Note that this is yet a different issue than the one in my last point. 2) Check what indices of hs you are accessing when N is odd - let's say N = 3. You'll find that hs has size 4 and you try to access the element with index 4 which is out of range. – LocalVolatility Mar 11 '17 at 11:12
• Ok, well $N = 10,000$ so it should be fine for that case since $N$ will not be odd ever. – Wolfy Mar 11 '17 at 16:28
• So to clarify, I have to add the vector $u$ to each vector above before I apply the Box-Mueller tranform, then I proceed to simulating the geometric Brownian motion? – Wolfy Mar 11 '17 at 16:32
|
# SCA: Optimization using Sine Cosine Algorithm In metaheuristicOpt: Metaheuristic for Optimization
## Description
This is the internal function that implements the Sine Cosine Algorithm. It is used to solve continuous optimization tasks. Users do not need to call it directly; just use metaOpt.
## Usage
SCA(FUN, optimType = "MIN", numVar, numPopulation = 40, maxIter = 500, rangeVar)
## Arguments
FUN: an objective function or cost function.
optimType: a string value that represents the type of optimization. There are two options for this argument: "MIN" and "MAX". The default value is "MIN", in which case the function performs minimization; use "MAX" for a maximization problem.
numVar: a positive integer to determine the number of variables.
numPopulation: a positive integer to determine the number of populations. The default value is 40.
maxIter: a positive integer to determine the maximum number of iterations. The default value is 500.
rangeVar: a matrix (2 \times n) containing the range of the variables, where n is the number of variables, and the first and second rows are the lower bound (minimum) and upper bound (maximum) values, respectively. If all variables have an equal upper bound, you can define rangeVar as a matrix (2 \times 1).
## Details
This algorithm was proposed by (Mirjalili, 2016). The SCA creates multiple initial random candidate solutions and requires them to fluctuate outwards or towards the best solution using a mathematical model based on sine and cosine functions. Several random and adaptive variables are also integrated into this algorithm to emphasize exploration and exploitation of the search space at different milestones of the optimization.
In order to find the optimal solution, the algorithm follows these steps.
• Initialization: Initialize the first population of candidate solutions randomly, calculate the fitness of each candidate solution and find the best candidate.
• Update Candidate Position: Update the positions with the equation that represents the behaviour of the sine and cosine functions (a sketch of this step is given after this list).
• Update the best candidate if there is a candidate solution with better fitness.
• Check the termination criteria; if the termination criterion is satisfied, return the best candidate as the optimal solution for the given problem. Otherwise, go back to the Update Candidate Position step.
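For reference, the update rule in Mirjalili's paper perturbs each candidate towards the best solution with a sine or cosine term. Below is a rough sketch of that single step, written in C++ for concreteness; the parameter names and the constant a = 2 follow the paper, and this is not the package's actual implementation.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// One SCA position update for a single candidate solution.
// 'best' is the best solution found so far; 'iter'/'maxIter' drive the decay of r1.
void sca_update(std::vector<double>& x, const std::vector<double>& best,
                int iter, int maxIter, std::mt19937& gen) {
    const double a = 2.0;                                  // exploration/exploitation constant
    const double r1 = a - iter * (a / maxIter);            // decreases linearly towards 0
    const double pi = std::acos(-1.0);
    std::uniform_real_distribution<double> u01(0.0, 1.0);

    for (std::size_t d = 0; d < x.size(); ++d) {
        double r2 = 2.0 * pi * u01(gen);
        double r3 = 2.0 * u01(gen);
        double r4 = u01(gen);
        double step = r1 * std::fabs(r3 * best[d] - x[d]);
        x[d] += (r4 < 0.5) ? step * std::sin(r2) : step * std::cos(r2);
        // In practice the updated value would also be clamped to rangeVar here.
    }
}

int main() {
    std::mt19937 gen(1);
    std::vector<double> x{10.0, -5.0}, best{0.0, 0.0};
    for (int it = 1; it <= 100; ++it) sca_update(x, best, it, 100, gen);
    std::printf("final candidate: (%.3f, %.3f)\n", x[0], x[1]);
}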
## Value
Vector [v1, v2, ..., vn], where n is the number of variables and vn is the value of the n-th variable.
## References
Seyedali Mirjalili, SCA: A Sine Cosine Algorithm for solving optimization problems, Knowledge-Based Systems, Volume 96, 2016, Pages 120-133, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2015.12.022
## See Also
metaOpt
## Examples
##################################
## Optimizing the step function

# define step function as objective function
step <- function(x){
    result <- sum(abs((x+0.5))^2)
    return(result)
}

## Define parameter
numVar <- 5
rangeVar <- matrix(c(-100,100), nrow=2)

## calculate the optimum solution using Sine Cosine Algorithm
resultSCA <- SCA(step, optimType="MIN", numVar, numPopulation=20, maxIter=100, rangeVar)

## calculate the optimum value using step function
optimum.value <- step(resultSCA)
|
# Using the 'DiffXTables' R package to detect heterogeneity In DiffXTables: Pattern Analysis Across Contingency Tables
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)

The heterogeneity question asks whether a relationship between two random variables has changed across conditions. It is often fundamental to a scientific inquiry. For example, a biologist could ask whether the relationship between two genes has been modified in a cancer cell from that in a normal cell. The 'DiffXTables' R package answers the heterogeneity question via evaluating statistical evidence for distributional changes in the involved variables from the observed data.

Given multiple contingency tables of the same dimension, 'DiffXTables' offers three methods cp.chisq.test(), sharma.song.test(), and heterogeneity.test() to test whether the distributions underlying the tables are different. All three tests use the chi-squared distribution to calculate a p-value to indicate the statistical significance of any detected difference across the tables. However, these tests behave sharply differently for various types of pattern heterogeneity present in the input tables. Here, we define pattern types, explain the three tests, and illustrate their similarities and differences by examples. These examples reveal the inadequacy of the current textbook solution to the contingency table heterogeneity question.

## Types of pattern

A pattern is a contingency table tabulating the counts or frequencies observed for a pair of discrete random variables. We study the distributional differences across tables collected from more than one condition.

• First-order differential patterns: Some or all pairs of input tables differ in either row or column marginal distributions. Other pairs share the same underlying joint distribution.
• Second-order differential patterns: Some or all pairs of input tables differ in joint distributions not attributed to any difference in marginal distributions.
• Differential patterns: Some or all pairs of tables differ in joint distributions.
• Conserved patterns: All tables share the same joint distributions.

## The three tests of distributional differences across tables

The input to all three tests is two or more contingency tables. The output is chi-squared test statistics, their degrees of freedom, and p-values. They also share the same null hypothesis H0 that all tables are conserved in distributions. However, these tests answer distinct alternative hypotheses.

### 1. The comparative chi-squared test

Alternative hypothesis H1: Patterns represented by the tables are differential.

The statistical foundation of this test is first established in and the test is then extended to identify differential patterns in networks . It is implemented as the R function cp.chisq.test() in this package.

### 2. The Sharma-Song test

Alternative hypothesis H2: Patterns represented by the tables are second-order differential.

The test detects differential departure from independence via second-order difference in the joint distributions underlying two or more contingency tables. The test is fully described in (Sharma et al. 2021). It is implemented as the R function sharma.song.test() in this package.

### 3. The heterogeneity test

Alternative hypothesis H1: Patterns represented by the tables are differential.

This test is described in (Zar, 2010). Although it widely appears in textbooks, we demonstrate that it is not always powerful in some examples below. It is implemented as the R function heterogeneity.test() in this package.
## Examples to illustrate differences among the three tests

Here, we show some examples to demonstrate the usage, similarity and difference between the three tests. All these examples represent strong patterns so that the presence of a pattern type is evident. Both the comparative chi-squared test and the Sharma-Song test perform correctly on all five examples, while the heterogeneity test fails on two examples.

require(FunChisq)
require(DiffXTables)

Example 1: Input tables are conserved. At $\alpha=0.05$, all tests perform correctly by not rejecting the null hypothesis of conserved patterns.

tables <- list(
  matrix(c(
    14, 0, 4,
     0, 8, 0,
     4, 0, 12), byrow=TRUE, nrow=3),
  matrix(c(
    7, 0, 2,
    0, 4, 0,
    2, 0, 6), byrow=TRUE, nrow=3)
)
par(mfrow=c(1,2), cex=0.5, oma=c(0,0,2,0))
plot_table(tables[[1]], highlight="none", xlab=NA, ylab=NA)
plot_table(tables[[2]], highlight="none", xlab=NA, ylab=NA)
mtext("Conserved patterns", outer = TRUE)
cp.chisq.test(tables)
sharma.song.test(tables)
heterogeneity.test(tables)

Example 2: Input tables are only first-order differential. At $\alpha=0.05$, cp.chisq.test() performs correctly by declaring differential patterns; sharma.song.test() performs correctly by not declaring second-order differential patterns; and heterogeneity.test() performs incorrectly by not declaring the tables as differential.

tables <- list(
  matrix(c(
    16, 4, 20,
     4, 1, 5,
    20, 5, 25), nrow = 3, byrow = TRUE),
  matrix(c(
    1, 1, 8,
    1, 1, 8,
    8, 8, 64), nrow = 3, byrow = TRUE)
)
par(mfrow=c(1,2), cex=0.5, oma=c(0,0,2,0))
plot_table(tables[[1]], highlight="none", col="cornflowerblue", xlab=NA, ylab=NA)
plot_table(tables[[2]], highlight="none", col="cornflowerblue", xlab=NA, ylab=NA)
mtext("First-order differential patterns", outer = TRUE)
cp.chisq.test(tables)
sharma.song.test(tables)
heterogeneity.test(tables)

Example 3: Input tables are only first-order differential. At $\alpha=0.05$, cp.chisq.test() correctly declares differential patterns; sharma.song.test() performs correctly by not declaring second-order differential patterns; and heterogeneity.test() correctly declares differential patterns.

tables <- list(
  matrix(c(
    8, 1, 1, 38, 4,
    5, 1, 1, 17, 1,
    2, 1, 1, 9, 1,
    2, 1, 1, 4, 1), nrow=4, byrow = TRUE),
  matrix(c(
    1, 2, 1, 1, 2,
    2, 9, 1, 1, 4,
    2, 13, 1, 1, 1,
    3, 45, 2, 1, 7), nrow=4, byrow = TRUE)
)
par(mfrow=c(1,2), cex=0.5, oma=c(0,0,2,0))
plot_table(tables[[1]], highlight="none", col="cornflowerblue", xlab=NA, ylab=NA)
plot_table(tables[[2]], highlight="none", col="cornflowerblue", xlab=NA, ylab=NA)
mtext("First-order differential patterns", outer = TRUE)
cp.chisq.test(tables)
sharma.song.test(tables)
heterogeneity.test(tables)

Example 4: Input tables are only second-order differential. At $\alpha=0.05$, cp.chisq.test() correctly declares differential patterns; sharma.song.test() correctly declares second-order differential patterns; and heterogeneity.test() correctly declares differential patterns.

tables <- list(
  matrix(c(
    4, 0, 0,
    0, 4, 0,
    0, 0, 4
  ), byrow=TRUE, nrow=3),
  matrix(c(
    0, 4, 4,
    4, 0, 4,
    4, 4, 0
  ), byrow=TRUE, nrow=3)
)
par(mfrow=c(1,2), cex=0.5, oma=c(0,0,2,0))
plot_table(tables[[1]], highlight="none", col="salmon", xlab=NA, ylab=NA)
plot_table(tables[[2]], highlight="none", col="salmon", xlab=NA, ylab=NA)
mtext("Second-order differential patterns", outer = TRUE)
cp.chisq.test(tables)
sharma.song.test(tables)
heterogeneity.test(tables)

Example 5: Input tables are both first- and second-order differential.
At$\alpha=0.05\$, cp.chisq.test() correctly declares differential patterns; sharma.song.test() correctly declares second-order differential patterns; and heterogenity.test() performs incorrectly by not rejecting the tables as having conserved patterns.
tables <- list(
matrix(c(
50, 0, 0, 0,
0, 0, 1, 0,
0, 50, 0, 0,
1, 0, 0, 0,
0, 0, 0, 50
), byrow=T, nrow = 5),
matrix(c(
1, 0, 0, 0,
0, 0, 50, 0,
0, 1, 0, 0,
50, 0, 0, 0,
0, 0, 0, 1
), byrow=T, nrow = 5)
)
par(mfrow=c(1,2), cex=0.5, oma=c(0,0,2,0))
plot_table(tables[[1]], highlight="none", col="orange", xlab=NA, ylab=NA)
plot_table(tables[[2]], highlight="none", col="orange", xlab=NA, ylab=NA)
mtext("Differential patterns", outer = TRUE)
cp.chisq.test(tables)
sharma.song.test(tables)
heterogeneity.test(tables)
## Conclusions
The examples here demonstrate the use of the package. Most importantly, they also suggest that it may be necessary to consider options different from the default textbook solution to determining heterogeneity across contingency tables.
|
# 4.05 Patterns
Lesson
## Ideas
We've seen how multiplying by 2 helps us multiply by 4, and then by 8. Let's try this problem to help us remember.
### Examples
#### Example 1
Find 6 \times 8.
Worked Solution
Create a strategy
To multiply 6 by 8 we can multiply 6 by 2 three times.
Apply the idea
Double 6 three times: 6 \times 2 = 12, 12 \times 2 = 24, 24 \times 2 = 48.
So: 6\times8=48
Idea summary
We can use any of the following to work out multiplications:
• arrays to show equal sized groups
• patterns, such as doubling and skip-counting
• multiplication tables
## Patterns in multiplication
What if we could use things we already know to solve multiplication or division? We can. Let's see how.
### Examples
#### Example 2
If 2 \times 8 =16, what is 20 \times 8?
Worked Solution
Create a strategy
Since 20 is 10 times larger than 2, we can multiply 16 by 10.
Apply the idea
20 \times 8 = 16 \times 10 = 160
Idea summary
For every multiplication problem we know, there's another one we also know. If we know our 3 times tables, including 3 \times 7 = 21, then we know that 7 \times 3 = 21.
### Outcomes
#### MA2-6NA
uses mental and informal written strategies for multiplication and division
#### MA2-8NA
generalises properties of odd and even numbers, generates number patterns, and completes simple number sentences by calculating missing values
|
xarray.ufuncs.arccosh¶
xarray.ufuncs.arccosh = <xarray.ufuncs._UFuncDispatcher object>
xarray specific variant of numpy.arccosh. Handles xarray.Dataset, xarray.DataArray, xarray.Variable, numpy.ndarray and dask.array.Array objects with automatic dispatching.
Documentation from numpy:
arccosh(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])
Inverse hyperbolic cosine, element-wise.
Parameters:
x : array_like
    Input array.
out : ndarray, None, or tuple of ndarray and None, optional
    A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
where : array_like, optional
    Values of True indicate to calculate the ufunc at that position, values of False indicate to leave the value in the output alone.
**kwargs
    For other keyword-only arguments, see the ufunc docs.
Returns:
arccosh : ndarray
    Array of the same shape as x.
Notes
arccosh is a multivalued function: for each x there are infinitely many numbers z such that cosh(z) = x. The convention is to return the z whose imaginary part lies in [-pi, pi] and the real part in [0, inf].
For real-valued input data types, arccosh always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, arccosh is a complex analytical function that has a branch cut [-inf, 1] and is continuous from above on it.
References
[R40] M. Abramowitz and I.A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. http://www.math.sfu.ca/~cbm/aands/
[R41] Wikipedia, “Inverse hyperbolic function”, http://en.wikipedia.org/wiki/Arccosh
Examples
>>> np.arccosh([np.e, 10.0])
array([ 1.65745445, 2.99322285])
>>> np.arccosh(1)
0.0
|
# Mathematics Questions and Answers – Composition of Functions and Invertible Function
This set of Mathematics written test Questions & Answers focuses on “Composition of Functions and Invertible Function”.
1. The composition of functions is both commutative and associative.
a) True
b) False
Explanation: The given statement is false. The composition of functions is associative i.e. fο(g ο h)=(f ο g)οh. The composition of functions is not commutative i.e. g ο f ≠ f ο g.
2. If f:R→R, g(x)=3x^2+7 and f(x)=√x, then gοf(x) is equal to _______
a) 3x-7
b) 3x-9
c) 3x+7
d) 3x-8
Explanation: Given that, g(x)=3x^2+7 and f(x)=√x
∴ gοf(x)=g(f(x))=g(√x)=3(√x)^2+7=3x+7.
Hence, gοf(x)=3x+7.
3. If f:R→R is given by f(x)=(5+x^4)^{1/4}, then fοf(x) is _______
a) x
b) 10+x^4
c) 5+x^4
d) (10+x^4)^{1/4}
Explanation: Given that f(x)=(5+x^4)^{1/4}
∴ fοf(x)=f(f(x))=(5+{(5+x^4)^{1/4}}^4)^{1/4}
=(5+(5+x^4))^{1/4}=(10+x^4)^{1/4}.
4. If f:R→R, f(x)=cos x and g(x)=7x^3+6, then fοg(x) is ______
a) cos(7x^3+6)
b) cos x
c) cos(x^3)
d) $$cos(\frac{x^3+6}{7})$$
Explanation: Given that, f:R→R, f(x)=cos x and g(x)=7x^3+6
Then, fοg(x) = f(g(x))=cos(g(x))=cos(7x^3+6).
5. A function is invertible if it is ____________
a) surjective
b) bijective
c) injective
d) neither surjective nor injective
Explanation: A function is invertible if and only if it is bijective i.e. the function is both injective and surjective. If a function f:A→B is bijective, then there exists a function g:B→A such that f(x)=y⇔g(y)=x, then g is called the inverse of the function.
6. The function f:R→R defined by f(x)=5x+9 is invertible.
a) True
b) False
Explanation: The given statement is true. A function is invertible if it is bijective.
For one – one: Consider f(x1)=f(x2)
∴ 5x1+9=5x2+9
⇒x1=x2. Hence, the function is one – one.
For onto: For any real number y in the co-domain R, there exists an element x=$$\frac{y-9}{5}$$ such that f(x)=$$f(\frac{y-9}{5})=5(\frac{y-9}{5})$$+9=y.
Therefore, the function is onto.
7. If f:N→N, g:N→N and h:N→R are defined by f(x)=3x-5, g(y)=6y^2 and h(z)=tan z, find hο(gοf).
a) tan(6(3x-5))
b) tan(6(3x-5)^2)
c) tan(3x-5)
d) 6tan(3x-5)^2
Explanation: Given that, f(x)=3x-5, g(y)=6y^2 and h(z)=tan z,
Then, hο(gοf)(x)=h(g(f(x)))=h(6(3x-5)^2)=tan(6(3x-5)^2)
∴ hο(gοf)=tan(6(3x-5)^2)
8. Let M={7,8,9}. Determine which of the following functions is invertible for f:M→M.
a) f = {(7,7),(8,8),(9,9)}
b) f = {(7,8),(7,9),(8,9)}
c) f = {(8,8),(8,7),(9,8)}
d) f = {(9,7),(9,8),(9,9)}
Explanation: The function f = {(7,7),(8,8),(9,9)} is invertible as it is both one – one and onto. The function is one – one as every element in the domain has a distinct image in the co – domain. The function is onto because every element in the codomain M = {7,8,9} has a pre – image in the domain.
9. Let f:R+→[9,∞) given by f(x)=x^2+9. Find the inverse of f.
a) $$\sqrt{x-9}$$
b) $$\sqrt{9-x}$$
c) $$\sqrt{x^2-9}$$
d) x^2+9
Explanation: The function f(x)=x^2+9 is bijective.
Therefore, f(x)=x^2+9
i.e. y=x^2+9
x=$$\sqrt{y-9}$$
⇒f^{-1}(x)=$$\sqrt{x-9}$$.
10. Let the function f be defined by f(x)=$$\frac{9+3x}{7-2x}$$, then f^{-1}(x) is ______
a) $$\frac{9-3x}{7+2x}$$
b) $$\frac{7x-9}{2x+3}$$
c) $$\frac{2x-7}{3x+9}$$
d) $$\frac{2x-3}{7x+9}$$
Explanation: The function f(x)=$$\frac{9+3x}{7-2x}$$ is bijective.
∴ f(x)=$$\frac{9+3x}{7-2x}$$
i.e. y=$$\frac{9+3x}{7-2x}$$
7y-2xy=9+3x
7y-9=x(2y+3)
x=$$\frac{7y-9}{2y+3}$$
⇒f^{-1}(x)=$$\frac{7x-9}{2x+3}$$.
|
# Close packing of solid spheres
Close packing of identical solid spheres
The solids which have non-directional bonding, their structures are determined on the basis of geometrical consideration. For such solids, it is found that the lowest energy structure is that in which each particle is surrounded by the greatest possible number of neighbours. In order to understand the structures of such solids, let us consider the particles as hard sphere of equal size in three directions. Although there are many ways to arrange the hard spheres but the one in which maximum available space is occupied will be economical which is known as closest packing.
Now we describe the different arrangements of spherical particles of equal size. When the spheres are packed in a plane, they may arrange themselves in following two ways:
(i) The centers of the spheres lie one below another. This type of arrangement is called square close packing. In such packing one sphere touches four other spheres. In this case 52.4% of the volume is occupied. The remaining 47.6% of the volume is empty and is called void volume.
(ii) The spheres of the second row are placed in such a way that they fit themselves in the depressions between the spheres of the first row. The spheres of third row are placed in such a way that they are exactly below the spheres of first row. Evidently the spheres of fourth row will be below the spheres of second row and so on. This type of packing is called hexagonal close packing. In this case 60.4% volume is occupied and 39.6% volume is void volume. Therefore this type of packing is more stable than the square close packing.
In hexagonal close packing there are two types of voids (open spaces), which are divided into two sets, 'b' and 'c', for convenience. The spaces marked 'b' are curved triangular spaces with the tips pointing upwards, whereas the spaces marked 'c' are curved triangular spaces with the tips pointing downwards.
Now we extend the arrangement of spheres in three dimensions by placing a second close-packed layer (hexagonal close packing) (B) on the first layer (A). The spheres of the second layer may be placed either on the spaces denoted by 'b' or on those denoted by 'c'. It may be noted that it is not possible to place spheres on both types of voids (i.e., b and c). Thus half of the voids remain unoccupied by the second layer. The second layer also has two types of voids, and in order to build up the third layer, there are the following two ways:
(1) In one way, the spheres of the third layer lie on the spaces of the second layer (B) in such a way that they lie directly above those in the first layer (A). In other words, we can say that the third layer becomes identical to the first layer. If the arrangement is continued indefinitely in the same order, it is represented as AB AB AB … This type of arrangement represents hexagonal close packed (hcp) symmetry (or structure), which means that the whole structure has only one 6-fold axis of symmetry (i.e., the crystal has the same appearance on rotation through an angle of $60^\circ$).
Another view of hexagonal packing (cp) of spheres
(2) In the second way, the spheres of the third layer (C) lie on the second layer (B) in such a way that they lie over the unoccupied spaces 'c' of the first layer (A). If this arrangement is continued indefinitely in the same order, it is represented as ABC ABC ABC … This type of arrangement represents the cubic close packed (ccp) structure. This structure has 3-fold axes of symmetry which pass through the diagonal of the cube. Since in this system there is a sphere at the centre of each face of the unit cell, this structure is also known as the face-centred cubic (fcc) structure.
It may be noted that in ccp (or fcc) structures each sphere is surrounded by 12 spheres hence the coordination number of each sphere is 12. The spheres occupy 74% of the total volume and 26% is the empty space in both (hcp and ccp) structures.
There is another possible arrangement of packing of spheres known as body centred cubic (bcc) arrangement. This arrangement is observed in square close packing
(which is slightly more open than hexagonal close packing). In the bcc arrangement the spheres of the second layer lie in the spaces (hollows or voids) of the first layer. Thus each sphere of the second layer touches four spheres of the first layer. Now the spheres of the third layer are placed exactly above the spheres of the first layer. In this way each sphere of the second layer touches eight spheres (four of the 1st layer and four of the 3rd layer). Therefore the coordination number of each sphere is 8 in the bcc structure. The spheres occupy 68% of the total volume and 32% of the volume is empty space.
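The packing efficiencies quoted above follow directly from geometry: π/6 ≈ 52.4% for the simple cubic case, π√3/8 ≈ 68% for bcc, and π/(3√2) ≈ 74% for hcp/ccp. A quick numerical check (my snippet):
#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    // Fraction of space filled by equal hard spheres in each arrangement.
    const double simple_cubic = pi / 6.0;                    // one sphere per cube of edge 2r
    const double bcc          = pi * std::sqrt(3.0) / 8.0;   // two spheres per cube, edge 4r/sqrt(3)
    const double fcc_hcp      = pi / (3.0 * std::sqrt(2.0)); // four spheres per fcc cube, edge 2r*sqrt(2)
    std::printf("simple cubic : %.1f%%\n", 100.0 * simple_cubic);
    std::printf("bcc          : %.1f%%\n", 100.0 * bcc);
    std::printf("fcc / hcp    : %.1f%%\n", 100.0 * fcc_hcp);
}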
FCC structure
Summary of hcp, ccp (fcc) and bcc structures

| S. No | Property | hcp | ccp (fcc) | bcc |
|---|---|---|---|---|
| 1 | Arrangement of particles | Closely packed | Closely packed | Not closely packed |
| 2 | Volume occupied | 74% | 74% | 68% |
| 3 | Coordination number | 12 | 12 | 8 |
| 4 | Type of packing | AB AB AB | ABC ABC | – |
| 5 | Malleability | Less malleable | Malleable | Malleable |
| 6 | Ductility | Ductile | Ductile | Ductile |
| 7 | Examples | Mg, Zn, Cd, Ti, V, Cr, Mo | Cu, Ag, Au, Ca, Sr, Pt | Li, Na, K, Rb, Cs, Fe |
|
Search results
Search: MSC category 06 ( Order, lattices, ordered algebraic structures )
Results 1 - 9 of 9
1. CJM Online first
Dyer, Matthew
On the weak order of Coxeter groups This paper provides some evidence for conjectural relations between extensions of (right) weak order on Coxeter groups, closure operators on root systems, and Bruhat order. The conjecture focused upon here refines an earlier question as to whether the set of initial sections of reflection orders, ordered by inclusion, forms a complete lattice. Meet and join in weak order are described in terms of a suitable closure operator. Galois connections are defined from the power set of $W$ to itself, under which maximal subgroups of certain groupoids correspond to certain complete meet subsemilattices of weak order. An analogue of weak order for standard parabolic subsets of any rank of the root system is defined, reducing to the usual weak order in rank zero, and having some analogous properties in rank one (and conjecturally in general). Keywords:Coxeter group, root system, weak order, latticeCategories:20F55, 06B23, 17B22
2. CJM Online first
Handelman, David
Nearly approximate transitivity (AT) for circulant matrices By previous work of Giordano and the author, ergodic actions of $\mathbf Z$ (and other discrete groups) are completely classified measure-theoretically by their dimension space, a construction analogous to the dimension group used in C*-algebras and topological dynamics. Here we investigate how far from AT (approximately transitive) can actions be which derive from circulant (and related) matrices. It turns out not very: although non-AT actions can arise from this method of construction, under very modest additional conditions, ATness arises; in addition, if we drop the positivity requirement in the isomorphism of dimension spaces, then all these ergodic actions satisfy an analogue of AT. Many examples are provided. Keywords:approximately transitive, ergodic transformation, circulant matrix, hemicirculant matrix, dimension space, matrix-valued random walkCategories:37A05, 06F25, 28D05, 46B40, 60G50
3. CJM 2017 (vol 70 pp. 26)
Bosa, Joan; Petzka, Henning
Comparison Properties of the Cuntz semigroup and applications to C*-algebras We study comparison properties in the category $\mathrm{Cu}$ aiming to lift results to the C*-algebraic setting. We introduce a new comparison property and relate it to both the CFP and $\omega$-comparison. We show differences of all properties by providing examples, which suggest that the corona factorization for C*-algebras might allow for both finite and infinite projections. In addition, we show that R{\o}rdam's simple, nuclear C*-algebra with a finite and an infinite projection does not have the CFP. Keywords:classification of C*-algebras, cuntz semigroupCategories:46L35, 06F05, 46L05, 19K14
4. CJM 2015 (vol 68 pp. 675)
Martínez-de-la-Vega, Veronica; Mouron, Christopher
Monotone Classes of Dendrites Continua $X$ and $Y$ are monotone equivalent if there exist monotone onto maps $f:X\longrightarrow Y$ and $g:Y\longrightarrow X$. A continuum $X$ is isolated with respect to monotone maps if every continuum that is monotone equivalent to $X$ must also be homeomorphic to $X$. In this paper we show that a dendrite $X$ is isolated with respect to monotone maps if and only if the set of ramification points of $X$ is finite. In this way we fully characterize the classes of dendrites that are monotone isolated. Keywords:dendrite, monotone, bqo, antichainCategories:54F50, 54C10, 06A07, 54F15, 54F65, 03E15
5. CJM 2010 (vol 62 pp. 758)
Dolinar, Gregor; Kuzma, Bojan
General Preservers of Quasi-Commutativity Let ${ M}_n$ be the algebra of all $n \times n$ matrices over $\mathbb{C}$. We say that $A, B \in { M}_n$ quasi-commute if there exists a nonzero $\xi \in \mathbb{C}$ such that $AB = \xi BA$. In the paper we classify bijective not necessarily linear maps $\Phi \colon M_n \to M_n$ which preserve quasi-commutativity in both directions. Keywords:general preservers, matrix algebra, quasi-commutativityCategories:15A04, 15A27, 06A99
6. CJM 2003 (vol 55 pp. 3)
Baake, Michael; Baake, Ellen
An Exactly Solved Model for Mutation, Recombination and Selection It is well known that rather general mutation-recombination models can be solved algorithmically (though not in closed form) by means of Haldane linearization. The price to be paid is that one has to work with a multiple tensor product of the state space one started from. Here, we present a relevant subclass of such models, in continuous time, with independent mutation events at the sites, and crossover events between them. It admits a closed solution of the corresponding differential equation on the basis of the original state space, and also closed expressions for the linkage disequilibria, derived by means of M\"obius inversion. As an extra benefit, the approach can be extended to a model with selection of additive type across sites. We also derive a necessary and sufficient criterion for the mean fitness to be a Lyapunov function and determine the asymptotic behaviour of the solutions. Keywords:population genetics, recombination, nonlinear $\ODE$s, measure-valued dynamical systems, Möbius inversionCategories:92D10, 34L30, 37N30, 06A07, 60J25
7. CJM 2002 (vol 54 pp. 757)
Larose, Benoit
Strongly Projective Graphs We introduce the notion of strongly projective graph, and characterise these graphs in terms of their neighbourhood poset. We describe certain exponential graphs associated to complete graphs and odd cycles. We extend and generalise a result of Greenwell and Lov\'asz \cite{GreLov}: if a connected graph $G$ does not admit a homomorphism to $K$, where $K$ is an odd cycle or a complete graph on at least 3 vertices, then the graph $G \times K^s$ admits, up to automorphisms of $K$, exactly $s$ homomorphisms to $K$. Categories:05C15, 06A99
8. CJM 2001 (vol 53 pp. 592)
Perera, Francesc
Ideal Structure of Multiplier Algebras of Simple $C^*$-algebras With Real Rank Zero We give a description of the monoid of Murray-von Neumann equivalence classes of projections for multiplier algebras of a wide class of $\sigma$-unital simple $C^\ast$-algebras $A$ with real rank zero and stable rank one. The lattice of ideals of this monoid, which is known to be crucial for understanding the ideal structure of the multiplier algebra $\mul$, is therefore analyzed. In important cases it is shown that, if $A$ has finite scale then the quotient of $\mul$ modulo any closed ideal $I$ that properly contains $A$ has stable rank one. The intricacy of the ideal structure of $\mul$ is reflected in the fact that $\mul$ can have uncountably many different quotients, each one having uncountably many closed ideals forming a chain with respect to inclusion. Keywords:$C^\ast$-algebra, multiplier algebra, real rank zero, stable rank, refinement monoidCategories:46L05, 46L80, 06F05
9. CJM 1999 (vol 51 pp. 792)
Grätzer, G.; Wehrung, F.
Tensor Products and Transferability of Semilattices In general, the tensor product, $A \otimes B$, of the lattices $A$ and $B$ with zero is not a lattice (it is only a join-semilattice with zero). If $A\otimes B$ is a {\it capped\/} tensor product, then $A\otimes B$ is a lattice (the converse is not known). In this paper, we investigate lattices $A$ with zero enjoying the property that $A\otimes B$ is a capped tensor product, for {\it every\/} lattice $B$ with zero; we shall call such lattices {\it amenable}. The first author introduced in 1966 the concept of a {\it sharply transferable lattice}. In 1972, H.~Gaskill defined, similarly, sharply transferable semilattices, and characterized them by a very effective condition (T). We prove that {\it a finite lattice $A$ is\/} amenable {\it if{}f it is\/} sharply transferable {\it as a join-semilattice}. For a general lattice $A$ with zero, we obtain the result: {\it $A$ is amenable if{}f $A$ is locally finite and every finite sublattice of $A$ is transferable as a join-semilattice}. This yields, for example, that a finite lattice $A$ is amenable if{}f $A\otimes\FL(3)$ is a lattice if{}f $A$ satisfies (T), with respect to join. In particular, $M_3\otimes\FL(3)$ is not a lattice. This solves a problem raised by R.~W.~Quackenbush in 1985 whether the tensor product of lattices with zero is always a lattice. Keywords:tensor product, semilattice, lattice, transferability, minimal pair, cappedCategories:06B05, 06B15
|
# English/Writing-Mechanics-26-ID
### Quality $\frac{\text{educatuon}}{\text{A}}$, $\frac{\text{morals}}{\text{B}}$ and $\frac{\text{principles}}{\text{C}}$ are required for such an important role.
A. education
B. morales
C. principals
D. No change is necessary.
|
# Need an algorithm to calculate a function
1. Jan 23, 2007
### Mentz114
I'm trying to calculate a table of x vs t, with x as the dependent variable from
this formula
$$t + C = \frac{1}{2}\left(x\sqrt{1 - x^2} + \arcsin(x)\right)$$
C=0.4783 is given when t=0 and x = 1/2
I thought it would be simple but my code is giving nonsensical results, viz.
a straight line! I do get the correct answer when t = 0 (x = 1/2) and I can't find a fault in my code. My code also gives a straight line for
t = arcsin(x) + 0.5, which is obviously wrong, since x = sin(t - 1/2).
Has anyone got a general algorithm for this? I'm using simplex but I
haven't tried Newton-Raphson.
This is related to the 'Interesting Oscillator Potential' topic below.
Last edited: Jan 23, 2007
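Since the right-hand side is monotone on (-1, 1) and its derivative simplifies to √(1 − x²), a Newton iteration converges quickly. Here is a sketch of tabulating x against t that way; this is my own illustration, not code from the thread, and only C = 0.4783 is taken from the post.
#include <cmath>
#include <cstdio>

// Right-hand side of the posted relation: t + C = f(x).
double f(double x) {
    return 0.5 * (x * std::sqrt(1.0 - x * x) + std::asin(x));
}

// Solve f(x) = t + C for x by Newton's method, using f'(x) = sqrt(1 - x^2).
double invert(double t, double C, double x0) {
    double x = x0;
    for (int i = 0; i < 50; ++i) {
        double deriv = std::sqrt(1.0 - x * x);
        if (deriv < 1e-12) break;                  // the slope vanishes at x = +/-1
        double dx = (f(x) - (t + C)) / deriv;
        x -= dx;
        if (x > 1.0)  x = 1.0 - 1e-9;              // keep the iterate inside the domain
        if (x < -1.0) x = -1.0 + 1e-9;
        if (std::fabs(dx) < 1e-12) break;
    }
    return x;
}

int main() {
    const double C = 0.4783;                       // from t = 0, x = 1/2
    double x = 0.5;                                // warm-start each solve from the previous x
    for (int k = 0; k <= 6; ++k) {
        double t = 0.05 * k;
        x = invert(t, C, x);
        std::printf("t = %.2f   x = %.6f\n", t, x);
    }
}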
2. Jan 23, 2007
### arildno
You might use power series inversion:
1. Introduce the "tiny" variable $\epsilon=x-\frac{1}{2}$
and find the first few Taylor series terms of the right-hand side about $\epsilon=0$
You have now effectively found t as a power series in $\epsilon$
2. Assume that this power series is invertible, i.e, it exists a function:
$$\epsilon(t)=\sum_{i=0}^{\infty}a_{i}t^{i}$$
If you can find the coefficients $a_{i}$ you're finished!
3.Insert this power series on the epsilon places, and collect terms of equal power in t. (For each finite power of t, only a finite number of terms need to be collected.*)
4. Different powers of t are linearly independent functions, hence all coefficients of the powers must equal zero. This demand of zeroes furnishes you with the equations to determine the coefficients $a_{i}$
*EDIT:
This requires that $a_{0}=0$.
This, however, holds, since $\epsilon=0$ when t=0.
Last edited: Jan 23, 2007
3. Jan 23, 2007
### Mentz114
Thanks, arildno. Food for thought.
|
# Thread: Volume of Revolution
1. ## Volume of Revolution
Find the volume of revolution of the region generated,
bounded by graph $y = 6 - 2x -x^2$
and the indicated line $y = x + 6.$
2. Originally Posted by Shyam
Find the volume of revolution of the region generated,
bounded by graph $y = 6 - 2x -x^2$
and the indicated line $y = x + 6.$
rotated about a line? which one?
3. Originally Posted by skeeter
rotated about a line? which one?
rotated about line y = x+6
4. Originally Posted by Shyam
rotated about line y = x+6
here's the work I did:
After solving the equations of parabola and line, I found their point of intersection.
The point of intersection of parabola and line are (-3, 3) and (0, 6)
$V = \pi \int\limits_{ - 3}^0 {\left[ {\left( {6 - 2x - x^2 } \right)^2 - \left( {x + 6} \right)^2 } \right]} .dx \hfill \\$
$= \pi \int\limits_{ - 3}^0 {\left[ {\left( {x^4 + 4x^3 - 8x^2 - 24x + 36} \right) - \left( {x^2 + 12x + 36} \right)} \right]} .dx \hfill \\$
$= \pi \int\limits_{ - 3}^0 {\left[ {x^4 + 4x^3 - 9x^2 - 36x} \right]} .dx \hfill \\$
$= \pi \left[ \frac{x^5}{5} + x^4 - 3x^3 - 18x^2 \right]_{-3}^0$
$= \pi \left[ \left( \frac{(-3)^5}{5} + (-3)^4 - 3(-3)^3 - 18(-3)^2 \right) - \left( 0 + 0 - 0 - 0 \right) \right]$
$= \pi \left[ \frac{-243}{5} + 81 + 81 + 162 \right]$
$= \pi \left[ \frac{-243}{5} + 324 \right]$
$= \frac{1377\pi}{5}$
This answer seems too big to me. Is it correct?
5. Originally Posted by Shyam
rotated about line y = x+6
what you have shown is the work required for a rotation of the region about the x-axis, not about the line y = x+6.
this is correct ...
$= \pi \left[ \frac{x^5}{5} + x^4 - 3x^3 - 18x^2 \right]_{-3}^0$
one error in the integral evaluation below ... you did the FTC evaluation backwards, should be F(0) - F(-3)
$= \pi \left[ \left( \frac{(-3)^5}{5} + (-3)^4 - 3(-3)^3 - 18(-3)^2 \right) - \left( 0 + 0 - 0 - 0 \right) \right]$
sign error in this next step ... +162 ?
$= \pi \left[ \frac{-243}{5} + 81 + 81 + 162 \right]$
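For what it's worth, with the corrected order F(0) − F(−3) the bracket evaluates to 243/5, so the integral as set up (which, as noted above, is the washer integral for rotation about the x-axis) gives 243π/5 ≈ 152.7. A quick Simpson's-rule check of the integrand (my code, not from the thread):
#include <cmath>
#include <cstdio>

// Integrand of the washer integral set up in the thread.
double g(double x) { return x*x*x*x + 4*x*x*x - 9*x*x - 36*x; }

int main() {
    const double a = -3.0, b = 0.0;
    const int n = 1000;                          // even number of Simpson sub-intervals
    const double h = (b - a) / n;
    double s = g(a) + g(b);
    for (int i = 1; i < n; ++i)
        s += g(a + i * h) * ((i % 2) ? 4.0 : 2.0);
    const double integral = s * h / 3.0;
    const double pi = std::acos(-1.0);
    std::printf("integral = %.6f  (exact 243/5 = 48.6)\n", integral);
    std::printf("volume   = %.6f  (243*pi/5 = %.6f)\n", pi * integral, 243.0 * pi / 5.0);
}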
|
##### Notes
This printable supports Common Core Math Standard HSA-SSE.A.1, HSA-SSE.A.1a
# Interpreting Parts of an Expression (Grade 9)
## Interpreting Parts of an Expression
1.
What do you call the number in front of the variable? For example, the -9 in -9y.
1. term
2. like term
3. coefficient
4. exponent
2.
Which of the following are the coefficient(s) in the expression $15x^2 +3xy - 12y +8 ?$
1. $x, y$
2. $3xy$
3. $8$
4. $15, 3, -12$
3.
How many terms are in the expression below?
$4xy^2+3xy+8z-8$
1. 3
2. 2
3. 1
4. 4
4.
Which of the following are factors of $4x^3(y+2)^2 ?$ Choose all that apply.
1. $4$
2. $y$
3. $x^3$
4. $y+2$
5.
Mike owns a chocolate shop, and is trying to create a new box of chocolate to sell. He is thinking about what to put in it, and how much he will need to charge for it. The candies he is thinking about including are: truffles, T, which cost $1.50 each; pecan chocolates, P, which cost $1.00 each; and solid milk chocolate pieces, M, which cost $0.75 each. He creates an expression to model this, as follows.
$1.5T + P + 0.75M$
What do the coefficients represent in this expression?
1. The cost per chocolate.
2. The total amount of money the box will cost.
3. The average price of the chocolates.
4. The number of chocolates he will include in the new box of chocolates.
6.
Smart Financial is offering a new savings account, with an annual interest rate of 4%. The amount a person will earn by investing $P$ dollars is represented by the expression $P + 0.04tP$, where $t$ is the amount of time (in years) someone's money stays in the account. What does the second term in this expression represent?
1. The rate of growth of their investment.
2. The total amount of money they have in their account.
3. The interest they will earn over the given time period.
4. The original amount of money they invested.
7.
Anna has been recording the distance she ran each day this week. So far, she has run 14.3 miles in total, and today is the last day she will be able to run this week. Although she does not know the exact distance of the route she will run today, she knows her pace very well and it stays constant throughout the run. If she will run for t hours, her total distance for the week is given by the expression $14.3 + 8t$. What does the coefficient 8 represent?
1. The average time it takes to run each mile, in hours.
2. Her pace, in miles per hour.
3. The amount of time she will run for, in hours.
4. The distance she will run, in miles.
8.
Carol manages admissions for a small county fair. She has created two expressions, one for the total number of people who have entered (children and adults), and one for the total amount of money collected (adults pay a higher entrance fee than children do). The expression $c + a$ represents the total number of people admitted, where $c$ is number of children, and $a$ is the number of adults admitted. The expression $0.75c + 1.5a$ represents the total amount of money collected, in dollars. What does the term $1.5a$ represent?
1. The amount charged per adult for entry into the fair.
2. The total number of adults who entered the fair.
3. The amount charged per child for entry into the fair.
4. The total amount of money from adults who have entered the fair.
9.
Beth wants to go on a road trip this summer. She will be taking her car, which gets an average of 19 miles per gallon in fuel economy. Beth creates an expression to calculate the total cost of gas for her trip, $1/19 x y$, where x is the typical price of gas (in dollars per gallon) where she will be driving. What does the factor $y$ represent?
1. The number of miles she expects to drive.
2. The total cost of gas, for one fill up.
3. The number of times she expects she will have to stop to fill up her car with gas.
4. The total amount of gas, in gallons, needed to fill her car's gas tank.
10.
Jake and Will work at a bakery. If J represents the number of cakes Jake can bake per day, W represents the number of cakes Will can bake per day, x represents the number of days Jake works in a week, and y represents the number of days Will works in a week, what does the expression $xJ + yW$ represent?
1. The number of hours Jake and Will work in a week.
2. The number of cakes made by Jake and Will in a week.
3. The total number of cakes both Jake and Will could bake per day.
4. The combined number of days Jake and Will work in a week.
|
Problem E: Innstunguvesen
Students have run into a problem in a computer science lecture at the University of Iceland. Everyone has a laptop, but the lecture hall only has one wall outlet. The students are willing to share this outlet, but some students doubt that one outlet is enough for the whole lecture. Can you determine whether it is? The students can swap the charger in the outlet so quickly that you may assume they do so instantaneously.
Input
The first line of the input contains two numbers, an integer $1 \leq N \leq 10^5$ and a real value $0 \leq r \leq 10^9$ with at most six digits after the decimal. The number $N$ denotes the number of students in the lecture hall and $r$ denotes the number of seconds you get for charging your computer for one second. Note that a computer does not lose charge while it’s being charged. Finally there’s a single line with $N$ integers, $1 \leq b_ i \leq 10^6$, denoting how much time each computer has until it runs out of charge.
Output
Print how much time passes until the first computer will necessarily run out of charge. An answer is considered correct if the absolute or relative error is less than $10^{-6}$. If it is possible to keep all the computers charged indefinitely then print “Endalaust rafmagn”.
Sample Input 1:
3 1
4 4 4
Sample Output 1:
12
Sample Input 2:
3 3
1 2 3
Sample Output 2:
Endalaust rafmagn
|
# Equation numbering by chapters and sections
Posted 5 months ago
Hi, I would like to write a report which includes chapters, sections, and subsections. How do I number equations by chapter/section in Mathematica notebooks? For example, this equation is the 4th equation in chapter 2, section 1: x+y=1 (2.1.4). Kind regards, Omar
Posted 5 months ago
Hi Omar, Assuming that you're using the "Textbook" stylesheet that's found in the menu Format > Stylesheet > Book, and the Cell in question has the Cell style "EquationNumbered", you can add an autonumber for the "Section" to the equation number by modifying the CellFrameLabels for the style "EquationNumbered".
1. Select the notebook that you want to change.
2. In the menus choose Format > Edit Stylesheet... This will embed a custom stylesheet into your notebook.
3. Paste the following Cell expression and click "Yes" when prompted "to interpret the text".
Cell[StyleData["EquationNumbered"],
 CellFrameLabels->{{None, Cell[
    TextData[{"(", CounterBox["BookChapterNumber"], ".", CounterBox["Section"], ".",
      CounterBox["EquationNumbered"], ")"}]]}, {None, None}}]
Posted 5 months ago
Thanks so much Larry.Kind Regards Omar
Posted 5 months ago
Hi Larry, Please, I would like to make the number of the equation (e.g. (2.4.7)) as follows: 2 represents the chapter number, 4 represents the section number, and 7 represents the number of the equation in the chapter, not in the whole document. I mean that when I start a new chapter, say chapter 2, section 5, the sixth equation in this chapter should take the number (2.5.6). Regards, Omar
Hi Omar, Assuming that you are using the aforementioned "Textbook" stylesheet, that you made the modification outlined in my original comment, and that the first cell in the notebook is a "BookChapterNumber" cell, replace that first cell with the following Cell expression:
Cell[TextData[ CounterBox["BookChapterNumber"]], "BookChapterNumber", CounterAssignments->{ParentList, {"BookChapterNumber", 1}}]
I assume that you used the Textbook styled-notebook template as the basis for your document layout. FYI, the template can be accessed from a palette that is summoned via the menu item File > New > Styled Notebook... (see the attached screenshot of the palette). In the palette, look for the icon with the "Textbook" caption (i.e., middle row, left-most column) and click the upper half of the icon. Note, when you hover over the top half of the icon, the word "New" will appear. Click the icon to open a new template.
The addition of the option CounterAssignments->{ParentList, {"BookChapterNumber", 1}} should be the only difference between the original templated "BookChapterNumber" cell and the one above. (Important: the option's right-hand side contains the sublist {"BookChapterNumber", 1}. That designation assigns the value 1 to the "BookChapterNumber" counter. However, the value is immediately incremented by 1 because the appearance of the "BookChapterNumber" cell in the notebook causes the counter to increment, so CounterBox["BookChapterNumber"] will display 2. Generally, the value assigned to "BookChapterNumber" must equal the desired value minus 1.)
If each chapter is in its own separate notebook, replace the "BookChapterNumber" cell with the one above, but be sure to replace the value 1 with a value equal to the desired chapter number minus 1. For example, for chapter 3 replace the 1 with 2, ..., for chapter 10 replace the 1 with 9. Also, remember to add the StyleData["EquationNumbered"] cell, as described in my original comment, to each notebook to correct the equation numbers. Attachments:
|
# Why can't I print out my std::string vector?
## Recommended Posts
For some reason my application crashes when I try to print out the values of my std::vector, which is filled with std::strings.
Here's how I declare the vector :
std::vector <std::string> map
{
"###############################################################",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"###############################################################",
};
And here's how I try to print it out :
for (int i = 0; i < map.size(); i++)
{
printf(map[i].c_str());
}
The crash message I get is :
Exception thrown at 0x01157216 in ConsoleGame.exe: 0xC0000005: Access violation reading location 0x00000018.
Edited by BiiXteR
##### Share on other sites
Weird; that code runs fine for me. (It's missing newline characters though)
##### Share on other sites
Weird; that code runs fine for me. (It's missing newline characters though)
Forgot to mention that the error leads me to line 512 in the file xstring.
##### Share on other sites
Try to put together the smallest program that shows the problem. It is likely that you will figure out what the problem is in the process of producing that program. If not, post it here, so we can reproduce the problem on our own.
##### Share on other sites
Thinking it could be from other sections of your program.
##### Share on other sites
Looks like you are calling a member function on a null object.
Such as:
myThing = null;
myThing->DoSomething();
Look on your stack trace, go back out on the stack to the outermost function that you wrote, and look for a null pointer on any object.
For example, maybe a null pointer value for the string you are trying to print.
Forgot to mention that the error leads me to line 512 in the file xstring.
You're looking in the wrong spot. While it is remotely possible that you found a bug in the standard library, consider that the libraries are used by millions of programmers around the globe every day. The odds that it works correctly for everybody else but is broken for you is astonishingly small.
Always assume the error is in your code, not in the standard library.
for (int i = 0; i < map.size(); i++)
{
printf(map[i].c_str());
}
Following a hunch, it could be that map doesn't exist, or that it doesn't contain a string (it is empty). Edited by frob
##### Share on other sites
Are you by any chance using XCode?
Btw one more thing: It's a security risk to do printf( myCStringVariable ); Perform instead printf( "%s", myCStringVariable );
Cheers
##### Share on other sites
Are you by any chance using XCode?
Btw one more thing: It's a security risk to do printf( myCStringVariable ); Perform instead printf( "%s", myCStringVariable );
Cheers
Could you elaborate on why that's a security risk?
Chances are what he really wanted was std::puts(myCStringVariable) anyway.
##### Share on other sites
Are you by any chance using XCode?
Btw one more thing: It's a security risk to do printf( myCStringVariable ); Perform instead printf( "%s", myCStringVariable );
Cheers
Could you elaborate on why that's a security risk?
Suppose myCStringVariable contains "%s".
Then we have a statement here that can be rewritten as printf("%s").
This will try to print out whatever is on the stack where printf would look for its second argument - only there isn't one.
This could be anything - a valid pointer, a null pointer, arbitrary data - even executable machine code. An attacker could use this to crash the program, or look at the stack frame and work out where the current function's return address is, or do all manner of nasty things one can do with printf specifiers that point at garbage.
The %n specifier in the wrong hands could be particularly nasty because it allows printf to modify an argument passed to it... Edited by Oberon_Command
##### Share on other sites
Looks like you are calling a member function on a null object.
Such as:
myThing = null;
myThing->DoSomething();
Look on your stack trace, go back out on the stack to the outermost function that you wrote, and look for a null pointer on any object.
For example, maybe a null pointer value for the string you are trying to print.
Forgot to mention that the error leads me to line 512 in the file xstring.
You're looking in the wrong spot. While it is remotely possible that you found a bug in the standard library, consider that the libraries are used by millions of programmers around the globe every day. The odds that it works correctly for everybody else but is broken for you is astonishingly small.
Always assume the error is in your code, not in the standard library.
for (int i = 0; i < map.size(); i++)
{
printf(map[i].c_str());
}
Following a hunch, it could be that map doesn't exist, or that it doesn't contain a string (it is empty).
I checked the size of the vector, which for some reason is 0.
I don't understand why though, I've tried giving it a value in the constructor, and in the header file and it still has a value of 0.
Here's some more code :
MapManager.h
#pragma once
#include <vector>
#include <iostream>
class MapManager
{
public:
std::vector <std::string> map =
{
"###############################################################",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"###############################################################"
};
void DrawMap();
static MapManager &GetInstance();
private:
MapManager();
~MapManager();
};
MapManager.cpp
#include "MapManager.h"
MapManager::MapManager()
{
}
void MapManager::DrawMap()
{
std::cout << map[0].c_str() << std::endl;
for (int i = 0; i < map.size(); i++)
{
printf(map[i].c_str());
}
}
MapManager &MapManager::GetInstance()
{
MapManager temp;
return temp;
}
MapManager::~MapManager()
{
}
Edited by BiiXteR
##### Share on other sites
MapManager &MapManager::GetInstance()
{
MapManager temp;
return temp;
}
This is the cause of your problem. You end up with a reference to a destroyed object, because the referent goes out of scope after the return statement. Very bad.
Global variables need to be in static storage. Either make temp a static local, a class variable, or move it out of your class entirely and make it a namespace-level variable. Disguising the global as something else is what's hiding your problem in the first place. Another lesson in why globals are poor practice.
ohh, I forgot to add a static to the temp variables, how did I not notice that o.o
Thank you though :P
##### Share on other sites
for(auto& line : map) {
std::cout << line << "\n";
}
Also, before you 'fix' the Meyers Singleton, why does this class need to be a singleton?
##### Share on other sites
ohh, I forgot to add a static to the temp variables, how did I not notice that o.o
Thank you though :P
Seriously, just don't do that.
Let's have a look at your code.
#pragma once
#include <vector>
// don't need iostream for the header
// as a general rule, don't force consumers of a class to include something they don't need
//#include <iostream>
class MapManager
{
// do you really want map to be public?
// anyone using this class could easily overwrite the contents of it
// public:
std::vector <std::string> map =
{
"###############################################################",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"###############################################################"
};
// ok, this needs to be public
public:
void DrawMap();
// ugh, singleton... no need for it.
// static MapManager &GetInstance();
// don't need these, compiler will auto generate them for us
//private:
// MapManager();
// ~MapManager();
};
cpp
#include "MapManager.h"
// include iostream where it's needed
#include <iostream>
// don't need ctor or dtor
void MapManager::DrawMap()
{
// as khatharr said, but without the evil brace-at-end-of-statement style :p
for(auto& line : map)
{
std::cout << line << "\n";
}
}
// nope nope nope nope nope
//MapManager &MapManager::GetInstance()
//{
// MapManager temp;
// return temp;
//}
and using it
MapManager map;
map.DrawMap();
so much cleaner... less code, less bugs... easy to use, hard to abuse.
Edited by ChaosEngine
##### Share on other sites
As a side note, you should not pass your strings as the format argument to printf(). Either do this:
printf("%s", map[i].c_str());
Or use puts(), since you're not really formatting anything (note: adds a newline for you)
puts(map[i].c_str());
The reason for this is because otherwise it's possible to pass arbitrary format options to printf() without meaning to (say your map contained "%" characters) and it can possibly be dangerous.
Edited by TheComet
##### Share on other sites
ohh, I forgot to add a static to the temp variables, how did I not notice that o.o
Thank you though :P
Seriously, just don't do that.
Let's have a look at your code.
#pragma once
#include <vector>
// don't need iostream for the header
// as a general rule, don't force consumers of a class to include something they don't need
//#include <iostream>
class MapManager
{
// do you really want map to be public?
// anyone using this class could easily overwrite the contents of it
// public:
std::vector <std::string> map =
{
"###############################################################",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"# #",
"###############################################################"
};
// ok, this needs to be public
public:
void DrawMap();
// ugh, singleton... no need for it.
// static MapManager &GetInstance();
// don't need these, compiler will auto generate them for us
//private:
// MapManager();
// ~MapManager();
};
cpp
#include "MapManager.h"
// include iostream where it's needed
#include <iostream>
// don't need ctor or dtor
void MapManager::DrawMap()
{
// as khatharr said, but without the evil brace-at-end-of-statement style :p
for(auto& line : map)
{
std::cout << line << "\n";
}
}
// nope nope nope nope nope
//MapManager &MapManager::GetInstance()
//{
// MapManager temp;
// return temp;
//}
and using it
MapManager map;
map.DrawMap();
so much cleaner... less code, less bugs... easy to use, hard to abuse.
Should I still not do this even if many other classes need MapManager?
(sorry for the super late reply...)
for(auto& line : map) {
std::cout << line << "\n";
}
Also, before you 'fix' the Meyers Singleton, why does this class need to be a singleton?
I made it a singleton since it was gonna be used by many other classes, figured it would be better that way than sending references everywhere.
Edited by BiiXteR
##### Share on other sites
A singleton is rarely a good solution. In a well-designed program you typically don't have that many dependencies, so perhaps you need to rethink your design. Sending the references everywhere at least makes the dependencies explicit. Otherwise you'll try to reuse some piece of code for a new project and you'll realize that the whole project has become one monolithic mess and you can't do it.
If you end up using global state of some sort, what is the advantage of using a singleton over using a global variable?
##### Share on other sites
Should I still not do this even if many other classes needs MapManager? (sorry for the super late reply...)
You've fallen for the most common misuse of the singleton pattern. The singleton is there to ensure a single instance of an object, not a single point of access.
Everyone who misuses a singleton in this fashion says the same thing: "I don't want to pass my X/Manager/God/Controller/Whatever object everywhere". As @Álvaro said, at least then your dependency is obvious.
If you find that you're passing a certain object everywhere down through layers of code, this is a code smell and a sign that you should refactor your code.
That said, sometimes a global really is the lesser evil. Things like a log object for example.
But let's say you look at your options and you decide that it really would be less painful to not pass MapManager everywhere. In that case, just create a global and own that decision.
You have your MapManager class defined as normal (MapManager.h/.cpp). These files should know nothing about the global, otherwise you can't use them without it.
Globals.h
#include "MapManager.h"
// namespaces are great!
namespace InsertGameNameHere
{
namespace Globals
{
// the global map state
extern MapManager TheMap;
}
}
Then in your main.cpp instantiate TheMap
#include "Globals.h"
// instantiate the map. this should be in exactly one place in the code
MapManager InsertGameNameHere::Globals::TheMap;
int main()
{
InsertGameNameHere::Globals::TheMap.DrawMap();
// do more stuff
}
Anywhere else you need to access the map, just include Globals.h without instantiating the map.
Monster.cpp
#include "Globals.h"
namespace InsertGameNameHere
{
void SpawnMonster()
{
Monster monster;
monster.SetPosition(Globals::TheMap.GetRandomSpawnPoint());
// etc.
}
}
##### Share on other sites
Should I still not do this even if many other classes needs MapManager? (sorry for the super late reply...)
You've fallen for the most common misuse of the singleton pattern. The singleton is there is ensure a single instance of an object, not a single point of access.
Everyone who misuses a singleton in this fashion says the same thing: "I don't want to pass my X/Manager/God/Controller/Whatever object everywhere". As @Álvaro said, at least then your dependency is obvious.
If you find that you're passing a certain object everywhere down through layers of code, this is a code smell and a sign that you should refactor your code.
That said, sometimes a global really is the lesser evil. Things like a log object for example.
But let's say you look at your options and you decide that it really would be less painful to not pass MapManager everywhere. In that case, just create a global and own that decision.
You have you MapManager class defined as normal (MapManager.h/.cpp). These files should know nothing about the global, otherwise you can't use them without it.
Globals.h
#include "MapManager.h"
// namespaces are great!
namespace InsertGameNameHere
{
namespace Globals
{
// the global map state
extern MapManager TheMap;
}
}
Then in your main.cpp instantiate TheMap
#include "Globals.h"
// instantiate the map. this should be in exactly one place in the code
MapManager InsertGameNameHere::Globals::TheMap;
int main()
{
InsertGameNameHere::Globals::TheMap.DrawMap();
// do more stuff
}
Anywhere else you need to access the map, just include Globals.h without instantiating the map.
Monster.cpp
#include "Globals.h"
namespace InsertGameNameHere
{
void SpawnMonster()
{
Monster monster;
monster.SetPosition(Globals::TheMap.GetRandomSpawnPoint());
// etc.
}
}
I'll read up some more on singletons as well.
Thanks for the help :)
|
# Ordinary representations associated to modularforms: Étale submodules versus étale quotients
Fix an odd prime $p$ and an integer $r\geq 1$. Fix a Dirichlet character $\chi$ of primitive conductor $p^r$. Let $f\in S_2(p^r,\chi)$ be a normalized newform of weight $2$, level $p^r$ and nebentype $\chi$. Assume $f$ is ordinary at $p$.
Let $K_{f,p}$ be the finite extension of $\mathbb{Q}_p$ generated by the Fourier coefficients of $f$. Associated to $f$ there is a Galois representation $$\varrho_f: G_{\mathbb{Q}} \longrightarrow \mathrm{GL}_2(K_{f,p})$$ which is unramified at all primes different from $p$. By a theorem of Mazur and Wiles, the restriction of $\varrho_f$ to a decomposition subgroup $D_p$ of $G_{\mathbb Q}$ is of the form $$\varrho_{f|D_p}: \begin{pmatrix} \alpha_p & \star \\ 0 & \beta_p \end{pmatrix}$$ in a suitable basis. Here $\alpha_p$ and $\beta_p$ are characters of $D_p$.
Quite a lot is known about the behavior of these two characters, but a precise description of them depends on the way one has constructed $\varrho_f$. To be sure about what we are talking about, let me sketch (and fix) one of the various possible such constructions (all of them are the same up to twist):
Let $X_r:=X_1(p^r)/\mathbb{Q}$ denote the modular curve associated to the moduli problem of classifying pairs $(E,i)$ where $E$ is an elliptic curve and $i$ is an isomorphism between the group scheme $\mu_{p^r}$ and a closed finite flat subgroup scheme of $E$. This model of $X_1(p^r)$ over $\mathbb{Q}$ satisfies that the cusp at infinity is a rational point. Don't confuse this model with another natural model $X'_1(p^r)$ this curve has over $\mathbb{Q}$, which arises by replacing the above level structure $i$ by the level structure consisting of the choice of a point of exact order $p^r$ on $E$. Curves $X_1(p^r)$ and $X'_1(p^r)$ are not isomorphic over $\mathbb{Q}$, but one is the twist of the other over $\mathbb{Q}(\mu_{p^r})$.
Let $A_f/\mathbb{Q}$ denote the abelian variety associated by Eichler-Shimura to $f$. This is a simple abelian variety over $\mathbb{Q}$ equipped with a surjection $\mathrm{Jac}(X_r) \rightarrow A_f$. The endomorphism algebra $\mathrm{End}(A_f)\otimes \mathbb{Q}$ is isomorphic to a number field $K$ such that $d=[K:\mathbb{Q}]=\mathrm{dim}(A_f)$. Let $V_p(A_f):=H^1_{\mathrm{et}}(\bar{A}_f,\mathbb{Q}_p)(1)$ denote the $p$-adic Tate module of $A_f$. The Tate twist "$(1)$" by the cyclotomic character is taken here so that $V_p(A_f)$ is Kummer self-dual, that is to say, there is an isomorphism of Galois modules $V_p(A_f) \simeq \mathrm{Hom}(V_p(A_f),\mathbb{Q}_p(1))$. The Galois group $G_{\mathbb{Q}}$ acts on $V_p(A_f)$ in a natural way, giving rise to a representation $r: G_{\mathbb{Q}} \longrightarrow \mathrm{GL}(V_p(A_f))\simeq \mathrm{GL}_{2d}(\mathbb{Q}_p)$. Since the action of $G_{\mathbb{Q}}$ commutes with the endomorphisms of $A_f$, it follows that $r$ factors through $\prod_{\wp} \mathrm{GL}_{2}(K_{\wp})$, where $\wp$ runs over the prime ideals of $K$ over $p$ and $K_{\wp}$ denotes the completion of $K$ with respect to this prime. Projecting to one of these factors yields the sought-after representation, denoted above $$\varrho_f: G_{\mathbb{Q}} \longrightarrow \mathrm{GL}_2(K_{f,p}).$$
The question I would like to ask is: with this so-defined $\varrho_f$, what is $\alpha_p$ and what $\beta_p$? One of the two is surely the unramified character $\eta_f$ sending $\mathrm{Frob}_p$ to $a_p(f)$ (or its inverse, $\eta_f^{-1}$), and the other encodes also the cyclotomic character and the finite character $\chi$.
There are many papers where one can find explicit formulae describing $\alpha_p$ and $\beta_p$, but most often they state something like "associated to an ordinary newform $f$ there is a Galois representation whose restriction to $D_p$ is ... with $\alpha_p = ...$ and $\beta_p= ...$". My question asks for something more precise: if $\varrho_f$ is the one constructed above, who is $\alpha_p$ and who is $\beta_p$?
The usual normalization for the Galois representation attached to a cuspidal eigenform $f$ is to take either the étale cohomology or the usual Tate module of the abelian variety cut out in the Jacobian of the modular curve by the morphism $\lambda_{f}$ sending the Hecke operator $T(\ell)$ to $a_\ell$, where $T(\ell)f=a_\ell f$. If I understood correctly your careful explanations, your choice of normalization is thus slightly unusual, because you chose to take étale cohomology and then twisted by the cyclotomic character.
Under the usual étale normalization, the determinant sends the geometric Frobenius morphism $Fr(\ell)$ for $\ell\neq p$ to $\chi(\ell)\ell$ so under your normalization, the determinant of $V_{p}(A_{f})$ sends the geometric Frobenius $Fr(\ell)$ for $\ell\neq p$ to $\chi(\ell)\ell^{-1}$ (the $-1=2-1-2$ comes from the determinant of the étale representation, contributing a $2-1$ and the Tate twist, contributing a $-2$) so the determinant of $V_p(A_f)$ is $\chi_{cyc}\chi$.
In your situation, neither $\alpha$ nor $\beta$ is $\eta_{f}$. Rather, $\alpha$ is $\chi_{cyc}\eta_f$ and $\beta$ is $\chi\eta_f^{-1}$.
It is not clear to me whether you want a proof of this statement or not, but anyway, here follows a sketch of the simplest that I know (and it is already quite hard, in fact). First notice that it is enough to prove that $\alpha_{et}$, the character appearing in the étale normalization without twist, is $\eta_{f}$. In order to prove this latter claim, first assume that $\pi(f)$ is a principal series at $p$, then appeal to the existence of compatible system of Galois representations, then switch to an auxiliary prime $q$ and then use local-global compatibility of the Langlands correspondence (proved in this case by Deligne, Langlands and Carayol). This compatibility shows that the $p$-divisible group attached to $H^1_{et}$ has a subgroup on which $T(p)$ acts by its invertible eigenvalue (and hence on which $Fr(p)$ acts by the invertible eigenvalue of $T(p)$). In order to prove the claim when $\pi(f)_{p}$ is Steinberg and to complete the proof, use the density of principal series points in a Hida family to reduce to the principal series case.
• Thanks a lot, Olivier, it's a very interesting way of proving that, pretty different to Mazur-Wiles'. Let's use the etale cohomology with no twist if you prefer, in which case you tell me $\alpha=\eta_f$ and $\beta=\chi \chi_{\mathrm{cyc}}^{-1} \eta_f^{-1}$. Where are you using the model of $X_r$ I fixed? If instead of that, we use $X'_r$, what is according to you $\alpha'$ and $\beta'$? Perhaps $\alpha'=\eta_f \chi^{-1}$ and $\beta'=\chi_{\mathrm{cyc}}^{-1}\eta_f^{-1}$? Oct 22 '13 at 21:58
• Your choice of a model is used in my sketch of proof to choose an adelic compact open subgroup in the Langlands correspondence: with a 1 in the below right corner or in the upper left corner mod $p^r$. Your model corresponds to 1 in the below right corner. Choosing the other model, or the other compact open subgroup, amounts to the twisting involution on GL2 and thus to the Fricke involution on the modular curves. I don't know offhand what happens to $\alpha$ and $\beta$ then and anyway recommend you work out your own normalization carefully. Mistakes are too easy to make otherwise. Oct 22 '13 at 22:17
|
# Turbulence modification by periodically modulated scale-depending forcing
A.K. Kuczaj, B.J. Geurts, D. Lohse, W. van de Water
Research output: Contribution to conference › Paper › peer-review
## Abstract
The response of turbulent flow to time-modulated forcing is studied by direct numerical simulation of the Navier-Stokes equations. The forcing is modulated via periodic energy input variations at a frequency $\omega$. Such forcing of the large-scales is shown to yield a response maximum at frequencies in the range of the inverse of the large-eddy turnover time. Time-modulated broad-band forcing is also studied in which a wide spectrum of length-scales is forced simultaneously. If smaller length-scales are explicitly agitated by the forcing, the response maximum is found to occur at higher frequencies and to become less pronounced. In case the forced spectrum is sufficiently wide, a response maximum was not observed. At sufficiently high frequencies the amplitude of the kinetic energy response decreases as $1/ \omega$, consistent with theoretical predictions.
Original language: English. Number of pages: 4. Published: 2006. Event: Conference on Turbulence and Interactions, Porquerolles, France: Proceedings of the Conference on Turbulence and Interactions, 29 May 2006 – 2 Jun 2006.
### Conference
Conference on Turbulence and Interactions, Porquerolles, France, 29/05/06 – 2/06/06
• EWI-7521
• IR-63572
• METIS-236805
|
# 1.1 Introduction to whole numbers (Page 6/13)
Page 6 / 13
## Practice makes perfect
Identify Counting Numbers and Whole Numbers
In the following exercises, determine which of the following numbers are (1) counting numbers and (2) whole numbers.
$0,\frac{2}{3},5,8.1,125$
1. 5, 125
2. 0, 5, 125
$0,\frac{7}{10},3,20.5,300$
$0,\frac{4}{9},3.9,50,221$
1. 50, 221
2. 0, 50, 221
$0,\frac{3}{5},10,303,422.6$
Model Whole Numbers
In the following exercises, use place value notation to find the value of the number modeled by the $\text{base-10}$ blocks.
561
407
Identify the Place Value of a Digit
In the following exercises, find the place value of the given digits.
$579,601$
1. 9
2. 6
3. 0
4. 7
5. 5
1. thousands
2. hundreds
3. tens
4. ten thousands
5. hundred thousands
$398,127$
1. 9
2. 3
3. 2
4. 8
5. 7
$56,804,379$
1. 8
2. 6
3. 4
4. 7
5. 0
1. hundred thousands
2. millions
3. thousands
4. tens
5. ten thousands
$78,320,465$
1. 8
2. 4
3. 2
4. 6
5. 7
Use Place Value to Name Whole Numbers
In the following exercises, name each number in words.
$1,078$
One thousand, seventy-eight
$5,902$
$364,510$
Three hundred sixty-four thousand, five hundred ten
$146,023$
$5,846,103$
Five million, eight hundred forty-six thousand, one hundred three
$1,458,398$
$37,889,005$
Thirty-seven million, eight hundred eighty-nine thousand, five
$62,008,465$
The height of Mount Rainier is $14,410$ feet.
Fourteen thousand, four hundred ten
The height of Mount Adams is $12,276$ feet.
Seventy years is $613,200$ hours.
Six hundred thirteen thousand, two hundred
One year is $525,600$ minutes.
The U.S. Census estimate of the population of Miami-Dade county was $2,617,176.$
Two million, six hundred seventeen thousand, one hundred seventy-six
The population of Chicago was $2,718,782.$
There are projected to be $23,867,000$ college and university students in the US in five years.
Twenty-three million, eight hundred sixty-seven thousand
About twelve years ago there were $20,665,415$ registered automobiles in California.
The population of China is expected to reach $1,377,583,156$ in $2016.$
One billion, three hundred seventy-seven million, five hundred eighty-three thousand, one hundred fifty-six
The population of India is estimated at $1,267,401,849$ as of July $1,2014.$
Use Place Value to Write Whole Numbers
In the following exercises, write each number as a whole number using digits.
four hundred twelve
412
two hundred fifty-three
thirty-five thousand, nine hundred seventy-five
35,975
sixty-one thousand, four hundred fifteen
eleven million, forty-four thousand, one hundred sixty-seven
11,044,167
eighteen million, one hundred two thousand, seven hundred eighty-three
three billion, two hundred twenty-six million, five hundred twelve thousand, seventeen
3,226,512,017
eleven billion, four hundred seventy-one million, thirty-six thousand, one hundred six
The population of the world was estimated to be seven billion, one hundred seventy-three million people.
7,173,000,000
The age of the solar system is estimated to be four billion, five hundred sixty-eight million years.
Lake Tahoe has a capacity of thirty-nine trillion gallons of water.
39,000,000,000,000
The federal government budget was three trillion, five hundred billion dollars.
Round Whole Numbers
In the following exercises, round to the indicated place value.
Round to the nearest ten:
1. $386$
2. $2,931$
1. 390
2. 2,930
Round to the nearest ten:
1. $792$
2. $5,647$
Round to the nearest hundred:
1. $13,748$
2. $391,794$
1. 13,700
2. 391,800
Round to the nearest hundred:
1. $28,166$
2. $481,628$
Round to the nearest ten:
1. $1,492$
2. $1,497$
1. 1,490
2. 1,500
Round to the nearest thousand:
1. $2,391$
2. $2,795$
Round to the nearest hundred:
1. $63,994$
2. $63,949$
1. $64,000$
2. $63,900$
Round to the nearest thousand:
1. $163,584$
2. $163,246$
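As an editorial aside before the everyday-math problems, the rounding answers above can be spot-checked in Python; the helper round_to is a made-up name, and it uses the round-half-up convention these exercises follow:
```python
def round_to(n, place):
    """Round the whole number n to the nearest `place` (10, 100, 1000, ...)."""
    # Add half of the place value, then drop the remainder (round half up).
    return ((n + place // 2) // place) * place

# Spot checks against the answers given above:
print(round_to(386, 10), round_to(2931, 10))        # 390 2930
print(round_to(13748, 100), round_to(391794, 100))  # 13700 391800
print(round_to(63994, 100), round_to(63949, 100))   # 64000 63900
```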
## Everyday math
Writing a Check Jorge bought a car for $24,493. He paid for the car with a check. Write the purchase price in words.
Twenty-four thousand, four hundred ninety-three dollars
Writing a Check Marissa’s kitchen remodeling cost $18,549. She wrote a check to the contractor. Write the amount paid in words.
Buying a Car Jorge bought a car for $24,493. Round the price to the nearest:
1. ten dollars
2. hundred dollars
3. thousand dollars
4. ten-thousand dollars
1. $24,490
2. $24,500
3. $24,000
4. $20,000
Remodeling a Kitchen Marissa’s kitchen remodeling cost $18,549. Round the cost to the nearest:
1. ten dollars
2. hundred dollars
3. thousand dollars
4. ten-thousand dollars
Population The population of China was $1,355,692,544$ in $2014.$ Round the population to the nearest:
1. billion people
2. hundred-million people
3. million people
1. $1,000,000,000$
2. $1,400,000,000$
3. $1,356,000,000$
Astronomy The average distance between Earth and the sun is $149,597,888$ kilometers. Round the distance to the nearest:
1. hundred-million kilometers
2. ten-million kilometers
3. million kilometers
## Writing exercises
In your own words, explain the difference between the counting numbers and the whole numbers.
Answers may vary. The whole numbers are the counting numbers with the inclusion of zero.
Give an example from your everyday life where it helps to round numbers.
## Self check
After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.
If most of your checks were...
…confidently. Congratulations! You have achieved the objectives in this section. Reflect on the study skills you used so that you can continue to use them. What did you do to become confident of your ability to do these things? Be specific.
…with some help. This must be addressed quickly because topics you do not master become potholes in your road to success. In math, every topic builds upon previous work. It is important to make sure you have a strong foundation before you move on. Who can you ask for help? Your fellow classmates and instructor are good resources. Is there a place on campus where math tutors are available? Can your study skills be improved?
…no—I don’t get it! This is a warning sign and you must not ignore it. You should get help right away or you will quickly be overwhelmed. See your instructor as soon as you can to discuss your situation. Together you can come up with a plan to get you the help you need.
|
# [DR011] Neural Turing Machine
Posted on July 08, 2017
This popular work combines a traditional neural network with an extra memory. By controlling the read heads and write heads, the network, or controller, is able to store and retrieve information from the memory. The memory accessing resembles a Turing machine writing to and reading from a paper tape.
Although researchers were astonished at its advent, I think the model itself is just an extended version of LSTM if you treat the memory module as a special gate. However, training such a meticulously designed model to handle data properly is a very admirable effort. The model enables researchers to explore what is learned and to express the trained model in the form of pseudo-code. Apart from the design, the experiments (copying and sorting sequences) are totally different from traditional tasks.
If you are familiar with RNN/LSTM and the attention model (check my previous post), you may interpret the Neural Turing Machine (NTM) as an improved soft-itemwise attention model that operates on memory chunks.
So basically, NTM can be summarized into the first and second figure introduced in this post.
NTM is composed of a controller and a memory. The controller receives the input, communicates with the memory, and produces the output.
The memory at time step $t$ is an $N \times M$ matrix $M_t$, where $N$ is the number of slots in the memory, and $M$ is the size of each memory slot. To address a location in the memory, they use a normalized weighting $w_t$, such that $\sum_i w_t(i) = 1$ and $0 \leq w_t(i) \leq 1$.
The memory reading $r_t$ given address vector $w_t$ is a weighted sum over all rows: $r_t = \sum_i w_t(i)\, M_t(i)$.
The writing procedure is like the forget gate and output gate in traditional LSTM. In their terms, an erase vector $e_t$ and an add vector $a_t$ are learned to control the memory decay and addition respectively: $\tilde{M}_t(i) = M_{t-1}(i)\,[\mathbf{1} - w_t(i)\, e_t]$, followed by $M_t(i) = \tilde{M}_t(i) + w_t(i)\, a_t$.
Stop here and you will find that the NTM is highly similar to LSTM, except that in LSTM the memory is updated by fully connecting it with its previous copy and the current input, while in NTM the update of the memory is controlled with an attention mechanism.
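To make the read/write equations concrete, here is a minimal NumPy sketch; the array names are mine and this only paraphrases the equations above, it is not the authors' code:
```python
import numpy as np

N, M = 128, 20                        # number of slots, slot size
memory = np.random.rand(N, M)         # M_t: the external memory
w = np.random.rand(N); w /= w.sum()   # w_t: normalized addressing weights

# Reading: weighted sum of the memory rows.
r = w @ memory                        # shape (M,)

# Writing: erase then add, both gated by the same weighting.
e = np.random.rand(M)                 # erase vector, entries in [0, 1]
a = np.random.rand(M)                 # add vector
memory = memory * (1 - np.outer(w, e)) + np.outer(w, a)
```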
Now, how to generate the addressing vector $w_t$ seems to be a crucial problem.
Let’s dive into it step by step.
During Content Addressing, they learn $w_t^c$ by calculating $w_t^c(i) = \frac{\exp\left(\beta_t K[k_t, M_t(i)]\right)}{\sum_j \exp\left(\beta_t K[k_t, M_t(j)]\right)}$,
where $K[\cdot, \cdot]$ is a measure of similarity, $k_t$ is a learned key, and $\beta_t$ a key strength. I called their addressing procedure "an improved soft-itemwise attention" for a reason: this part is virtually a standard soft-attention mechanism.
However, the use of $K[\cdot, \cdot]$ still surprises me. To calculate a traditional soft-attention, we usually just learn a vector and normalize it over positions. NTM learns a key, or a probe, to distribute larger weight to the similar memories. It is quite paradoxical. On one hand, if NTM wants to locate a memory slot $i_0$ precisely, it has to recall and generate a $k_t$ that looks like $M_t(i_0)$. On the other hand, NTM can locate a slot by recalling a fuzzy key $k_t$ and then retrieve the precise version of that memory. The authors relate it to Hopfield networks, while I think it is more like a Self-Organizing Map or a Winner-Takes-All model.
During Interpolation, a standard forget gate is employed: $w_t^g = g_t\, w_t^c + (1 - g_t)\, w_{t-1}$, with gate $g_t \in (0, 1)$.
During Convolutional Shift, a shift vector $s_t$ is trained to circularly shift $w$: $\tilde{w}_t(i) = \sum_{j=0}^{N-1} w_t^g(j)\, s_t(i - j)$, with indices taken modulo $N$.
Before Convolutional Shift, the address $w_t^{\cdot}$ is a soft-attention vector, which means nothing will change if you shuffle the memory locations. Shifting in addressing allows the controller to traverse all memory by simply shifting with unit step, which proves to be an essential tool to arrange memory.
During Sharpening, $w_t$ is re-normalized into a sparser vector by $w_t(i) = \frac{\tilde{w}_t(i)^{\gamma_t}}{\sum_j \tilde{w}_t(j)^{\gamma_t}}$, where $\gamma_t \geq 1$.
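And a NumPy sketch of the whole addressing pipeline (again a paraphrase with made-up names; $s$ is taken here as a full length-$N$ distribution over circular shifts):
```python
import numpy as np

def address(memory, w_prev, k, beta, g, s, gamma):
    """One NTM addressing step: content -> interpolate -> shift -> sharpen."""
    # Content addressing: cosine similarity against the key, scaled by beta.
    sim = memory @ k / (np.linalg.norm(memory, axis=1) * np.linalg.norm(k) + 1e-8)
    w_c = np.exp(beta * sim)
    w_c /= w_c.sum()
    # Interpolation with the previous weighting, gate g in [0, 1].
    w_g = g * w_c + (1 - g) * w_prev
    # Circular convolution with the shift distribution s.
    n = len(w_g)
    w_s = np.array([sum(w_g[j] * s[(i - j) % n] for j in range(n)) for i in range(n)])
    # Sharpening with gamma >= 1, then renormalize.
    w = w_s ** gamma
    return w / w.sum()
```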
I strongly recommend you to read the experiment part by yourselves. It is very interesting, innovative and incisive.
[DR011]: There are many details I want to comment on:
(1) the use of $K[\cdot, \cdot]$ is novel. It allows the controller to retrieve precise memory by recalling a fuzzy one. Quite like how human searches for information. I wonder what will happen if we change the cosine metric with other metrics and what happens when it is replaced with a standard soft-attention vector.
(2) The design of shifting plays a significant part in handling sequences. Moving relatively rather than absolutely along the tape is a basic operation of a Turing Machine. What will happen if we add the shifting part to LSTM? Maybe it can then work as well as NTM.
(3) As impressive as their experiments are, they lack ablation experiments to examine how much each component in the addressing contributes.
(4) Another great benefit of having an external memory is that a program can set up the environment without inputs, like setting up global variables before the major loops in many algorithms. However, NTM does not have that spare time to set up the memory in the experiments. It is required to output immediately after the input terminates.
(5) NTM fails to establish feasible loops on its own. It can loop as the input sequence comes in. However, it fails to create a loop counter in general (in Section 4.2). I think introducing some integer registers or discrete operations might solve this problem.
(6) Disentangling and complicating neural networks is always a hot topic. It is natural that researchers will find the detailed model much easier to train than the black-box one, like NTM versus LSTM. The extra bonus is that we can visualize and understand how the neural network accomplishes the mission. However, although the feature is learned by the neural network itself, we actually introduce many hand-crafted operations into the model design itself. It is ironic that NTM, which aims to learn more generally than LSTM, fails to generalize on the design of the model. We have already outshone hand-crafted programs via machine learning, surpassed hand-crafted features via deep learning, and are going to transcend hand-crafted models via some kind of general learning. (Maybe Genetic Algorithms? :D) Or maybe a model with such generalizing ability does not exist, according to the No Free Lunch Theorem.
|
# Math Help - Solving equations?
1. ## Solving equations?
I have an equation:
$\frac{7}{3+a} = \frac{2}{a-2}$
Can I solve it this way?
$\frac{6a + 2a}{7a - 14} = \frac{8a}{7a - 14} = a - 14$
The method I have used is to cross multiply, but I am unsure?
2. ## Re: Solving equations?
Originally Posted by David Green
I have an equation;
7 = 2
3+a a - 2
You can say $\frac{7}{3+a}=\frac{2}{a-2}$
$7a-14 = 6+2a$.
Now solve for $a$ .
3. ## Re: Solving equations?
Originally Posted by Plato
You can say $\frac{7}{3+a}=\frac{2}{a-2}$
$7a-14 = 6+2a$.
Now solve for $a$ .
Does that mean that now there is two (a) I have to factor one out?
4. ## Re: Solving equations?
Originally Posted by David Green
Does that mean that now there is two (a) I have to factor one out?
No, this is a linear equation with variables on both sides. You need the two terms with "a" on one side, and everything else on the other side.
5. ## Re: Solving equations?
Originally Posted by David Green
Does that mean that now there is two (a) I have to factor one out?
I don't even know what that question means, much less how to answer it.
What level are you on?
What are you studying?
Can you solve $7a-14=6+2a$ for $a~?$
6. ## Re: Solving equations?
Originally Posted by Plato
I don't even know what that question means, much less how to answer it.
What level are you on?
What are you studying?
Can you solve $7a-14=6+2a$ for $a~?$
I am only a beginner
Introduction to maths
a(7 - 14)(6 + 2) ???
7a - 14 = 6 + 2a
7a - 2a = 6 + 14
5a = 20
a = 4
how am I doing!
7. ## Re: Solving equations?
Originally Posted by David Green
I am only a beginner
Introduction to maths
\begin{align*} \frac{7}{3+a}&=\frac{2}{a-2} \\ 7a-14 &=6+2a\\ 5a&=20\text{ subtract 2a and add 14} \\ a&=4\end{align*}
8. ## Re: Solving equations?
Originally Posted by Plato
\begin{align*} \frac{7}{3+a}&=\frac{2}{a-2} \\ 7a-14 &=6+2a\\ 5a&=20\text{ subtract 2a and add 14} \\ a&=4\end{align*}
Thank you, so I did get it right with a little help, after all "every little helps"
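As an editorial aside, the solution can also be checked mechanically with SymPy (assuming it is installed); the variable name a below mirrors the thread:
```python
from sympy import symbols, Eq, solve

a = symbols('a')
print(solve(Eq(7 / (3 + a), 2 / (a - 2)), a))  # [4]
```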
|
# Marginal distribution of sum of Poisson random variables from i=2 to n
This is a simple question but I'm not sure what to do when we're not summing the distributions from $i=1$ to $i=n$.
Let's assume $X_1, \dots, X_n$ are i.i.d observations sampled from a Poisson distribution with parameter $\lambda > 0$. Let $S = \sum_{i=1}^{n} X_i$ and $W = \sum_{i=2}^{n} X_i$.
What are the marginal distributions of $S$ and $W$?
I think the marginal distribution of $S$ is $e^{-\lambda} \sum_{i=1}^{n} \dfrac{\lambda^{x_i}}{{x_i}!}$. But what about W? Do I just take the marginal distribution of $S$ and subtract one Poisson random variable from it?
Since $S$ is the sum of $n$ Poisson random variables with parameter $\lambda$, $S$ should be Poisson with parameter $n\lambda$. By similar logic, $W$ is the sum of $n-1$ Poisson random variables with parameter $\lambda$, so $W$ must also be Poisson with parameter $(n-1)\lambda$.
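A quick way to see this, added for completeness: the probability generating function of each $X_i$ is $E[t^{X_i}] = e^{\lambda(t-1)}$, so by independence $E[t^S] = \prod_{i=1}^{n} E[t^{X_i}] = e^{n\lambda(t-1)}$ and $E[t^W] = e^{(n-1)\lambda(t-1)}$. These are exactly the generating functions of $\text{Poisson}(n\lambda)$ and $\text{Poisson}((n-1)\lambda)$, so $P(S = k) = e^{-n\lambda}\dfrac{(n\lambda)^k}{k!}$ and $P(W = k) = e^{-(n-1)\lambda}\dfrac{((n-1)\lambda)^k}{k!}$.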
|
Question
A student takes an exam containing 15 true or false questions. If the student guesses, what is the probability that he will get exactly 13 questions right?
1. niczorrrr
The probability that a student will get exactly 13 questions right is 0.003.
Let E be an event that student gives 13 right answers.
According to the given question.
An exam contain total number of questions is 15.
Now,
The probability that a student gives a right answer, p = 1/2 = 0.5
And, the probability that a student gives a wrong answer, q = 1 – 1/2 = 1/2 = 0.5
Therefore,
The probability that a student will get exactly 13 questions right is given by
⇒ P(E) = C(15, 13) p^(13)q^(15 – 13)
⇒ P(E) = (15!/13!2!)(0.5)^13(0.5)^2
⇒ P(E) = (15(14)/2 ) × ( 0.00012) × 0.25
⇒ P(E) = 105 × 0.00012 × 0.25
⇒ P(E) = 0.00315
Hence, the probability that a student will get exactly 13 questions right is 0.003.
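For reference (not part of the original answer), the exact value is $\binom{15}{13}(0.5)^{15} = 105/32768 \approx 0.0032$; the figure 0.00315 above comes from rounding $0.5^{13}$ to 0.00012. A quick check with SciPy, assuming it is available:
```python
from scipy.stats import binom

# P(X = 13) for X ~ Binomial(n=15, p=0.5)
print(binom.pmf(13, 15, 0.5))  # ~0.0032
```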
|
# GMAT Problem Solving (PS)
Topics Author Replies Views Last post
Announcements
100
150 Hardest and easiest questions for PS Tags:
Bunuel
5
15850
20 May 2016, 07:55
698
GMAT PS Question Directory by Topic & Difficulty
bb
0
257188
22 Feb 2012, 11:27
Topics
Three boxes have an average weight of 7 kg and a median
getzgetzu
12
2028
05 Mar 2013, 07:14
5
On an aerial photograph, the surface of a pond appears as
M8
8
3575
21 Jul 2015, 02:42
5
During a trip, Francine traveled x percent of the total
M8
3
8128
01 Mar 2013, 03:56
7
An equilateral triangle that has an area of 9*3^1/2
getzgetzu
10
1833
21 Nov 2013, 01:49
4
Of the total amount that Jill spent on a shopping trip, excluding taxe
dinesh8
5
2914
20 Jul 2015, 14:14
5
For which of the following functions is f(a+b)=f(b)+f(a) for
lan583
3
5379
28 Jul 2014, 13:59
2
Machine A and machine B are each used to manufacture 660 spr
gmatmba
7
1728
27 Jun 2014, 02:47
40
At the end of the day, February 14th, a florist had 120 Go to page: 1, 2 Tags: Difficulty: 700-Level, Overlapping Sets, Word Problems, Source: Grockit
21
8429
28 Feb 2016, 10:25
1
Of the three-digit integers greater than 700, how many have two digits Tags:
Rayn
4
1583
10 Jan 2015, 05:51
8
The main ingredient in a certain prescription drug capsule cost $500 kuristar 12 10140 14 Jan 2016, 04:43 7 How many odd numbers between 10 and 1,000 are the squares of integers? MBAlad 6 9917 01 Jun 2015, 01:42 6 In the addition table shown above, what is the value of m + shampoo 3 6798 26 Aug 2014, 12:02 21 A certain bag of gemstones is composed of two-thirds BG 11 5309 28 Feb 2016, 10:35 4 If the average (arithmetic mean) of 5 positive temperatures SimaQ 7 3429 03 Dec 2013, 02:05 19 A bullet train leaves Kyoto for Tokyo traveling 240 miles per hour at buckkitty 17 3515 26 Feb 2016, 05:11 12 When a truck travels at 60 miles per hour, it uses 30% more faifai0714 14 3720 17 Jan 2015, 22:39 3 (1001^2-999^2)/(101^2-99^2)= Googlore 6 1712 07 Jan 2014, 23:49 18 A store currently charges the same price for each towel that Go to page: 1, 2 Tags: Difficulty: 600-700 Level, Roots, Word Problems, Source: PowerPrep chiragr 30 14833 01 Mar 2014, 06:49 Three grades of milk are 1 percent, 2 percent and 3 percent njvenkatesh 5 2008 17 Oct 2012, 04:53 20 The chart below lists the odds that a horse will place in shobhitb 16 2366 19 Oct 2015, 11:54 20 When positive integer x is divided by positive integer y, lan583 8 7914 06 Sep 2013, 00:46 31 Pipe P can drain the liquid from a tank in 3/4 the time that X & Y 11 3733 08 May 2016, 04:31 5 A computer can perform 1,000,000 calculations per second. At this rate Yurik79 10 2924 10 Jun 2015, 02:17 13 Salesperson A's compensation for any week is$360 plus 6
jbsears
16
5142
13 Nov 2014, 10:19
3
If an integer n is to be chosen at random from the integers 1 to 96,
GMATCUBS21
15
1766
24 Apr 2015, 08:26
3
Working alone, Printers X, Y, and Z can do a certain printin
necromonger
3
1809
06 Mar 2014, 00:20
22
Joshua and Jose work at an auto repair center with 4 other Go to page: 1, 2 Tags: Difficulty: Sub-600 Level, Probability, Source: GMAT Prep
aiming700plus
23
4563
05 Feb 2016, 12:02
13
If n is positive, which of the following is equal to
kook44
20
5551
28 Mar 2016, 04:08
22
On July 1 of last year, total employees at company E was dec
M8
6
7906
16 Sep 2013, 16:05
16
A qualified worker digs a well in 5 hours. He invites 2 appr Go to page: 1, 2 Tags: Difficulty: 700-Level, Work/Rate Problems, Source: GMAT Club Tests
Hermione
26
3752
24 Jan 2016, 02:27
31
If d is the standard deviation x, y, and z, what is the stan
gmacvik
7
10117
02 Oct 2013, 03:30
2
If an item that originally sold for z dollars was marked up
humans
3
3360
24 Nov 2014, 06:33
6
If f(x) = 4x^2 - 36x + 77 for all real values of x from -10 and 10,
kevincan
8
1219
24 Jul 2015, 01:30
4
The probability of pulling a black bal out of a glass jar is
andrecrompton
4
1206
13 Sep 2014, 07:04
31
If x is a positive integer, what is the units digit of
MA
18
6162
18 Mar 2016, 02:17
22
Al can complete a particular job in 8 hours. Boris can
X & Y
13
4493
07 Dec 2015, 18:49
37
There are three secretaries who work for four departments. Go to page: 1, 2 Tags: Difficulty: 700-Level, Combinations, Geometry, Probability, Source: Kaplan Advanced
mahesh004
20
3967
10 Jun 2015, 10:31
9
Together, Andrea and Brian weigh p pounds; Brian weighs 10 p
Dallas Jockey
12
2521
17 Jan 2016, 14:39
6
In N is a positive integer less than 200, and 14N/60 is an integer, th
likar
4
5434
21 Dec 2014, 04:17
7
A ladder 25 feet long is leaning against a wall that is perp
Meow
15
2483
18 Jun 2014, 00:56
35
A boat traveled upstream a distance of 90 miles at an
CfaMiami
9
23614
10 Feb 2014, 22:12
20
Tanya prepared 4 letters to be sent to 4 different
tealeaflin
11
6558
07 Jul 2014, 12:22
14
The Sum of first N consecutive odd integers is N^2. What is
ghantark
15
6667
24 Apr 2016, 01:23
33
5 blue marbles, 3 red marbles and 4 purple marbles are Go to page: 1, 2 Tags: Difficulty: 700-Level, Probability, Source: 800 Score
londonluddite
20
9996
02 Jan 2016, 19:45
2
The equation (M + 6)/36 = (p – 7)/21 relates two temperature
SimaQ
6
2994
26 Nov 2015, 16:58
13
A part-time employee whose hourly wage was increased by 25
SimaQ
15
5321
26 Aug 2015, 21:32
27
Currently, y percent of the members on the finance committee
u2lover
16
4549
02 Mar 2016, 23:30
4
S is the infinite sequence S1 = 2, S2 = 22, S3 = 222,...S =
u2lover
13
1475
11 Dec 2014, 10:29
52
Each digit in the two-digit number G is halved to form a new Go to page: 1, 2 Tags: Difficulty: 700-Level, Arithmetic, Word Problems
u2lover
37
7886
12 Mar 2016, 16:21
11
Given that p is a positive even integer with a positive
u2lover
17
3413
05 Oct 2015, 00:02
|
# Help with Multivariate SDE Timeseries
I think we had it right the first time. error_s is just a vector of i.i.d. standard normals
error_s = pm.Normal('error_s',mu=0.0,sd=1, shape=len(returns))
Though I’m still getting an error about input dimension
ValueError: Input dimension mis-match. (input[0].shape[0] = 3812, input[1].shape[0] = 3813)
I got it to run but the results don’t make sense. I don’t think this is the right way to solve the problem. The additional normal random variable required to pass in the observed returns is undesirable. There’s got to be a way to write a multivariate class to support this
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
from pymc3.distributions.timeseries import EulerMaruyama
returns = np.genfromtxt(pm.get_data("SP500.csv"))
def lin_sde(Vt, sigma_v, rho, beta_v, alpha_v, error_s):
    return alpha_v + beta_v * Vt + sigma_v * np.sqrt(Vt) * rho * error_s, sigma_v * np.sqrt(Vt) * np.sqrt(1-rho*rho)
with pm.Model() as sp500_model:
    alpha_v=pm.Normal('alpha',0,sd=100)
    beta_v=pm.Normal('beta',0,sd=10)
    sigma_v=pm.InverseGamma('sigmaV',2.5,0.1)
    rho=pm.Uniform('rho',-.9,.9)
    error_s = pm.Normal('error_s',mu=0.0,sd=1)
    Vt = EulerMaruyama('Vt', 1.0, lin_sde, [sigma_v, rho, alpha_v, beta_v, error_s], shape=len(returns),testval=np.repeat(.01,len(returns)))
    expected = pm.Deterministic('expected', np.sqrt(Vt) * error_s)
    Yt = pm.Normal('Yt', mu=expected, sd=0.01, observed=returns)
with sp500_model:
    trace = pm.sample(2000)  # assumed: the plotting code below uses `trace` without it being defined in the paste
pm.traceplot(trace, [alpha_v,beta_v,sigma_v,rho]);
fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(returns)
ax.plot(trace['Vt',::5].T, 'r', alpha=.03);
ax.set(xlabel='time', ylabel='returns')
ax.legend(['S&P500', 'stoch vol']);
Going way back to your original question, this is possible by
1, Cast your observed into a theano shared variable.
2, Concatenate the observed with the latent process.
And then pass it to the observed in MvGaussianRandomWalk.
Can’t you use a pm.Potential for this?
I may have found a way to work around the problem but am getting errors. I defined \epsilon_t^s by using pm.Normal like before. I pass this into the drift coefficient of the variance process SDE, as before. For the return process, I can’t use pm.Deterministic because I can’t pass in the observed returns. But what if I specify the return process as an SDE with 1) \epsilon_t^s passed into the drift coefficient and 2) a zero diffusion coefficient?
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
from pymc3.distributions.timeseries import EulerMaruyama
returns = np.genfromtxt(pm.get_data("SP500.csv"))
def Vt_sde(Vt, sigma_v, rho, beta_v, alpha_v, error_s):
    return alpha_v + beta_v * Vt + sigma_v * np.sqrt(Vt) * rho * error_s, sigma_v * np.sqrt(Vt) * np.sqrt(1-rho*rho)
def Yt_sde(Vt, error_s):
    return np.sqrt(Vt) * error_s,0
with pm.Model() as sp500_model:
    alpha_v=pm.Normal('alpha',0,sd=100)
    beta_v=pm.Normal('beta',0,sd=10)
    sigma_v=pm.InverseGamma('sigmaV',2.5,0.1)
    rho=pm.Uniform('rho',-.9,.9)
    error_s = pm.Normal('error_s',mu=0.0,sd=1)
    Vt = EulerMaruyama('Vt', 1.0, Vt_sde, [sigma_v, rho, alpha_v, beta_v, error_s], shape=len(returns),testval=np.repeat(.01,len(returns)))
    Yt = EulerMaruyama('Yt', 1.0,Yt_sde, [Vt, error_s], observed=returns)
Unfortunately I am getting the following error…
TypeError: Yt_sde() takes 2 positional arguments but 3 were given
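A hedged guess at the cause, based on how EulerMaruyama evaluates the drift/diffusion callable: the state of the process is passed as the first argument, followed by the entries of the parameter list, so Yt_sde receives three arguments. A sketch of the corrected signature (untested against the rest of the model):
```python
def Yt_sde(Yt, Vt, error_s):
    # The first argument is the Yt state itself, even though the
    # drift here does not actually depend on it.
    return np.sqrt(Vt) * error_s, 0
```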
I’m not sure how pm.Potential would help here. I can’t pass an observed variable into pm.Potential
Is there a reason why pm.Deterministic does not allow you to pass in an observed variable? Are there any workarounds?
This might work. I can work out the log probability of the posterior for this problem, which I can implement as a class similar to MvGaussianRandomWalk. How would I implement an example like what you've described for a simple MvGaussianRandomWalk?
Thanks everyone for the help; I really hope I can get this model implemented somehow!
Any help on this? I’m at a dead end.
pymc3 needs a likelihood function to sample, and if you observed a function of some random variables (i.e., observing a Deterministic), the likelihood expression is not trivial. A workaround is to use a normal distribution with the deterministic output as mu, a small sd (0.1 to 1), and observed=data.
So you can first do s_data = theano.shared(data), and then after you write down your model, you use theano.tensor.stack or theano.tensor.concatenate to concatenate the latent process with s_data and feed it to MvGaussianRandomWalk.
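A rough sketch of what that suggestion might look like (hypothetical variable names and priors, untested; the 2x2 covariance construction from rho is my own filler, not something posted in this thread):

import numpy as np
import pymc3 as pm
import theano
import theano.tensor as tt
from pymc3.distributions.timeseries import MvGaussianRandomWalk

returns = np.genfromtxt(pm.get_data("SP500.csv"))
s_data = theano.shared(returns)                          # step 1: observed returns as a shared variable

with pm.Model() as joint_model:
    rho = pm.Uniform('rho', -0.9, 0.9)
    sigma_s = pm.HalfNormal('sigma_s', sd=1.0)
    sigma_v = pm.HalfNormal('sigma_v', sd=1.0)
    # 2x2 innovation covariance built from the correlation rho
    cov = tt.stacklists([[sigma_s**2, rho * sigma_s * sigma_v],
                         [rho * sigma_s * sigma_v, sigma_v**2]])
    latent_v = pm.Flat('latent_v', shape=len(returns))    # the latent variance-side process
    # step 2: concatenate the latent column with the observed column
    both = tt.stack([s_data, latent_v], axis=1)
    # step 3: feed the stacked series to MvGaussianRandomWalk as "observed"
    walk = MvGaussianRandomWalk('walk', mu=np.zeros(2), cov=cov,
                                shape=(len(returns), 2), observed=both)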
I tried implementing the model without correlation as a starting point, but am getting a "Bad initial energy" error. I tried setting init='adapt_diag' in pm.sample but still get the error. I even tried changing my priors to ensure they are positive.
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
from pymc3.distributions.timeseries import EulerMaruyama

returns = np.genfromtxt(pm.get_data("SP500.csv"))

def Vt_sde(Vt, sigmaV, kappa, theta):
    return kappa * (theta - Vt), sigmaV * np.sqrt(Vt)

with pm.Model() as sp500_model:
    theta = pm.HalfNormal('theta', sd=0.4)
    kappa = pm.HalfNormal('kappa', sd=2)
    sigmaV = pm.InverseGamma('sigmaV', 2.5, 0.1)
    Vt = EulerMaruyama('Vt', 1.0, Vt_sde, [sigmaV, kappa, theta],
                       shape=len(returns), testval=np.repeat(.01, len(returns)))
    volatility_process = pm.Deterministic('volatility_process', np.sqrt(Vt))
    r = pm.Normal('r', mu=0, sd=volatility_process, observed=returns)

with sp500_model:
    trace = pm.sample(2000, chains=1, cores=1)
ValueError: Bad initial energy: inf. The model might be misspecified.
OK, I think I solved that issue: I need to ensure Vt is positive before taking the square root. Now I just need to get the correlation worked in.
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
from pymc3.distributions.timeseries import EulerMaruyama

returns = np.genfromtxt(pm.get_data("SP500.csv"))

def Vt_sde(Vt, sigmaV, kappa, theta):
    return kappa * (theta - Vt), sigmaV * np.sqrt(np.abs(Vt))

with pm.Model() as sp500_model:
    theta = pm.HalfNormal('theta', sd=0.4)
    kappa = pm.HalfNormal('kappa', sd=2)
    sigmaV = pm.InverseGamma('sigmaV', 2.5, 0.1)
    Vt = EulerMaruyama('Vt', 1.0, Vt_sde, [sigmaV, kappa, theta],
                       shape=len(returns), testval=np.repeat(.01, len(returns)))
    volatility_process = pm.Deterministic('volatility_process', np.sqrt(np.abs(Vt)))
    r = pm.Normal('r', mu=0, sd=volatility_process, observed=returns)

with sp500_model:
    trace = pm.sample(2000, chains=1, cores=1)
    pm.traceplot(trace, [sigmaV, kappa, theta])
Even with that fix, the output doesn’t make any sense.
Well, since I can't get this model working even when the volatility process is independent, perhaps we can try a model that does work. This is a similar model, but instead of the variance being stochastic, the log-variance follows an AR(1) process.
Y_t are the observed returns and V_t is the latent variance process:
Y_t=\sqrt{V_{t-1}}\epsilon_t^s
log(V_t)=\alpha_v+\beta_v \log(V_{t-1})+\sigma_v\epsilon_t^v
For independent epsilons, the implementation is straightforward. The log-variance process can be implemented using either the AR class or EulerMaruyama.
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
from pymc3.distributions.timeseries import EulerMaruyama

returns = np.genfromtxt(pm.get_data("SP500.csv"))

def logV_sde(logV, sigmaV, beta, alpha):
    return alpha + beta * logV, sigmaV

with pm.Model() as sp500_model:
    alpha = pm.Normal('alpha', 0, sd=100)
    beta = pm.Normal('beta', 0, sd=10)
    sigmaV = pm.InverseGamma('sigmaV', 2.5, 0.1)
    logV = pm.AR('logV', [alpha, beta], sd=sigmaV**.5, constant=True, shape=len(returns))
    # logV = EulerMaruyama('logV', 1.0, logV_sde, [sigmaV, beta, alpha],
    #                      shape=len(returns), testval=np.repeat(.01, len(returns)))
    volatility_process = pm.Deterministic('volatility_process', pm.math.exp(.5 * logV))
    r = pm.Normal('r', mu=0, sd=volatility_process, observed=returns)

with sp500_model:
    trace = pm.sample(2000, chains=1, cores=1)
    pm.traceplot(trace, [alpha, beta, sigmaV]);

fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(returns)
ax.plot(trace['volatility_process', ::5].T, 'r', alpha=.03);
ax.set(xlabel='time', ylabel='returns')
ax.legend(['S&P500', 'stoch vol']);
The results make sense, and the volatility trace is similar to what we see in the PyMC3 stochastic volatility example.
Using this model as a starting point, I'd like to introduce a correlation \rho between \epsilon_t^s and \epsilon_t^v. I've tried a workaround discussed earlier in this thread, but the introduction of a third random variable produces nonsensical results. I think the proper solution is to define the multivariate posterior and pass in the observed returns. This I don't know how to do.
Oh yes, I remember @twiecki mentioned somewhere that if you have a doubly stochastic model (i.e., a random walk of the variance plus a random walk of the mean) it is extremely difficult to do inference.
I don't think this particular problem is that difficult; it can be done using Metropolis, but that tends to be slow. I think NUTS can handle it if we specify the model correctly.
Here's how I think we could set up the model:
• Create a new timeseries class similar to MvGaussianRandomWalk (a rough sketch follows this list).
• Modify the logp function to accommodate the bivariate posterior of Y and V. In the example of the log-vol model above, the log-vol follows an AR(1) process, so the code will look similar to the AR1 class.
• Once the logp function is defined correctly, pass in the observed return series concatenated with the latent state.
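A rough, untested sketch of what such a class might look like (the class name, parameterization, and logp decomposition below are my own illustration of the bullet points, not existing PyMC3 code):

import pymc3 as pm
import theano.tensor as tt

class CorrelatedStochVol(pm.Continuous):
    """Joint log-density of the latent log-variance path, with the observed
    returns folded into logp (sketch only, following the bullets above)."""

    def __init__(self, alpha, beta, sigma_v, rho, returns, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = tt.as_tensor_variable(alpha)
        self.beta = tt.as_tensor_variable(beta)
        self.sigma_v = tt.as_tensor_variable(sigma_v)
        self.rho = tt.as_tensor_variable(rho)
        self.returns = tt.as_tensor_variable(returns)

    def logp(self, logV):
        # AR(1)-style innovations of the log-variance process
        eps_v = (logV[1:] - self.alpha - self.beta * logV[:-1]) / self.sigma_v
        vol = tt.exp(0.5 * logV[:-1])
        # log p(logV_t | logV_{t-1}): standard normal on eps_v plus the Jacobian term -log(sigma_v)
        logp_v = tt.sum(pm.Normal.dist(0.0, 1.0).logp(eps_v)) - (logV.shape[0] - 1) * tt.log(self.sigma_v)
        # log p(Y_t | logV, eps_v): returns conditionally normal, with the correlated mean shift
        mu_y = vol * self.rho * eps_v
        sd_y = vol * tt.sqrt(1.0 - self.rho**2)
        logp_y = tt.sum(pm.Normal.dist(mu=mu_y, sd=sd_y).logp(self.returns[1:]))
        return logp_v + logp_y

In a model block this would be used roughly as logV = CorrelatedStochVol('logV', alpha, beta, sigma_v, rho, returns, shape=len(returns), testval=...), with the returns passed in as data rather than through observed.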
Alternatively, we can take the approach outlined in this paper: in section 2.3 the authors consider the same log-vol model with correlated errors and adopt a Metropolis scheme.
Here's a possible workaround, though I still prefer the type of solution involving a new multivariate class:
Is it possible to define a Deterministic that's a recursive function? This would be similar to the vol-update function in the GARCH class.
Consider the model again:
Y_t=\sqrt{V_{t-1}}\epsilon_t^s
log(V_t)=\alpha_v+\beta_v \log(V_{t-1})+\sigma_v\epsilon_t^v
\epsilon_t^s and \epsilon_t^v have correlation \rho
Set \epsilon_t^s=\rho\epsilon_t^v+\sqrt{1-\rho^2}\epsilon_t^n, where \epsilon_t^v and \epsilon_t^n are independent standard normals. Then we have
Y_t=\sqrt{V_{t-1}}[\rho\epsilon_t^v+\sqrt{1-\rho^2}\epsilon_t^n]
log(V_t)=\alpha_v+\beta_v \log(V_{t-1})+\sigma_v\epsilon_t^v
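As a quick sanity check on this decomposition (standard algebra, not from the original thread): with \epsilon_t^v and \epsilon_t^n independent standard normals,
Var(\epsilon_t^s) = \rho^2 + (1-\rho^2) = 1 \qquad \text{and} \qquad Cov(\epsilon_t^s, \epsilon_t^v) = \rho,
so \epsilon_t^s is again standard normal and has correlation \rho with \epsilon_t^v, as required.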
Now I can set up the model as follows
alpha = pm.Normal('alpha', 0, sd=100)
beta = pm.Normal('beta', 0, sd=10)
sigma_v = pm.InverseGamma('sigma_v', 2.5, 0.1)
rho = pm.Uniform('rho', -.9, .9)
error_v = pm.Normal('error_v', mu=0, sd=1)
# logV(t) is now a deterministic function of error_v and the priors, but it is recursive
# since it also depends on logV(t-1) -- this is the missing step
volatility = pm.Deterministic('volatility', pm.math.exp(.5 * logV))
Yt = pm.Normal('Yt', mu=volatility * rho * error_v, sd=volatility * np.sqrt(1 - rho * rho), observed=returns)
would something like this work?
I think I was able to implement the workaround, but the sampling is extremely slow.
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
from pymc3.distributions.timeseries import EulerMaruyama  # imported in the original post, unused here
import theano.tensor as tt
from theano import scan

returns = np.genfromtxt(pm.get_data("SP500.csv"))

def get_volatility(x, initial_vol, alpha, beta, sigma):
    x = x[:-1]

    def volatility_update(x, vol, a, b, sigma):
        return a + b * vol + sigma * x

    vol, _ = scan(fn=volatility_update,
                  sequences=[x],
                  outputs_info=[initial_vol],
                  non_sequences=[alpha, beta, sigma])
    return tt.concatenate([[initial_vol], vol])

with pm.Model() as sp500_model:
    alpha = pm.Normal('alpha', 0, sd=100)
    beta = pm.Normal('beta', 0, sd=10)
    sigma_v = pm.InverseGamma('sigma_v', 2.5, 0.1)
    error_v = pm.Normal('error_v', mu=0, sd=1, shape=len(returns))
    rho = pm.Uniform('rho', -.9, .9)
    logV = get_volatility(error_v, .01, alpha, beta, sigma_v)
    volatility_process = pm.Deterministic('volatility_process', pm.math.exp(.5 * logV))
    r = pm.Normal('r', mu=volatility_process * rho * error_v,
                  sd=volatility_process * np.sqrt(1 - rho * rho), observed=returns)

with sp500_model:
    trace = pm.sample(2000, chains=1, cores=1)
    pm.traceplot(trace, [alpha, beta, sigma_v]);

fig, ax = plt.subplots(figsize=(14, 8))
ax.plot(returns)
ax.plot(trace['volatility_process', ::5].T, 'r', alpha=.03);
ax.set(xlabel='time', ylabel='returns')
ax.legend(['S&P500', 'stoch vol']);
Were you ever able to get this code up and running? The thread ended without a known solution.
No, not within PyMC3.
Did you make it work with another package? Would be interested to hear how it went.
Regarding @jungpenglao's comment that it's not trivial to allow observed Deterministic variables, what's the blocker?
Say we observe \tilde{x}_i = f(x, a), where x \sim N(0, b^2), a and b are stochastic variables, and \tilde{x} is the observed variable. If we invert f we can write f^{-1}(\tilde{x}_i, a) \sim N(0, b^2). Given values for a and b we could calculate the likelihood easily, and also its partial derivatives with respect to a and b (or am I missing something?).
So is it possible to enter a Deterministic variable as the observed value of a stochastic variable? I might be missing something here.
When f is easy to work with and can be automatically inverted, then perhaps that’s the case. There are many situations where it’s not feasible to invert f (or even tell if f is invertible to begin with), so getting a general purpose solution to this is out of reach at the moment.
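For reference, the Jacobian factor is the piece that makes this non-trivial; a sketch of the standard change-of-variables formula (generic textbook result, not PyMC3-specific), assuming f is invertible and differentiable in its first argument:
p(\tilde{x}_i \mid a, b) = \mathcal{N}\big(f^{-1}(\tilde{x}_i, a); 0, b^2\big) \, \big| \partial f^{-1}(\tilde{x}_i, a) / \partial \tilde{x}_i \big|
So the likelihood is not just the normal density evaluated at f^{-1}(\tilde{x}_i, a); the derivative term also has to be derived automatically for an arbitrary f, which is part of why a general-purpose solution for observed Deterministic variables is out of reach.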
|
A primal-dual symmetric relaxation for homogeneous conic systems
We address the feasibility of the pair of alternative conic systems of constraints Ax = 0, x \in C, and -A^T y \in C^*, defined by an m by n matrix A and a cone C in the n-dimensional Euclidean space. We reformulate this pair of conic systems as a primal-dual pair of conic programs. Each of the conic programs corresponds to a natural relaxation of each of the two conic systems. When C is a self-scaled cone with a known self-scaled barrier, the conic programming reformulation can be solved via interior-point methods. For a well-posed instance A, a strict solution to one of the two original conic systems can be obtained in a number of interior-point iterations proportional to Renegar's condition number of the matrix A, namely, the reciprocal of the relative distance from A to the set of ill-posed instances.
|
# In the x-y plane, the area of the region bounded by the graph of |x + y| + |x - y| = 4 is
In the x-y plane, the area of the region bounded by the graph of |x + y| + |x - y| = 4 is
A. 8
B. 12
C. 16
D. 20
E. 24
Math Expert
papillon86 wrote:
In x-y plane, the area of the region bounded by the graph of |x+y| + |x-y| = 4 is
a) 8
b) 12
c) 16
d) 20
Need help in solving equations involving Mod......
help?
OK, there can be 4 cases:
|x+y| + |x-y| = 4
A. x+y+x-y = 4 --> x=2
B. x+y-x+y = 4 --> y=2
C. -x-y +x-y= 4 --> y=-2
D. -x-y-x+y=4 --> x=-2
The area bounded by the 4 graphs x=2, x=-2, y=2, y=-2 will be a square with side 4, so the area will be 4*4=16.
[Attachment: graph of the square bounded by x = ±2 and y = ±2]
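A quick numerical cross-check of that answer (a hypothetical snippet, not part of the original post), counting grid cells inside the region |x + y| + |x - y| <= 4:

import numpy as np

xs = np.linspace(-5, 5, 2001)
X, Y = np.meshgrid(xs, xs)
inside = np.abs(X + Y) + np.abs(X - Y) <= 4
cell_area = (xs[1] - xs[0]) ** 2
print(inside.sum() * cell_area)   # approximately 16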
##### General Discussion
Manager
Bunuel wrote:
papillon86 wrote:
In x-y plane, the area of the region bounded by the graph of |x+y| + |x-y| = 4 is
a) 8
b) 12
c) 16
d) 20
Need help in solving equations involving Mod......
help?
I've never seen such kind of question in GMAT before.
OK there can be 4 cases:
|x+y| + |x-y| = 4
A. x+y+x-y = 4 --> x=2
B. x+y-x+y = 4 --> y=2
C. -x-y +x-y= 4 --> y=-2
D. -x-y-x+y=4 --> x=-2
The area bounded by 4 graphs x=2, x=-2, y=2, y=-2 will be square with the side of 4 so the area will be 4*4=16.
Why can't we consider (4,0) and (0,4) as points on the graph? Then the area would be different, right?
Math Expert
srini123 wrote:
Why cant we consider (4,0) and (0,4) as points on graph ? then area would be different... , right?
First of all, we are not considering points separately: in the x-y plane the solutions of the equation form lines, and we get the figure bounded by these 4 lines. The equations for the lines are:
x=2
x=-2
y=2
y=-2
These lines will make a square with side 4, hence area 4*4=16.
Second: the points (4,0) and (0,4) don't satisfy |x+y| + |x-y| = 4.
Manager
Thanks Bunuel, I used a similar method for a similar question and got the wrong answer.
The question was: what is the area bounded by the graph of $$|x/2| + |y/2| = 5$$?
I got 400, since
x=10
x=-10
y=10
y=-10
Isn't the area 400? The answer given was 200, please explain.
Math Expert
I think this one is different.
$$|\frac{x}{2}| + |\frac{y}{2}| = 5$$
After solving you'll get equation of four lines:
$$y=-10-x$$
$$y=10+x$$
$$y=10-x$$
$$y=x-10$$
These four lines will also make a square, BUT in this case the diagonal will be 20 so the $$Area=\frac{20*20}{2}=200$$. Or the $$Side= \sqrt{200}$$, area=200.
If you draw these four lines you'll see that the figure (square) bounded by them is rotated 45 degrees and centered at the origin, so the side will not be 20.
Also, you made a mistake in solving the equation: the lines x=10, x=-10, y=10, y=-10 are not correct. You should have the equations written above.
In our original question, when we were solving the equation |x+y| + |x-y| = 4, each time x or y cancelled out, so we got equations of the type x = some value (twice) and y = some value (twice). These equations give lines parallel to the Y or X axis respectively, so the figure bounded by them is a "horizontal" square (in your question it's a "diagonal" square).
Hope it's clear.
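The same kind of numerical cross-check (hypothetical, not from the original post) confirms the area of 200 for the rotated square |x/2| + |y/2| <= 5:

import numpy as np

xs = np.linspace(-12, 12, 2001)
X, Y = np.meshgrid(xs, xs)
inside = np.abs(X / 2) + np.abs(Y / 2) <= 5
cell_area = (xs[1] - xs[0]) ** 2
print(inside.sum() * cell_area)   # approximately 200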
Manager
Thanks Bunuel , once again wonderful explanation +1 Kudos..
have a good day...
Retired Moderator
|x-y| = x-y if x-y > 0, and |x-y| = -(x-y) if x-y < 0; similarly for |x+y|. Taking the four sign combinations:
x+y + x-y = 4 => x = 2
x+y - (x-y) = 4 => y = 2
-(x+y) + x-y = 4 => y = -2
-(x+y) - (x-y) = 4 => x = -2
Intern
Given |x-y| + |x+y| = 4
I don't understand why |x-y| and |x+y| can't be 1 and 3 instead of 2 and 2 (which again sums to 4).
Can any one please explain this to me?
Thanks & Regards,
Vinu
Veritas Prep GMAT Instructor
Look at the solution given by Bunuel above. When you solve it, you get four equations.
One of them is x = 2 which means that x = 2 and y can take any value. If y = 1, |x-y| = 1 and |x+y| = 3.
For different values of y, |x-y| and |x+y| will get different values. We are not discounting any of them.
Math Expert
prashantbacchewar wrote:
In the X-Y plane, the area of the region bounded by the graph of |x + y| + |x – y| = 4 is
(1) 8
(2) 12
(3) 16
(4) 20
(5) 24
Some questions on the same subject to practice:
m06-5-absolute-value-108191.html
graphs-modulus-help-86549.html
m06-q5-72817.html
if-equation-encloses-a-certain-region-110053.html
Hope it helps.
Manager
Hi, can this be solved by graphing? If yes, how do we graph an equation with 2 mod parts?
Math Expert
Yes, it can be done by graphing. |x+y| + |x-y| = 4 can expand in four different ways:
A. x+y+x-y = 4 --> x=2
B. x+y-x+y = 4 --> y=2
C. -x-y +x-y= 4 --> y=-2
D. -x-y-x+y=4 --> x=-2
So you can draw all these four lines x=2, x=-2, y=2, y=-2 to get a square with the side of 4:
[Attachment: graph of the square bounded by x = ±2 and y = ±2]
See more examples here:
m06-5-absolute-value-108191.html
graphs-modulus-help-86549.html
m06-q5-72817.html
if-equation-encloses-a-certain-region-110053.html
Hope it helps.
Senior Manager
(1) Derive all equations from |x+y| + |x-y| = 4:
x+y+x-y = 4 ==> x=2
x+y-x+y = 4 ==> y=2
-x-y+x-y = 4 ==> y=-2
-x-y-x+y = 4 ==> x=-2
(2) Notice you have formed a square region bounded by the lines x=2, y=2, y=-2 and x=-2.
(3) Area = 4*4 = 16
For more detailed solutions for similar question types:
Senior Manager
eaakbari wrote:
Quote:
OK there can be 4 cases:
|x+y| + |x-y| = 4
A. x+y+x-y = 4 --> x=2
B. x+y-x+y = 4 --> y=2
C. -x-y +x-y= 4 --> y=-2
D. -x-y-x+y=4 --> x=-2
Any absolute value such as |x| = 5 could mean that x = 5 or x = -5.
Derive both (-) and (+) possibilities.
For the problem |x+y| + |x-y| = 4:
|x+y| gives two possibilities, -(x+y) and (x+y).
|x-y| gives two possibilities, -(x-y) and (x-y).
This is the reason why we have 4 derived equations.
(x+y) + (x-y) = 4
(x+y) - (x-y) = 4
-(x+y) + (x-y) = 4
-(x+y) - (x-y) = 4
Just simplify those...
If you want more practice on this question type: http://burnoutorbreathe.blogspot.com/2012/12/absolute-values-solving-for-area-of.html
VP
Hi Bunuel, what is the best approach for finding the points of intersection in order to draw the square?
Math Expert
I'd say substituting x=0 and y=0 in the equations of lines and making a drawing.
Senior Manager
The side of the square can't be 4; instead it's sqrt(8).
Math Expert
The side of the square IS 4:
[Attachment: graph showing the square with side 4]
Manager
Bunuel,
wouldn't it be sufficient to look at only two cases?
(x+y) + (x-y) = 4 ==> x=2
(x+y) - (x-y) = 4 ==> y=2
Which would give us 2*2 * 4 = 16?
|
A few months ago I went to the .NET Rocks! (http://www.dotnetrocks.com/) Roadshow event here in Phoenix. It was a great show and I ended up winning "some really cool PDA/Phone device from Cingular". Apparently, Cingular hadn't even finalized what the device would be so the winners were told we would find something out around the end of December. It was suggested that the device might be an iPAQ 67xx (http://www.engadget.com/2005/09/23/hp-ipaq-hw6715-in-the-flesh/).
Well, December turned into February and the iPaq turned into the Cingular 2125. I decided to unlock it so I could try it out with my T-Mobile account. It worked great, but I was really looking forward to a pocket PC Phone with a QWERTY keyboard. So, I decided to sell it on eBay.
After I sell it I will be buying a T-Mobile MDA. I tried one out at the local T-Mobile store and it is a lot smaller than I expected and the slide-out keyboard seems to work pretty nice.
So, if you are looking for an unlocked Cingular 2125, head on over to my auction and bid!
|
# Use the limit definition of the derivative to find the derivative
###### Question:
1. (10 points) Use the limit definition of the derivative, $f'(x)=\lim_{h \to 0}\frac{f(x+h)-f(x)}{h}$, to find the derivative of the given function $f$ at the indicated point.
|
# Does the photon obey the Uncertainty Principle?
1. Jul 23, 2009
### sndtam
A photon has no mass. So when we apply the Uncertainty Principle to a photon,
position uncertainty × velocity uncertainty × mass ≥ Planck's constant.
When we set the mass to zero, the left side is zero and the inequality implies 0 ≥ h, i.e. h would have to be non-positive. But h is positive.
Please explain this to me.
2. Jul 23, 2009
### xepma
The uncertainty relation follows from the commutation relation of the momentum and position operators. The photon does not carry a mass, but it does carry a momentum.
3. Jul 23, 2009
### humanino
That is not the photon's momentum; p = mv is only a low-velocity approximation for a massive particle. The correct dispersion relation for a massless particle such as the photon is
$E=pc=\hbar\omega=\hbar c k$
Last edited: Jul 23, 2009
4. Jul 23, 2009
### ZapperZ
Staff Emeritus
The single-slit diffraction phenomenon is the clearest example of photons "obeying" the HUP.
Zz.
5. Jul 23, 2009
### sndtam
Thank you
6. Jul 23, 2009
### Naty1
As far as we know, everything obeys quantum principles...including HUP...
7. Jul 23, 2009
### maverick_starstrider
I refuse to obey your quantum principles ;)
8. Jul 23, 2009
### sndtam
Thanks a lot
9. Jul 24, 2009
### scott.ager
A smart dope is in direct violation of my HUP.
- Werner
|
# Similarities / diffs between diffusion & wave propagation
Hi,
I'm a second year undergrad and we've covered the wave equation,
(1) \nabla^{2}\Psi = \frac{1}{c^{2}}\frac{\partial^2 \Psi}{\partial t^2}
and the diffusion (heat) equation,
(2) D\nabla^{2}u = \frac{\partial u}{\partial t}
in our differential equations course. Both diffusion and wave propagation have wave-like solutions, for example
(3) u = C e^{-\sqrt{\omega/2D}\, x} \sin{(\sqrt{\omega/2D}\, x - \omega t)}
(4) \Psi = \Psi_{0} e^{i(kx-\omega t)}
but are quite different phenomena. Could someone briefly explain the similarities/ differences in the phenomena and the solutions and how this relates to the differential equations please? Thanks.
Chandra Prayaga
First, the difference in the equations themselves: The diffusion equation is first order in time, whereas the wave equation is second order. Now the solutions. The solution (3) to the diffusion equation is not a propagating wave, and is not a solution to the wave equation. Similarly, the solution (4) to the wave equation is not a solution of the diffusion equation.
That's great, thanks, but I thought (3) is a propagating wave that is attenuated with distance (a propagating wave enveloped by a decaying exponential). Also, I was wondering what the physical difference between the two phenomena is that leads to the difference in the equations, i.e. why we don't just have a simple propagating wave solution for heat diffusion?
Chandra Prayaga
The wave equation (1) does not allow an attenuating solution. You can check that by substituting the attenuating function (3) into (1). Any solution of the wave equation is of the form f(x ± vt). The decaying solution is not of that form.
The difference between the two equations (and their solutions) is in time-reversal symmetry:
In equation (1), if you change t to - t, the equation is the same. That translates to the fact (easy to check) that both a forward propagating wave (unattenuated), and a backward propagating wave are solutions of the same equation, and are actually possible phenomena. You see both waves happening all the time.
The diffusion equation (2), on the other hand, is not invariant under time reversal. It represents a macroscopic irreversible process, for example, heat conduction from a high temperature region to a low temperature region, spreading of a drop of ink through a body of water, etc. these processes never happen in the reverse direction. These are inherently dissipative processes. The reverse processes (conduction from low to high temperature, the ink drop gathering back together) would violate the second law of thermodynamics. They are not solutions of the diffusion equation.
The attenuating "wave" is actually a dissipative process, in which energy is transferred from the "wave" into several other modes. That process is also irreversible, and is a solution of the diffusion equation.
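As a quick check of the substitution argument above, here is a short sympy sketch (my own, not from the thread) verifying that solution (3) satisfies the diffusion equation (2) but not the wave equation (1):

import sympy as sp

x, t, C, D, w, c = sp.symbols('x t C D w c', positive=True)
k = sp.sqrt(w / (2 * D))
u = C * sp.exp(-k * x) * sp.sin(k * x - w * t)

# Diffusion equation residual D*u_xx - u_t simplifies to zero
print(sp.simplify(D * sp.diff(u, x, 2) - sp.diff(u, t)))        # 0

# Wave equation residual u_xx - u_tt/c**2 does not vanish identically
print(sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2))  # nonzero expression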
|
# College Math Teaching
## July 1, 2013
### Mathematics: aids the conceptual understanding of elementary physics
Filed under: applications of calculus, editorial, elementary mathematics, pedagogy, physics — collegemathteaching @ 4:52 pm
I was blogging about the topic of how “classroom knowledge” turns into “walking around knowledge” and came across an “elementary physics misconceptions” webpage at the University of Montana. It is fun, but it helped me realize how easy things can be when one thinks mathematically.
This becomes very easy if one does a bit of mathematics. Let $m$ represent the mass of the object; $F = 10 = ma$ implies that $a = \frac{10}{m}$ which isn’t that important; we’ll just use $a$. Now putting into vector form we have $\vec{a}(t) = a \vec{i}, \vec{v}(0) = V_i \vec{j}, \vec{s}(0) = \vec{0}$. By elementary integration, obtain $\vec{v} = at \vec{i} + V_i \vec{j}$ and integrate again to obtain $\vec{s}(t) = \frac{1}{2}at^2\vec{i}+(V_i)t\vec{j}$ which has parametric equations $x(t) = \frac{a}{2}t^2, y(t) = V_i t$ which has a “sideways parabola” as a graph.
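A short sympy sketch (mine, not part of the original post) reproduces the integration above and confirms the sideways parabola:

import sympy as sp

t, a, Vi = sp.symbols('t a V_i', positive=True)

# integrate the constant acceleration a (x-direction) twice, with v(0) = V_i in y and s(0) = 0
x = sp.integrate(sp.integrate(a, t), t)   # a*t**2/2
y = sp.integrate(Vi, t)                   # V_i*t

# eliminate t: x = (a/2) * (y/V_i)**2, i.e. a parabola opening in the x-direction
print(sp.simplify(x - (a / 2) * (y / Vi) ** 2))   # 0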
Let’s look at another example:
So what is going on? The net force is $F = \frac{d}{dt}(mv) = \frac{dm}{dt}v + m\frac{dv}{dt} = 0$. The first term is the thrust and acts against the direction of acceleration. So we have $1000 = m\frac{dv}{dt}$ which, upon integration (treating $m$ as roughly constant over the interval), implies that $\frac{1000}{m} t + v_0 = v(t)$, and so we see that the rocket continues to speed up at essentially constant acceleration.
These problems are easier with mathematics, aren’t they? 🙂
|