[Solution] Air Conditioner Temperature solution codechef
There are three people sitting in a room – Alice, Bob, and Charlie. They need to decide on the temperature to set on the air conditioner. Everyone has a demand each:
• Alice wants the temperature to be at least A degrees.
• Bob wants the temperature to be at most B degrees.
• Charlie wants the temperature to be at least C degrees.
Can they all agree on some temperature, or not?
Input Format
• The first line of input will contain a single integer T, denoting the number of test cases.
• Each test case consists of a single line which contains three integers – A, B, C.
Output Format
For each test case, output on a new line, “Yes” or “No”. “Yes”, if they can decide on some temperature which fits all their demands. Or “No”, if no temperature fits all their demands.
You may print each character of the string in either uppercase or lowercase (for example, the strings NO, nO, No, and no will all be treated as identical).
Constraints
• 1 \leq T \leq 100
• 1 \leq A, B, C \leq 100
Sample Input
4
30 35 25
30 35 40
30 35 35
30 25 35
Sample Output
Yes
No
Yes
No
Explanation
Test Case 1: Alice wants the temperature to be \ge 30, Bob wants it to be \le 35, and Charlie wants it to be \ge 25. The temperatures 30, 31, 32, 33, 34, 35 all satisfy all their demands. So they can choose any of these 6 temperatures, and so the answer is “Yes”.
Test Case 2: Alice wants the temperature to be \ge 30, Bob wants it to be \le 35, and Charlie wants it to be \ge 40. A number can’t be both \ge 40, and \le 35. So there is no temperature that satisfies all their demands. So the answer is “No”.
Test Case 3: Alice wants the temperature to be \ge 30, Bob wants it to be \le 35, and Charlie wants it to be \ge 35. The temperature 35 satisfies all their demands. So the answer is “Yes”.
Test Case 4: Alice wants the temperature to be \ge 30, Bob wants it to be \le 25, and Charlie wants it to be \ge 35. A number can’t be both \ge 30, and \le 25. So there is no temperature that satisfies all their demands. So the answer is “No”.
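One possible implementation, as a sketch in Java: a valid temperature exists exactly when the larger of the two lower bounds still fits under Bob's upper bound, i.e. max(A, C) ≤ B.
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int t = in.nextInt();                 // number of test cases
        while (t-- > 0) {
            int a = in.nextInt();             // Alice: temperature >= A
            int b = in.nextInt();             // Bob: temperature <= B
            int c = in.nextInt();             // Charlie: temperature >= C
            // A common temperature exists iff the larger lower bound is at most B
            System.out.println(Math.max(a, c) <= b ? "Yes" : "No");
        }
    }
}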
|
# Reddish tint with spectral renderer
I'm trying to implement a spectral path tracer and comparing results from my program with renders of the same scene done with pbrt and mitsuba. The scene is just some spheres in a box with one point light. All the surfaces have simple Lambertian BRDFs. The light has "uniform" spectral intensity (i.e., every wavelength has same power as others), same holds for surface reflectances. When I try to render the scene in mitsuba I get a "grayscale" image, as expected:
However, if pbrt is used to render the same scene, the resulting image has a reddish tint:
With my program I get a similar result:
After some digging through the source code, I believe the difference lies in Mitsuba multiplying the spectral intensities by the D65 spectrum.
Is it reasonable to expect the image of the described scene to be "in grayscale"? If so, are light intensity spectra indeed required to be multiplied by the D65 spectrum? Again, if so, why doesn't pbrt do it?
Is it reasonable to expect the image of the described scene to be "in grayscale"? If so, are light intensity spectra indeed required to be multiplied by the D65 spectrum?
My expectation from a physically-based spectral renderer is that it would
1. compute the physical spectra
2. use the CIE color matching functions to relate that to a quantified color space (eg CIEXYZ)
3. convert that into the destination color space.
D65 should not appear anywhere until step 3, and only if we actually render into sRGB (or a similar color space).
A flat spectrum translates to xy = (0.33, 0.33), i.e. XYZ proportional to (1, 1, 1), which is linear RGB ≈ (1.20, 0.95, 0.91) prior to gamma compression. So the "red tint" is the expected result following the above procedure.
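To make step 3 concrete, here is a small sketch in Java of the XYZ-to-linear-sRGB conversion; the matrix coefficients are the usual IEC 61966-2-1 sRGB values, not taken from either renderer.
public class XyzToLinearSrgb {
    public static void main(String[] args) {
        // An equal-energy (flat) spectrum gives XYZ proportional to (1, 1, 1).
        double x = 1.0, y = 1.0, z = 1.0;
        // Standard XYZ -> linear sRGB matrix (D65 white point).
        double r =  3.2406 * x - 1.5372 * y - 0.4986 * z;
        double g = -0.9689 * x + 1.8758 * y + 0.0415 * z;
        double b =  0.0557 * x - 0.2040 * y + 1.0570 * z;
        System.out.printf("linear RGB = (%.2f, %.2f, %.2f)%n", r, g, b);
        // Prints roughly (1.20, 0.95, 0.91): red > green > blue, hence the tint.
    }
}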
Don't know why Mitsuba would have such a 'feature'.
|
## Precalculus (6th Edition) Blitzer
The first term of $\sum\limits_{i=1}^{20}{\left( 6i-4 \right)}$ is 2 and the last term is 116.
To find the first term, we will substitute $i=1$ in the given equation \begin{align} & 6i-4=6\left( 1 \right)-4 \\ & =6-4 \\ & =2 \end{align} To find the last term, we will substitute $i=20$ in the given equation \begin{align} & 6i-4=6\left( 20 \right)-4 \\ & =120-4 \\ & =116 \end{align}
|
1. Jan 7, 2008
mathboy
Let T:V -> V be a linear operator on a finite-dimensional inner product space V.
Prove that rank(T) = rank(T*).
So far I've proven that rank (T*T) = rank(T) by showing that ker(T*T) = ker(T). But I can't think of how to go from there.
2. Jan 8, 2008
mathboy
Oh, should I use the fact that the number of linearly independent rows of a matrix A is equal to the number of linearly independent columns of A*, and that rank(T) = rank of the matrix representation of T with respect to a basis? Or is there a better way?
rank(T)=rank([T])=rank([T]*)=rank([T*])=rank(T*)
Last edited: Jan 8, 2008
3. Jan 8, 2008
MathematicalPhysicist
well, dimV=dimT+dimKerT=dimT*+dimkerT*
but kerT=kerT*, one way to prove it is if
v not zero in ker T, then T(v)=0, so 0=<Tv,v>=<v,T*v>
but v isn't zero then T*(v)=0, you can do it also vice versa.
4. Jan 8, 2008
mathboy
But <v,T*v>=0 implies T*v=0 only if <v,T*v>=0 for ALL v in V, not just for v in kerT.
5. Jan 8, 2008
morphism
imT* is the orthogonal complement of kerT. So dimV=rankT*+dimkerT=rankT+dimkerT.
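In more detail, one way to see this: for all $v, w \in V$ we have $\langle Tv, w \rangle = \langle v, T^*w \rangle$. Hence $v \in \ker T \iff \langle Tv, w \rangle = 0$ for all $w \iff \langle v, T^*w \rangle = 0$ for all $w \iff v \perp \operatorname{im} T^*$, i.e. $\ker T = (\operatorname{im} T^*)^{\perp}$. Taking orthogonal complements in the finite-dimensional space $V$ gives $\operatorname{im} T^* = (\ker T)^{\perp}$, so $\operatorname{rank}(T^*) = \dim V - \dim \ker T = \operatorname{rank}(T)$.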
6. Jan 8, 2008
mathboy
Wow! Morphism, do you look these proofs up somewhere, or do you figure it out completely from scratch? If the latter, then you must be a genius!
7. Jan 8, 2008
morphism
It's just experience and not any form of genius - I already knew that fact, and it turned out to be helpful here.
8. Oct 4, 2010
Cassandra2182
How is the adjoint of the adjoint of a linear operator, the linear operator itself? i.e. A**=A?
|
## Sunday, January 29, 2006
### Cellphones induce greater market efficiency on the fish market
The best Indian economics conference that I know is the NCAER-NBER Neemrana conference, which takes place in Dec/Jan every year. This year, one fascinating paper was by Robert Jensen of Harvard, who has worked on the impact of cellular telephony on pricing efficiency on the market for fish in Kerala. It's a great story, and Swaminathan Aiyar wrote his column in Times of India this morning using it. I haven't been able to locate a PDF of the full paper by Prof. Jensen. It's well worth reading - please tell me if you find the URL.
As Jensen emphasises, what is going on is the power of the 'I' in IT. Putting better information in the hands of economic agents transforms their thought process and maximisation. I feel similarly about the work that's been done in India on bringing transparency to previously opaque markets by polling dealer markets. This includes MIBOR (done by NSE) and the NCDEX polled prices for the commodity spot markets (done by CMIE and CRISIL).
(Times of India has a really terrible website, so while I normally like to link to original sources, I have instead linked up to Swami's website. If someone from TOI reads this, please see these guidelines, for you are violating most of them.)
1. hi Sir,
It is really fascinating how competition and technology together can affect such positive changes across villages and societies.
Found these two pdfs from the professor you mentioned -
web.mit.edu/sinana/www/sari.pdf
www.cid.harvard.edu/cr/pdf/gitrr2002_ch07.pdf
(This is a similar study done in the Chinese market by the same gentleman)
2. Ravi,
No, these two URLs are both related (insofar as Robert Jensen does lots of work on related themes) but they are not the paper on Kerala, mobile phones, and fish markets. I hope that papers turns up soon on the web.
3. Hi
I had been hearing about the Kerala fishermen from BPL executives for some time - BPL was one of the two circle operators in Kerala.
But didn't realise how efficiently the telecom network served the waterways (and thus the river/sea economy) until I travelled, two or three years ago, by speed boat from Cochin to Alleppey and Kumarakom on the backwaters.
Amazingly, I had a signal on my mobile phone right through despite it getting pretty isolated in some parts. The guy who was driving the boat would keep getting phone calls all the time. So, he had one hand on the phone and one the wheel, the boat's wheel !
And guess what ? Driving back from Kumarakom to Cochin by road, there were long stretches when there would be no phone signal !! Don't know if its changed now.
4. Dear Ajay,
The gains in efficiency cited need to be 'deconstructed' into:
(a) the usage of technology to lock-in prices - thus in essence cellphones enable a standard futures contract markets. an empirical test should be that the ex-post cell-phones the lagged prices of marine products are significant predictors of spot prices. ex-post the lagged effect of the prices of the marine products should be less stochastic, and thus the trend is significant but lagged deviations from the mean are insignificant.
(b) on a more macro-level, the risk-minimization due to technology leads to improved consumption smoothing in this micromarket - which in turn leaves other related-micromarkets (like the small restaurants, food hawkers) to either forgo, or exploit, the gains from informational arbitrage - thus causing an "informational cascade" across sectors.
don't know where to find data for (a) to write a paper. what do you think?
cheers. k.,
5. Why do you expect that in an efficient market, there should be bigger AR terms? Jensen does look at simple volatility - both across space and time - and he finds that post cellphones, the vol goes down. In the pre-cellphone period, there were episodes where the price of fish (a perishable) used to go to 0! Now those episodes don't happen.
He also tries to think about who captured what part of the welfare gains. Some of it went to the producer, some of it went to consumers. I wish we got a URL for his paper or slideshow. Thus far, it doesn't seem to be up on the net.
6. CORRECTION TO POST # 4. NOTE THE CAPITALIZED CHANGES.
(MISTAKE DUE TO LATE NIGHT FORAYS ON CAFFEINE!)
a) the usage of technology to lock-in prices - thus in essence cellphones enable a standard futures contract markets. an empirical test should be that the ex-ANTE cell-phones the lagged prices of marine products are significant predictors of spot prices. ex-post the lagged effect of the prices of the marine products should be less stochastic, and thus the trend is significant but lagged deviations from the mean are insignificant.
7. Hi Ajay,
The article is now posted at the following URL:
http://www.swaminomics.org/articles/20060129_Cell_ridge_the_dig.htm
Thanks,
Mukul.
http://mukulblog.blogspot.com/
8. Thanks, Mukul, I used the new URL!
9. Ajay,
Did you finally manage to locate the full text of Jensen's study on the Kerala Fishermen? If so, please advise on the same.
10. The Economist has an article on Jensen's study (May 10th 2007). it cites the following reference:
*“The Digital Provide: Information (technology), market performance and welfare in the South Indian fisheries sector”, by Robert Jensen. To be published in the Quarterly Journal of Economics, August 2007.
11. Here are two studies, I found in this blog
http://bayesianheresy.blogspot.com/2007/05/how-mobile-phone-brought-down-price-of.html
|
# Putting Several Event Types in the Same Topic – Revisited
The following post originally appeared in the Confluent blog on July 8, 2020.
In the article Should You Put Several Event Types in the Same Kafka Topic?, Martin Kleppmann discusses when to combine several event types in the same topic and introduces new subject name strategies for determining how Confluent Schema Registry should be used when producing events to an Apache Kafka® topic.
Schema Registry now supports schema references in Confluent Platform 5.5, and this blog post presents an alternative means of putting several event types in the same topic using schema references, discussing the advantages and disadvantages of this approach.
### Constructs and constraints
Apache Kafka, which is an event streaming platform, can also act as a system of record or a datastore, as seen with ksqlDB. Datastores are composed of constructs and constraints. For example, in a relational database, the constructs are tables and rows, while the constraints include primary key constraints and referential integrity constraints. Kafka does not impose constraints on the structure of data, leaving that role to Confluent Schema Registry. Below are some constructs when using both Kafka and Schema Registry:
• Message: a data item that is made up of a key (optional) and value
• Topic: a collection of messages, where ordering is maintained for those messages with the same key (via underlying partitions)
• Schema (or event type): a description of how data should be structured
• Subject: a named, ordered history of schema versions
The following are some constraints that are maintained when using both Kafka and Schema Registry:
• Schema-message constraints: A schema constrains the structure of the message. The key and value are typically associated with different schemas. The association between a schema and the key or value is embedded in the serialized form of the key or value.
• Subject-schema constraints: A subject constrains the ordered history of schema versions, also known as the evolution of the schema. This constraint is called a compatibility level. The compatibility level is stored in Schema Registry along with the history of schema versions.
• Subject-topic constraints: When using the default TopicNameStrategy, a subject can constrain the collection of messages in a topic. The association between the subject and the topic is by convention, where the subject name is {topic}-key for the message key and {topic}-value for the message value.
### Using Apache Avro™ unions before schema references
As mentioned, the default subject name strategy, TopicNameStrategy, uses the topic name to determine the subject to be used for schema lookups, which helps to enforce subject-topic constraints. The newer subject-name strategies, RecordNameStrategy and TopicRecordNameStrategy, use the record name (along with the topic name for the latter strategy) to determine the subject to be used for schema lookups. Before these newer subject-name strategies were introduced, there were two options for storing multiple event types in the same topic:
• Disable subject-schema constraints by setting the compatibility level of a subject to NONE and allowing any schema to be saved in the subject, regardless of compatibility
• Use an Avro union
The second option of using an Avro union was preferred, but still had the following issues:
• The resulting Avro union could become unwieldy
• It was difficult to independently evolve the event types contained within the Avro union
By using either RecordNameStrategy or TopicRecordNameStrategy, you retain subject-schema constraints, eliminate the need for an Avro union, and gain the ability to evolve types independently. However, you lose subject-topic constraints, as now there is no constraint on the event types that can be stored in the topic, which means the set of event types in the topic can grow unbounded.
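For reference, switching to one of these strategies is just a serializer configuration change. A minimal sketch of the relevant producer properties (the property name and strategy class are the standard Confluent serializer settings; the broker and registry addresses are placeholders):
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
props.put("schema.registry.url", "http://localhost:8081");
// Derive the subject from the record name rather than the topic name
props.put("value.subject.name.strategy", "io.confluent.kafka.serializers.subject.RecordNameStrategy");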
### Using Avro unions with schema references
Introduced in Confluent Platform 5.5, a schema reference consists of:
• A reference name: part of the schema that refers to an entirely separate schema
• A subject and version: used to identify and look up the referenced schema
When registering a schema to Schema Registry, an optional set of references can be specified, such as this Avro union containing reference names:
[
"io.confluent.examples.avro.Customer",
"io.confluent.examples.avro.Product",
"io.confluent.examples.avro.Payment"
]
When registering this schema to Schema Registry, an array of reference versions is also sent, which might look like the following:
[
{
"name": "io.confluent.examples.avro.Customer",
"subject": "customer",
"version": 1
},
{
"name": "io.confluent.examples.avro.Product",
"subject": "product",
"version": 1
},
{
"name": "io.confluent.examples.avro.Order",
"subject": "order",
"version": 1
}
]
As you can see, the Avro union is no longer unwieldy. It is just a list of event types that will be sent to a topic. The event types can evolve independently, similar to when using RecordNameStrategy and TopicRecordNameStrategy. Plus, you regain subject-topic constraints, which were missing when using the newer subject name strategies.
However, in order to take advantage of these newfound gains, you need to configure your serializers a little differently. This has to do with the fact that when an Avro object is serialized, the schema associated with the object is not the Avro union, but just the event type contained within the union. When the Avro serializer is given the Avro object, it will either try to register the event type as a newer schema version than the union (if auto.register.schemas is true), or try to find the event type in the subject (if auto.register.schemas is false), which will fail. Instead, you want the Avro serializer to use the Avro union for serialization and not the event type. In order to accomplish this, set these two configuration properties on the Avro serializer:
• auto.register.schemas=false
• use.latest.version=true
Setting auto.register.schemas to false disables automatic registration of the event type, so that it does not override the union as the latest schema in the subject. Setting use.latest.version to true causes the Avro serializer to look up the latest schema version in the subject (which will be the union) and use that for serialization; otherwise, if set to false, the serializer will look for the event type in the subject and fail to find it.
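As a concrete sketch, the producer configuration for the union approach might then look like the following; the two properties are exactly the ones just described, and the broker and registry addresses are placeholders.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
props.put("schema.registry.url", "http://localhost:8081");
// Don't register the event type as a new schema version for the subject
props.put("auto.register.schemas", "false");
// Serialize using the latest registered schema (the union) instead of the event type
props.put("use.latest.version", "true");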
### Using JSON Schema and Protobuf with schema references
Now that Confluent Platform supports both JSON Schema and Protobuf, both RecordNameStrategy and TopicRecordNameStrategy can be used with these newer schema formats as well. In the case of JSON Schema, the equivalent of the name of the Avro record is the title of the JSON object. In the case of Protobuf, the equivalent is the name of the Protobuf message.
Also like Avro, instead of using the newer subject-name strategies to combine multiple event types in the same topic, you can use unions. The Avro union from the previous section can also be modeled in JSON Schema, where it is referred to as a "oneof":
{
  "oneOf": [
    { "$ref": "Customer.schema.json" },
    { "$ref": "Product.schema.json" },
    { "$ref": "Order.schema.json" }
  ]
}
In the above schema, the array of reference versions that would be sent might look like this:
[
  {
    "name": "Customer.schema.json",
    "subject": "customer",
    "version": 1
  },
  {
    "name": "Product.schema.json",
    "subject": "product",
    "version": 1
  },
  {
    "name": "Order.schema.json",
    "subject": "order",
    "version": 1
  }
]
As with Avro, automatic registration of JSON schemas that contain a top-level oneof won’t work, so you should configure the JSON Schema serializer in the same manner as the Avro serializer, with auto.register.schemas set to false and use.latest.version set to true, as described in the previous section.
In Protobuf, top-level oneofs are not permitted, so you need to wrap the oneof in a message:
syntax = "proto3";

package io.confluent.examples.proto;

import "Customer.proto";
import "Product.proto";
import "Order.proto";

message AllTypes {
  oneof oneof_type {
    Customer customer = 1;
    Product product = 2;
    Order order = 3;
  }
}
Here are the corresponding reference versions that could be sent with the above schema:
[
  {
    "name": "Customer.proto",
    "subject": "customer",
    "version": 1
  },
  {
    "name": "Product.proto",
    "subject": "product",
    "version": 1
  },
  {
    "name": "Order.proto",
    "subject": "order",
    "version": 1
  }
]
One advantage of wrapping the oneof with a message is that automatic registration of the top-level schema will work properly. In the case of Protobuf, all referenced schemas will also be auto registered, recursively.
You can do something similar with Avro by wrapping the union with an Avro record:
{
  "type": "record",
  "namespace": "io.confluent.examples.avro",
  "name": "AllTypes",
  "fields": [
    {
      "name": "oneof_type",
      "type": [
        "io.confluent.examples.avro.Customer",
        "io.confluent.examples.avro.Product",
        "io.confluent.examples.avro.Order"
      ]
    }
  ]
}
This extra level of indirection allows automatic registration of the top-level Avro schema to work properly. However, unlike Protobuf, with Avro the referenced schemas still need to be registered manually beforehand, as the Avro object does not have the necessary information to allow referenced schemas to be automatically registered.
Wrapping a oneof with a JSON object won’t work with JSON Schema, since a POJO being serialized to JSON doesn’t have the requisite metadata. Instead, optionally annotate the POJO with a @Schema annotation to provide the complete top-level JSON Schema to be used for both automatic registration and serialization. As with Avro, and unlike Protobuf, referenced schemas need to be registered manually beforehand.
### Getting started with schema references
Schema references are a means of modularizing a schema and its dependencies. While this article shows how to use them with unions, they can be used more generally to model the following:
• Nested records in Avro
• import statements in Protobuf
• $ref statements in JSON Schema
As mentioned in the previous section, if you’re using Protobuf, the Protobuf serializer can automatically register the top-level schema and all referenced schemas, recursively, when given a Protobuf object. This is not possible with the Avro and JSON Schema serializers. With those schema formats, you must first manually register the referenced schemas and then the top-level schema. Manual registration can be accomplished with the REST APIs or with the Schema Registry Maven Plugin.
As an example of using the Schema Registry Maven Plugin, below are schemas specified for the subjects named all-types-value, customer, and product in a Maven POM.
<plugin>
<groupId>io.confluent</groupId>
<artifactId>kafka-schema-registry-maven-plugin</artifactId>
<version>${confluent.version}</version>
<configuration>
  <schemaRegistryUrls>
    <param>http://127.0.0.1:8081</param>
  </schemaRegistryUrls>
  <subjects>
    <all-types-value>src/main/avro/AllTypes.avsc</all-types-value>
    <customer>src/main/avro/Customer.avsc</customer>
    <product>src/main/avro/Product.avsc</product>
  </subjects>
  <schemaTypes>
    <all-types-value>AVRO</all-types-value>
    <customer>AVRO</customer>
    <product>AVRO</product>
  </schemaTypes>
  <references>
    <all-types-value>
      <reference>
        <name>io.confluent.examples.avro.Customer</name>
        <subject>customer</subject>
      </reference>
      <reference>
        <name>io.confluent.examples.avro.Product</name>
        <subject>product</subject>
      </reference>
    </all-types-value>
  </references>
</configuration>
<goals>
  <goal>register</goal>
</goals>
</plugin>
Each reference can specify a name, subject, and version. If the version is omitted, as with the example above, and the referenced schema is also being registered at the same time, the referenced schema’s version will be used; otherwise, the latest version of the schema in the subject will be used.
Here is the content of AllTypes.avsc, which is a simple union:
[
  "io.confluent.examples.avro.Customer",
  "io.confluent.examples.avro.Product"
]
Here is Customer.avsc, which contains a Customer record:
{
  "type": "record",
  "namespace": "io.confluent.examples.avro",
  "name": "Customer",
  "fields": [
    {"name": "customer_id", "type": "int"},
    {"name": "customer_name", "type": "string"},
    {"name": "customer_email", "type": "string"},
    {"name": "customer_address", "type": "string"}
  ]
}
And here is Product.avsc, which contains a Product record:
{
  "type": "record",
  "namespace": "io.confluent.examples.avro",
  "name": "Product",
  "fields": [
    {"name": "product_id", "type": "int"},
    {"name": "product_name", "type": "string"},
    {"name": "product_price", "type": "double"}
  ]
}
Next, register the schemas above using the following command:
mvn schema-registry:register
The above command will register referenced schemas before registering the schemas that depend on them. The output of the command will contain the ID of each schema that is registered. You can use the schema ID of the top-level schema with the console producer when producing data.
Next, use the console tools to try it out. First, start the Avro console consumer. Note that you should specify the topic name as all-types since the corresponding subject is all-types-value according to TopicNameStrategy.
./bin/kafka-avro-console-consumer --topic all-types --bootstrap-server localhost:9092
In a separate console, start the Avro console producer. Pass the ID of the top-level schema as the value of value.schema.id.
./bin/kafka-avro-console-producer --broker-list localhost:9092 --topic all-types --property value.schema.id={id} --property auto.register=false --property use.latest.version=true
At the same command line as the producer, input the data below, which represent two different event types. The data should be wrapped with a JSON object that specifies the event type. This is how the Avro console producer expects data for unions to be represented in JSON.
{ "io.confluent.examples.avro.Product": { "product_id": 1, "product_name" : "rice", "product_price" : 100.00 } }
{ "io.confluent.examples.avro.Customer": { "customer_id": 100, "customer_name": "acme", "customer_email": "[email protected]", "customer_address": "1 Main St" } }
The data will appear at the consumer. Congratulations, you’ve successfully sent two different event types to a topic!
And unlike the newer subject name strategies, the union will prevent event types other than Product and Customer from being produced to the same topic, since the producer is configured with the default TopicNameStrategy.
### Summary
Now there are two modular ways to store several event types in the same topic, both of which allow event types to evolve independently. The first, using the newer subject-name strategies, is straightforward but drops subject-topic constraints. The second, using unions (or oneofs) and schema references, maintains subject-topic constraints but adds further structure and drops automatic registration of schemas in the case of a top-level union or oneof.
If you’re interested in querying topics that combine multiple event types with ksqlDB, the second method, using a union (or oneof), is the only option. By maintaining subject-topic constraints, the method of using a union (or oneof) allows ksqlDB to deal with a bounded set of event types as defined by the union, instead of a potentially unbounded set. Modeling a union (also known as a sum type) by a relational table is a solved problem, and equivalent functionality will most likely land in ksqlDB in the future.
# Playing Chess with Confluent Schema Registry
Previously, the Confluent Schema Registry only allowed you to manage Avro schemas. With Confluent Platform 5.5, the schema management within Schema Registry has been made pluggable, so that custom schema types can be added. In addition, schema plugins have been developed for both Protobuf and JSON Schema. Now Schema Registry has two main extension points:
1. REST Extensions
2. Schema Plugins
In reality, the schema management within Schema Registry is really just a versioned history mechanism, with specific rules for how versions can evolve. To demonstrate both of the above extension points, I’ll show how Confluent Schema Registry can be turned into a full-fledged chess engine.1
### A Schema Plugin for Chess
A game of chess is also a versioned history. In this case, it is a history of chess moves. The rules of chess determine whether a move can be applied to a given version of the game. To represent a version of a game of chess, I’ll use Portable Game Notation (PGN), a format in which moves are described using algebraic notation.
However, when registering a new version, we won’t require that the client send the entire board position represented as PGN. Instead, the client will only need to send the latest move. When the schema plugin receives the latest move, it will retrieve the current version of the game, check if the move is compatible with the current board position, and only then apply the move. The new board position will be saved in PGN format as the current version.
So far, this would allow the client to switch between making moves for white and making moves for black. To turn Schema Registry into a chess engine, after the schema plugin applies a valid move from the client, it will generate a move for the opposing color and apply that move as well. In order to take back a move, the client just needs to delete the latest version of the game, and then make a new move. The new move will be applied to the latest version of the game that is not deleted. Finally, in order to allow the client to play a game of chess with the black pieces, the client will send a special move of the form {player as black}. This is a valid comment in PGN format.
When this special move is received, the schema plugin will simply generate a move for white and save that as the first version.
Let’s try it out. Assuming that the chess schema plugin has been built and placed on the CLASSPATH for the Schema Registry2, the following properties need to be added to schema-registry.properties3:
schema.providers=io.yokota.schemaregistry.chess.schema.ChessSchemaProvider
resource.static.locations=static
resource.extension.class=io.yokota.schemaregistry.chess.SchemaRegistryChessResourceExtension
The above properties not only configure the chess schema plugin, but also the chess resource extension that will be used in the next section. Once Schema Registry is up, you can verify that the chess schema plugin was registered.
$ curl http://localhost:8081/schemas/types
["CHESS","JSON","PROTOBUF","AVRO"]
Let’s make the move d4.
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "d4", "schemaType": "CHESS"}' \
  http://localhost:8081/subjects/newgame/versions
{"id":1}
Schema Registry returns the ID of the new version. Let’s examine what the version actually looks like.
$ curl http://localhost:8081/subjects/newgame/versions/latest
{
"subject": "newgame",
"version": 1,
"id": 1,
"schemaType": "CHESS",
"schema": "1.d4 d5"
}
Schema Registry replied with the move d5.
We can continue playing chess with Schema Registry in this fashion, but of course it isn’t the best user experience. Let’s see if a REST extension will help.
### A REST Extension for Chess
A single-page application (SPA) with an actual chess board as the interface would provide a much better experience. Therefore, I’ve created a REST Extension that wraps a Vue.js SPA for playing chess. When the user makes a move on the chess board, the SPA sends the move to Schema Registry, retrieves the new board position, determines the last move played by the computer opponent, and makes that move on the board as well.
With the REST extension configured as described in the previous section, you can navigate to http://localhost:8081/index.html to see the chess engine UI presented by the REST extension. When playing a chess game, the game history will appear below the chess board, showing how the board position evolves over time.
Here is an example of the REST extension in action.
As you can see, schema plugins in conjunction with REST extensions can provide a powerful combination. Hopefully, you are now inspired to customize Confluent Schema Registry in new and creative ways. Have fun!
# Building A Graph Database Using Kafka
I previously showed how to build a relational database using Kafka. This time I’ll show how to build a graph database using Kafka. Just as with KarelDB, at the heart of our graph database will be the embedded key-value store, KCache.
### Kafka as a Graph Database
The graph database that I’m most familiar with is HGraphDB, a graph database that uses HBase as its backend. More specifically, it uses the HBase client API, which allows it to integrate with not only HBase, but also any other data store that implements the HBase client API, such as Google BigTable. This leads to an idea. Rather than trying to build a new graph database around KCache entirely from scratch, we can try to wrap KCache with the HBase client API.
HBase is an example of a wide column store, also known as an extensible record store. Like its predecessor BigTable, it allows any number of column values to be associated with a key, without requiring a schema. For this reason, a wide column store can also be seen as a two-dimensional key-value store.1
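Put another way, the data model is roughly a nested sorted map. A tiny illustrative sketch in Java (the types are chosen purely for illustration and are not HBase's actual API):
// row key -> (column qualifier -> cell value), with both levels kept sorted
Map<String, Map<String, byte[]>> table = new TreeMap<>();
table.computeIfAbsent("row1", r -> new TreeMap<>()).put("cf:a", "value1".getBytes());
byte[] cell = table.get("row1").get("cf:a");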
I’ve implemented KStore as a wide column store (or extensible record store) abstraction for Kafka that relies on KCache under the covers. KStore implements the HBase client API, so it can be used wherever the HBase client API is supported.
Let’s try to use KStore with HGraphDB. After installing and starting the Gremlin console, we install KStore and HGraphDB.
$ ./bin/gremlin.sh
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
plugin activated: tinkerpop.tinkergraph
gremlin> :install org.apache.hbase hbase-client 2.2.1
gremlin> :install org.apache.hbase hbase-common 2.2.1
gremlin> :install org.apache.hadoop hadoop-common 3.1.2
gremlin> :install io.kstore kstore 0.1.0
gremlin> :install io.hgraphdb hgraphdb 3.0.0
gremlin> :plugin use io.hgraphdb
After we restart the Gremlin console, we configure HGraphDB with the KStore connection class and the Kafka bootstrap servers.2 We can then issue Gremlin commands against Kafka.
$ ./bin/gremlin.sh
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
plugin activated: io.hgraphdb
plugin activated: tinkerpop.tinkergraph
gremlin> cfg = new HBaseGraphConfiguration()\
......1> .set("hbase.client.connection.impl", "io.kstore.KafkaStoreConnection")\
......2> .set("kafkacache.bootstrap.servers", "localhost:9092")
==>io.hgraphdb.HBaseGraphConfiguration@41b0ae4c
gremlin> graph = new HBaseGraph(cfg)
==>hbasegraph[hbasegraph]
gremlin> g = graph.traversal()
==>graphtraversalsource[hbasegraph[hbasegraph], standard]
==>v[0371a1db-8768-4910-94e3-7516fc65dab3]
==>v[3bbc9ce3-24d3-41cf-bc4b-3d95dbac6589]
It works! HBaseGraph is now using Kafka as its storage backend.
### Kafka as a Document Database
Now that we have a wide column store abstraction for Kafka in the form of KStore, let’s see what else we can do with it. Another database that uses the HBase client API is HDocDB, a document database for HBase. To use KStore with HDocDB, first we need to set hbase.client.connection.impl in our hbase-site.xml as follows.
<configuration>
<property>
<name>hbase.client.connection.impl</name>
<value>io.kstore.KafkaStoreConnection</value>
</property>
<property>
<name>kafkacache.bootstrap.servers</name>
<value>localhost:9092</value>
</property>
</configuration>
Now we can issue MongoDB-like commands against Kafka, using HDocDB.3
$ jrunscript -cp <hbase-conf-dir>:target/hdocdb-1.0.1.jar:../kstore/target/kstore-0.1.0.jar -f target/classes/shell/hdocdb.js -f -
nashorn> db.mycoll.insert( { _id: "jdoe", first_name: "John", last_name: "Doe" } )
nashorn> var doc = db.mycoll.find( { last_name: "Doe" } )[0]
nashorn> print(doc)
{"_id":"jdoe","first_name":"John","last_name":"Doe"}
nashorn> db.mycoll.update( { last_name: "Doe" }, { $set: { first_name: "Jim" } } )
nashorn> var doc = db.mycoll.find( { last_name: "Doe" } )[0]
nashorn> print(doc)
{"_id":"jdoe","first_name":"Jim","last_name":"Doe"}
Pretty cool, right?
### Kafka as a Wide Column Store
Of course, there is no requirement to wrap KStore with another layer in order to use it. KStore can be used directly as a wide column store abstraction on top of Kafka. I’ve integrated KStore with the HBase Shell so that one can work directly with KStore from the command line.
$ ./kstore-shell.sh localhost:9092
hbase(main):001:0> create 'test', 'cf'
Created table test
Took 0.2328 seconds
=> Hbase::Table - test
hbase(main):003:0* list
TABLE
test
1 row(s)
Took 0.0192 seconds
=> ["test"]
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
Took 0.1284 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
Took 0.0113 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
Took 0.0096 seconds
hbase(main):007:0> scan 'test'
ROW     COLUMN+CELL
 row1   column=cf:a, timestamp=1578763986780, value=value1
 row2   column=cf:b, timestamp=1578763992567, value=value2
 row3   column=cf:c, timestamp=1578763996677, value=value3
3 row(s)
Took 0.0233 seconds
hbase(main):008:0> get 'test', 'row1'
COLUMN  CELL
 cf:a   timestamp=1578763986780, value=value1
1 row(s)
Took 0.0106 seconds
hbase(main):009:0>
There’s no limit to the type of fun one can have with KStore. 🙂
### Back to Graphs
Getting back to graphs, another popular graph database is JanusGraph, which is interesting because it has a pluggable storage layer. Some of the storage backends that it supports through this layer are HBase, Cassandra, and BerkeleyDB. Of course, KStore can be used in place of HBase when configuring JanusGraph. Again, it’s simply a matter of configuring the KStore connection class in the JanusGraph configuration.
storage.hbase.ext.hbase.client.connection.impl: io.kstore.KafkaStoreConnection
storage.hbase.ext.kafkacache.bootstrap.servers: localhost:9092
However, we can do better when integrating JanusGraph with Kafka. JanusGraph can be integrated with any storage backend that supports a wide column store abstraction. When integrating with key-value stores such as BerkeleyDB, JanusGraph provides its own adapter for mapping a key-value store to a wide column store. Thus we can simply provide KCache to JanusGraph as a key-value store, and it will perform the mapping to a wide column store abstraction for us automatically. I’ve implemented a new storage plugin for JanusGraph called janusgraph-kafka that does exactly this.
Let’s try it out. After following the instructions here, we can start the Gremlin console.
$ ./bin/gremlin.sh
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.server
plugin activated: tinkerpop.tinkergraph
plugin activated: tinkerpop.spark
plugin activated: tinkerpop.utilities
plugin activated: janusgraph.imports
gremlin> graph = JanusGraphFactory.open('conf/janusgraph-kafka.properties')
==>standardjanusgraph[io.kcache.janusgraph.diskstorage.kafka.KafkaStoreManager:[127.0.0.1]]
gremlin> g = graph.traversal()
==>graphtraversalsource[standardjanusgraph[io.kcache.janusgraph.diskstorage.kafka.KafkaStoreManager:[127.0.0.1]], standard]
==>v[4320]
==>v[4104]
Works like a charm.
### Summary
In this and the previous post, I’ve shown how Kafka can be used as the backing store for a relational database, a graph database, a document database, and a wide column store.
I guess I could have titled this post “Building a Graph Database, Document Database, and Wide Column Store Using Kafka”, although that’s a bit long. In any case, hopefully I’ve shown that Kafka is a lot more versatile than most people realize.
# Building A Relational Database Using Kafka
In a previous post, I showed how Kafka can be used as the persistent storage for an embedded key-value store, called KCache. Once you have a key-value store, it can be used as the basis for other models such as documents, graphs, and even SQL. For example, CockroachDB is a SQL layer built on top of the RocksDB key-value store and YugaByteDB is both a document and SQL layer built on top of RocksDB. Other databases such as FoundationDB claim to be multi-model, because they support several types of models at once, using the key-value store as a foundation.
In this post I will show how KCache can be extended to implement a fully-functional relational database, called KarelDB1. In addition, I will show how today a database architecture can be assembled from existing open-source components, much like how web frameworks like Dropwizard came into being by assembling components such as a web server (Jetty), RESTful API framework (Jersey), JSON serialization framework (Jackson), and an object-relational mapping layer (JDBI or Hibernate).
### Hello, KarelDB
Before I drill into the components that comprise KarelDB, first let me show you how to quickly get it up and running. To get started, download a release, unpack it, and then modify config/kareldb.properties to point to an existing Kafka broker. Then run the following:
$ bin/kareldb-start config/kareldb.properties
While KarelDB is still running, at a separate terminal, enter the following command to start up sqlline, a command-line utility for accessing JDBC databases.
$ bin/sqlline
sqlline version 1.8.0
sqlline> create table books (id int, name varchar, author varchar);
No rows affected (0.114 seconds)
sqlline> insert into books values (1, 'The Trial', 'Franz Kafka');
1 row affected (0.576 seconds)
sqlline> select * from books;
+----+-----------+-------------+
| ID | NAME | AUTHOR |
+----+-----------+-------------+
| 1 | The Trial | Franz Kafka |
+----+-----------+-------------+
1 row selected (0.133 seconds)
### Kafka for Persistence
At the heart of KarelDB is KCache, an embedded key-value store that is backed by Kafka. Many components use Kafka as a simple key-value store, including Kafka Connect and Confluent Schema Registry. KCache not only generalizes this functionality, but provides a simple Map based API for ease of use. In addition, KCache can use different implementations for the embedded key-value store that is backed by Kafka.
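As a rough sketch of that Map-style usage, mirroring the KafkaCache snippets that appear later in this document (the bootstrap address is a placeholder):
Properties props = new Properties();
props.put("kafkacache.bootstrap.servers", "localhost:9092");
Cache<String, String> cache =
    new KafkaCache<>(new KafkaCacheConfig(props), Serdes.String(), Serdes.String());
cache.init();                     // load the backing Kafka topic into the local cache
cache.put("greeting", "hello");   // write-through to Kafka
String value = cache.get("greeting");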
In the case of KarelDB, by default KCache is configured as a RocksDB cache that is backed by Kafka. This allows KarelDB to support larger datasets and faster startup times. KCache can also be configured to use an in-memory cache instead of RocksDB if desired.
### Avro for Serialization and Schema Evolution
Kafka has pretty much adopted Apache Avro as its de facto data format, and for good reason. Not only does Avro provide a compact binary format, but it has excellent support for schema evolution. Such support is why the Confluent Schema Registry has chosen Avro as the first format for which it provides schema management.
KarelDB uses Avro to both define relations (tables), and serialize the data for those relations. By using Avro, KarelDB gets schema evolution for free when executing an ALTER TABLE command.
sqlline> !connect jdbc:avatica:remote:url=http://localhost:8765 admin admin
sqlline> create table customers (id int, name varchar);
No rows affected (1.311 seconds)
sqlline> alter table customers add address varchar not null;
Error: Error -1 (00000) :
Error while executing SQL "alter table customers add address varchar not null":
{
"type" : "record",
"name" : "CUSTOMERS",
"fields" : [ {
"name" : "ID",
"type" : "int",
"sql.key.index" : 0
}, {
"name" : "NAME",
"type" : [ "null", "string" ],
"default" : null
} ]
}
using schema:
{
"type" : "record",
"name" : "CUSTOMERS",
"fields" : [ {
"name" : "ID",
"type" : "int",
"sql.key.index" : 0
}, {
"name" : "NAME",
"type" : [ "null", "string" ],
"default" : null
}, {
"name" : "ADDRESS",
"type" : "string"
} ]
}
sqlline> alter table customers add address varchar;
No rows affected (0.024 seconds)
As you can see above, when we first try to add a column with a NOT NULL constraint, Avro rejects the schema change, because adding a new field with a NOT NULL constraint would cause deserialization to fail for older records that don’t have that field. When we instead add the same column with a NULL constraint, the ALTER TABLE command succeeds.
By using Avro for deserialization, a field (without a NOT NULL constraint) that is added to a schema will be appropriately populated with a default, or null if the field is optional. This is all automatically handled by the underlying Avro framework.
Another important aspect of Avro is that it defines a standard sort order for data, as well as a comparison function that operates directly on the binary-encoded data, without first deserializing it. This allows KarelDB to efficiently handle key range queries, for example.
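For instance, Avro's BinaryData.compare can order two binary-encoded records without decoding them. A small sketch in Java (the Key schema here is hypothetical, purely to illustrate the call):
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryData;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class AvroBinaryCompare {
    static byte[] encode(GenericRecord rec, Schema schema) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(rec, enc);
        enc.flush();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical key schema with a single int field
        Schema schema = SchemaBuilder.record("Key").fields().requiredInt("id").endRecord();
        GenericRecord r1 = new GenericData.Record(schema);
        r1.put("id", 1);
        GenericRecord r2 = new GenericData.Record(schema);
        r2.put("id", 2);
        // Compare the binary encodings directly, without deserializing them
        int cmp = BinaryData.compare(encode(r1, schema), 0, encode(r2, schema), 0, schema);
        System.out.println(cmp < 0);  // true: r1 sorts before r2 in Avro's standard order
    }
}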
### Calcite for SQL
Apache Calcite is a SQL framework that handles query parsing, optimization, and execution, but leaves out the data store. Calcite allows for relational expressions to be pushed down to the data store for more efficient processing. Otherwise, Calcite can process the query using a built-in enumerable calling convention, that allows the data store to be represented as a set of tuples that can be accessed through an iterator interface. An embedded key-value store is a perfect representation for such a set of tuples, so KarelDB will handle key lookups and key range filtering (using Avro’s sort order support) but otherwise defer query processing to Calcite’s enumerable convention. One nice aspect of the Calcite project is that it continues to develop optimizations for the enumerable convention, which will automatically benefit KarelDB moving forward.
Calcite supports ANSI-compliant SQL, including some newer functions such as JSON_VALUE and JSON_QUERY.
sqlline> create table authors (id int, json varchar);
No rows affected (0.132 seconds)
sqlline> insert into authors
> values (1, '{"name":"Franz Kafka", "book":"The Trial"}');
1 row affected (0.086 seconds)
sqlline> insert into authors
> values (2, '{"name":"Karel Capek", "book":"R.U.R."}');
1 row affected (0.036 seconds)
sqlline> select json_value(json, 'lax $.name') as author from authors;
+-------------+
|   AUTHOR    |
+-------------+
| Franz Kafka |
| Karel Capek |
+-------------+
2 rows selected (0.027 seconds)
### Omid for Transactions and MVCC
Although Apache Omid was originally designed to work with HBase, it is a general framework for supporting transactions on a key-value store. In addition, Omid uses the underlying key-value store to persist metadata concerning transactions. This makes it especially easy to integrate Omid with an existing key-value store such as KCache.
Omid actually requires a few features from the key-value store, namely multi-versioned data and atomic compare-and-set capability. KarelDB layers these features atop KCache so that it can take advantage of Omid’s support for transaction management. Omid utilizes these features of the key-value store in order to provide snapshot isolation using multi-version concurrency control (MVCC). MVCC is a common technique used to implement snapshot isolation in other relational databases, such as Oracle and PostgreSQL.
Below we can see an example of how rolling back a transaction will restore the state of the database before the transaction began.
sqlline> !autocommit off
sqlline> select * from books;
+----+-----------+-------------+
| ID |   NAME    |   AUTHOR    |
+----+-----------+-------------+
| 1  | The Trial | Franz Kafka |
+----+-----------+-------------+
1 row selected (0.045 seconds)
sqlline> update books set name ='The Castle' where id = 1;
1 row affected (0.346 seconds)
sqlline> select * from books;
+----+------------+-------------+
| ID |    NAME    |   AUTHOR    |
+----+------------+-------------+
| 1  | The Castle | Franz Kafka |
+----+------------+-------------+
1 row selected (0.038 seconds)
sqlline> !rollback
Rollback complete (0.059 seconds)
sqlline> select * from books;
+----+-----------+-------------+
| ID |   NAME    |   AUTHOR    |
+----+-----------+-------------+
| 1  | The Trial | Franz Kafka |
+----+-----------+-------------+
1 row selected (0.032 seconds)
Transactions can of course span multiple rows and multiple tables.
### Avatica for JDBC
KarelDB can actually be run in two modes, as an embedded database or as a server. In the case of a server, KarelDB uses Apache Avatica to provide RPC protocol support. Avatica provides both a server framework that wraps KarelDB, as well as a JDBC driver that can communicate with the server using Avatica RPC.
One advantage of using Kafka is that multiple servers can all “tail” the same set of topics. This allows multiple KarelDB servers to run as a cluster, with no single point of failure. In this case, one of the servers will be elected as the leader while the others will be followers (or replicas). When a follower receives a JDBC request, it will use the Avatica JDBC driver to forward the JDBC request to the leader. If the leader fails, one of the followers will be elected as a new leader.
### Database by Components
Today, open-source libraries have achieved what component-based software development was hoping to do many years ago. With open-source libraries, complex systems such as relational databases can be assembled by integrating a few well-designed components, each of which specializes in one thing that it does particularly well. Above I’ve shown how KarelDB is an assemblage of several existing open-source components:
• Kafka (with KCache) for persistence
• Avro for serialization and schema evolution
• Calcite for SQL
• Omid for transactions and MVCC
• Avatica for JDBC
Currently, KarelDB is designed as a single-node database, which can be replicated, but it is not a distributed database.
Also, KarelDB is a plain-old relational database, and does not handle stream processing. For a distributed, stream-relational database, please consider using KSQL instead, which is production-proven. KarelDB is still in its early stages, but give it a try if you’re interested in using Kafka to back your plain-old relational data.
# Machine Learning with Kafka Graphs
As data has become more prevalent from the rise of cloud computing, mobile devices, and big data systems, methods to analyze that data have become more and more advanced, with machine learning and artificial intelligence algorithms epitomizing the state of the art. There are many ways to use machine learning to analyze data in Kafka. Below I will show how machine learning can be performed with Kafka Graphs, a graph analytics library that is layered on top of Kafka Streams.
### Graph Modeling
As described in “Using Apache Kafka to Drive Cutting-Edge Machine Learning”, a machine learning lifecycle is comprised of modeling and prediction. That article goes on to describe how to integrate models from libraries like TensorFlow and H2O, whether through RPC or by embedding the model in a Kafka application. With Kafka Graphs, the graph is the model. Therefore, when using Kafka Graphs for machine learning, there is no need to integrate with an external machine learning library for modeling.
### Recommender Systems
As an example of machine learning with Kafka Graphs, I will show how Kafka Graphs can be used as a recommender system1. Recommender systems are commonly used by companies such as Amazon, Netflix, and Spotify to predict the rating or preference a user would give to an item. In fact, the Netflix Prize was a competition that Netflix started to determine if an external party could devise an algorithm that could provide a 10% improvement over Netflix’s own algorithm. The competition resulted in a wave of innovation in algorithms that use collaborative filtering2, which is a method of prediction based on the ratings or behavior of other users in the system.
### Singular Value Decomposition
Singular value decomposition (SVD) is a type of matrix factorization popularized by Simon Funk for use in a recommender system during the Netflix competition. When using Funk SVD3, also called regularized SVD, the user-item rating matrix is viewed as the product of two lower-dimensional matrices, one with a row for each user, and another with a column for each item. For example, a 5×5 ratings matrix might be factored into a 5×2 user-feature matrix and a 2×5 item-feature matrix.
$\begin{bmatrix} r_{11} & r_{12} & r_{13} & r_{14} & r_{15} \\ r_{21} & r_{22} & r_{23} & r_{24} & r_{25} \\ r_{31} & r_{32} & r_{33} & r_{34} & r_{35} \\ r_{41} & r_{42} & r_{43} & r_{44} & r_{45} \\ r_{51} & r_{52} & r_{53} & r_{54} & r_{55} \end{bmatrix} = \begin{bmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \\ u_{31} & u_{32} \\ u_{41} & u_{42} \\ u_{51} & u_{52} \end{bmatrix} \begin{bmatrix} v_{11} & v_{12} & v_{13} & v_{14} & v_{15} \\ v_{21} & v_{22} & v_{23} & v_{24} & v_{25} \end{bmatrix}$
Matrix factorization with SVD actually takes the form of $R = U \Sigma V^T$, where $\Sigma$ is a diagonal matrix of weights. The values in the row or column for the user-feature matrix or item-feature matrix are referred to as latent factors. The exact meanings of the latent factors are usually not discernible.
For a movie, one latent factor might represent a specific genre, such as comedy or science-fiction; while for a user, one latent factor might represent gender while another might represent age group. The goal of Funk SVD is to extract these latent factors in order to predict the values of the user-item rating matrix.
While Funk SVD can only accommodate explicit interactions, in the form of numerical ratings, a team of researchers from AT&T enhanced Funk SVD to additionally account for implicit interactions, such as likes, purchases, and bookmarks. This enhanced algorithm is referred to as SVD++.4 During the Netflix competition, SVD++ was shown to generate more accurate predictions than Funk SVD.
### Machine Learning on Pregel
Kafka Graphs provides an implementation of the Pregel programming model, so any algorithm written for the Pregel programming model can easily be supported by Kafka Graphs. For example, there are many machine learning algorithms written for Apache Giraph, an implementation of Pregel that runs on Apache Hadoop, so such algorithms are eligible to be run on Kafka Graphs as well, with only minor modifications. For Kafka Graphs, I’ve ported the SVD++ algorithm from Okapi, a library of machine learning and graph mining algorithms for Apache Giraph.
### Running SVD++ on Kafka Graphs
In the rest of this post, I’ll show how you can run SVD++ using Kafka Graphs on a dataset of movie ratings. To set up your environment, install git, Maven, and Docker Compose. Then run the following steps:
git clone https://github.com/rayokota/kafka-graphs.git
cd kafka-graphs
mvn clean package -DskipTests
cd kafka-graphs-rest-app
docker-compose up
The last step above will launch Docker containers for a ZooKeeper instance, a Kafka instance, and two Kafka Graphs REST application instances. The application instances will each be assigned a subset of the graph vertices during the Pregel computation.
For our data, we will use the librec FilmTrust dataset, which is a relatively small set of 35497 movie ratings from users of the FilmTrust platform. The following command will import the movie ratings data into Kafka Graphs:
java \
  -cp target/kafka-graphs-rest-app-1.2.2-SNAPSHOT.jar \
  -Dloader.main=io.kgraph.tools.importer.GraphImporter \
  org.springframework.boot.loader.PropertiesLauncher 127.0.0.1:9092 \
  --edgesTopic initial-edges \
  --edgesFile ../kafka-graphs-core/src/test/resources/ratings.txt \
  --edgeParser io.kgraph.library.cf.EdgeCfLongIdFloatValueParser \
  --edgeValueSerializer org.apache.kafka.common.serialization.FloatSerializer
The remaining commands will all use the Kafka Graphs REST API. First we prepare the graph data for use by Pregel. The following command will group edges by the source vertex ID, and also ensure that topics for the vertices and edges have the same number of partitions.
curl -H "Content-type: application/json" -d '{
  "algorithm":"svdpp",
  "initialEdgesTopic":"initial-edges",
  "verticesTopic":"vertices",
  "edgesGroupedBySourceTopic":"edges",
  "async":"false"
}' \
localhost:8888/prepare
Now we can configure the Pregel algorithm:
curl -H "Content-type: application/json" -d '{
  "algorithm":"svdpp",
  "verticesTopic":"vertices",
  "edgesGroupedBySourceTopic":"edges",
  "configs": { "random.seed": "0" }
}' \
localhost:8888/pregel
The above command will return a hexadecimal ID to represent the Pregel computation, such as a8d72fc8. This ID is used in the next command to start the Pregel computation.
curl -H "Content-type: application/json" -d '{ "numIterations": 6 }' \ localhost:8888/pregel/{id} You can now examine the state of the Pregel computation: curl -H "Content-type: application/json" localhost:8888/pregel/{id} Once the above command shows that the computation is no longer running, you can use the final state of the graph for predicting user ratings. For example, to predict the rating that user 2 would give to item 14, run the following command, using the same Pregel ID from previous steps: java \ -cp target/kafka-graphs-rest-app-1.2.2-SNAPSHOT.jar \ -Dloader.main=io.kgraph.tools.library.SvdppPredictor \ org.springframework.boot.loader.PropertiesLauncher localhost:8888 \ {id} --user 2 --item 14 The above command will return the predicted rating, such as 2.3385806. You can predict other ratings by using the same command with different user and item IDs. ### Summary The Kafka ecosystem provides several ways to build a machine learning system. Besides the various machine learning libraries that can be directly integrated with a Kafka application, Kafka Graphs can be used to run any machine learning algorithm that has been adapted for the Pregel programming model. Since Kafka Graphs is a library built on top of Kafka Streams, we’ve essentially turned Kafka Streams into a distributed machine learning platform! # Fun with Confluent Schema Registry Extensions The Confluent Schema Registry often serves as the heart of a streaming platform, as it provides centralized management and storage of the schemas for an organization. One feature of the Schema Registry that deserves more attention is its ability to incorporate pluggable resource extensions. In this post I will show how resource extensions can be used to implement the following: 1. Subject modes. For example, one might want to “freeze” a subject so that no further changes can be made. 2. A Schema Registry browser. This is a complete single-page application for managing and visualizing schemas in a web browser. Along the way I will show how to use the KCache library that I introduced in my last post. ### Subject Modes The first resource extension that I will demonstrate is one that provides support for subject modes. With this extension, a subject can be placed in “read-only” mode so that no further changes can be made to the subject. Also, an entire Schema Registry cluster can be placed in “read-only” mode. This may be useful, for example, when using Confluent Replicator to replicate Schema Registry from one Kafka cluster to another. If one wants to keep the two registries in sync, one could mark the Schema Registry that is the target of replication as “read-only”. When implementing the extension, we want to associate a mode, either “read-only” or “read-write”, to a given subject (or * to indicate all subjects). The association needs to be persistent, so that it can survive a restart of the Schema Registry. We could use a database, but the Schema Registry already has a dependency on Kafka, so perhaps we can store the association in Kafka. This is a perfect use case for KCache, which is an in-memory cache backed by Kafka. 
Using an instance of KafkaCache, saving and retrieving the mode for a given subject is straightforward:
public class ModeRepository implements Closeable {
  // Used to represent all subjects
  public static final String SUBJECT_WILDCARD = "*";

  private final Cache<String, String> cache;

  public ModeRepository(SchemaRegistryConfig schemaRegistryConfig) {
    KafkaCacheConfig config =
        new KafkaCacheConfig(schemaRegistryConfig.originalProperties());
    cache = new KafkaCache<>(config, Serdes.String(), Serdes.String());
    cache.init();
  }

  public Mode getMode(String subject) {
    if (subject == null) subject = SUBJECT_WILDCARD;
    String mode = cache.get(subject);
    if (mode == null && subject.equals(SUBJECT_WILDCARD)) {
      // Default mode for top level
      return Mode.READWRITE;
    }
    return mode != null ? Enum.valueOf(Mode.class, mode) : null;
  }

  public void setMode(String subject, Mode mode) {
    if (subject == null) subject = SUBJECT_WILDCARD;
    cache.put(subject, mode.name());
  }

  @Override
  public void close() throws IOException {
    cache.close();
  }
}
Using the ModeRepository, we can provide a ModeResource class that provides REST APIs for saving and retrieving modes:
public class ModeResource {
  private static final int INVALID_MODE_ERROR_CODE = 42299;

  private final ModeRepository repository;
  private final KafkaSchemaRegistry schemaRegistry;

  public ModeResource(
      ModeRepository repository,
      SchemaRegistry schemaRegistry
  ) {
    this.repository = repository;
    this.schemaRegistry = (KafkaSchemaRegistry) schemaRegistry;
  }

  @Path("/{subject}")
  @PUT
  public ModeUpdateRequest updateMode(
      @PathParam("subject") String subject,
      @Context HttpHeaders headers,
      @NotNull ModeUpdateRequest request
  ) {
    Mode mode;
    try {
      mode = Enum.valueOf(
          Mode.class, request.getMode().toUpperCase(Locale.ROOT));
    } catch (IllegalArgumentException e) {
      throw new RestConstraintViolationException(
          "Invalid mode. Valid values are READWRITE and READONLY.",
          INVALID_MODE_ERROR_CODE);
    }
    try {
      if (schemaRegistry.isMaster()) {
        repository.setMode(subject, mode);
      } else {
        throw new RestSchemaRegistryException(
            "Failed to update mode, not the master");
      }
    } catch (CacheException e) {
      throw Errors.storeException("Failed to update mode", e);
    }
    return request;
  }

  @Path("/{subject}")
  @GET
  public ModeGetResponse getMode(@PathParam("subject") String subject) {
    try {
      Mode mode = repository.getMode(subject);
      if (mode == null) {
        throw Errors.subjectNotFoundException();
      }
      return new ModeGetResponse(mode.name());
    } catch (CacheException e) {
      throw Errors.storeException("Failed to get mode", e);
    }
  }

  @PUT
  public ModeUpdateRequest updateTopLevelMode(
      @Context HttpHeaders headers,
      @NotNull ModeUpdateRequest request
  ) {
    return updateMode(ModeRepository.SUBJECT_WILDCARD, headers, request);
  }

  @GET
  public ModeGetResponse getTopLevelMode() {
    return getMode(ModeRepository.SUBJECT_WILDCARD);
  }
}
Now we need a filter to reject requests that attempt to modify a subject when it is in read-only mode:
@Priority(Priorities.AUTHORIZATION)
public class ModeFilter implements ContainerRequestFilter {
  private static final Set<ResourceActionKey> subjectWriteActions =
      new HashSet<>();

  private ModeRepository repository;

  @Context ResourceInfo resourceInfo;
  @Context UriInfo uriInfo;
  @Context HttpServletRequest httpServletRequest;

  static {
    initializeSchemaRegistrySubjectWriteActions();
  }

  private static void initializeSchemaRegistrySubjectWriteActions() {
    subjectWriteActions.add(
        new ResourceActionKey(SubjectVersionsResource.class, "POST"));
    subjectWriteActions.add(
        new ResourceActionKey(SubjectVersionsResource.class, "DELETE"));
    subjectWriteActions.add(
        new ResourceActionKey(SubjectsResource.class, "DELETE"));
    subjectWriteActions.add(
        new ResourceActionKey(ConfigResource.class, "PUT"));
  }

  public ModeFilter(ModeRepository repository) {
    this.repository = repository;
  }

  @Override
  public void filter(ContainerRequestContext requestContext) {
    Class resource = resourceInfo.getResourceClass();
    String restMethod = requestContext.getMethod();
    String subject = uriInfo.getPathParameters().getFirst("subject");

    Mode mode = repository.getMode(subject);
    if (mode == null) {
      // Check the top level mode
      mode = repository.getMode(ModeRepository.SUBJECT_WILDCARD);
    }
    if (mode == Mode.READONLY) {
      ResourceActionKey key = new ResourceActionKey(resource, restMethod);
      if (subjectWriteActions.contains(key)) {
        requestContext.abortWith(
            Response.status(Response.Status.UNAUTHORIZED)
                .entity("Subject is read-only.")
                .build());
      }
    }
  }

  private static class ResourceActionKey {
    private final Class resourceClass;
    private final String restMethod;

    public ResourceActionKey(Class resourceClass, String restMethod) {
      this.resourceClass = resourceClass;
      this.restMethod = restMethod;
    }
    ...
  }
}
Finally, the resource extension simply creates the mode repository and then registers the resource and the filter:
public class SchemaRegistryModeResourceExtension
    implements SchemaRegistryResourceExtension {
  private ModeRepository repository;

  @Override
  public void register(
      Configurable<?> configurable,
      SchemaRegistryConfig schemaRegistryConfig,
      SchemaRegistry schemaRegistry
  ) {
    repository = new ModeRepository(schemaRegistryConfig);
    configurable.register(new ModeResource(repository, schemaRegistry));
    configurable.register(new ModeFilter(repository));
  }

  @Override
  public void close() throws IOException {
    repository.close();
  }
}
The complete source code listing can be found here.
To use our new resource extension, we first copy the extension jar (and the KCache jar) to ${CONFLUENT_HOME}/share/java/schema-registry. Next we add the following to ${CONFLUENT_HOME}/etc/schema-registry/schema-registry.properties:
kafkacache.bootstrap.servers=localhost:9092
resource.extension.class=io.yokota.schemaregistry.mode.SchemaRegistryModeResourceExtension
Now after we start the Schema Registry, we can save and retrieve modes for subjects via the REST API:
$ curl -X PUT -H "Content-Type: application/json" \
$ curl localhost:8081/mode/topic-key
{"mode":"READONLY"}
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{ "schema": "{ \"type\": \"string\" }" }' \
  http://localhost:8081/subjects/topic-key/versions
It works!
### A Schema Registry Browser
Resource extensions can not only be used to add new REST APIs to the Schema Registry; they can also be used to add entire web-based user interfaces. As a demonstration, I’ve developed a resource extension that provides a Schema Registry browser by bundling a single-page application based on Vue.js, which resides here. To use the Schema Registry browser, place the resource extension jar in ${CONFLUENT_HOME}/share/java/schema-registry and then add the following properties to ${CONFLUENT_HOME}/etc/schema-registry/schema-registry.properties:1
resource.static.locations=static
resource.extension.class=io.yokota.schemaregistry.browser.SchemaRegistryBrowserResourceExtension
The resource extension merely indicates which URLs should use the static resources:
public class SchemaRegistryBrowserResourceExtension
implements SchemaRegistryResourceExtension {
@Override
public void register(
Configurable<?> configurable,
SchemaRegistryConfig schemaRegistryConfig,
SchemaRegistry kafkaSchemaRegistry
) throws SchemaRegistryException {
configurable.property(ServletProperties.FILTER_STATIC_CONTENT_REGEX,
"/(static/.*|.*\\.html|.*\\.js)");
}
@Override
public void close() {
}
}
Once we start the Schema Registry and navigate to http://localhost:8081/index.html, we will be greeted with the home page of the Schema Registry browser:
From the Entities dropdown, we can navigate to a listing of all subjects:
If we view a specific subject, we can see the complete version history for the subject, with schemas nicely formatted:
I haven’t shown all the functionality of the Schema Registry browser, but it supports all the APIs that are available through REST, including creating schemas, retrieving schemas by ID, etc.
Hopefully this post has inspired you to create your own resource extensions for the Confluent Schema Registry. You might even have fun. 🙂
# KCache: An In-Memory Cache Backed by Kafka
Last year, Jay Kreps wrote a great article titled It’s Okay to Store Data in Apache Kafka, in which he discusses a variety of ways to use Kafka as a persistent store. In one of the patterns, Jay describes how Kafka can be used to back an in-memory cache:
You may have an in-memory cache in each instance of your application that is fed by updates from Kafka. A very simple way of building this is to make the Kafka topic log compacted, and have the app simply start fresh at offset zero whenever it restarts to populate its cache.
After reading the above, you may be thinking, “That’s what I want to do. How do I do it?”
I will now describe exactly how.
Roughly, there are two approaches.
In the first approach, you would do the following (a rough code sketch follows the list):
1. Create a compacted topic in Kafka.
2. Create a Kafka consumer that will never commit its offsets, and will start by reading from the beginning of the topic.
3. Have the consumer initially read all records and insert them, in order, into an in-memory cache.
4. Have the consumer continue to poll the topic in a background thread, updating the cache with new records.
5. To insert a new record into the in-memory cache, use a Kafka producer to send the record to the topic, and wait for the consumer to read the new record and update the cache.
6. To remove a record from the in-memory cache, use a Kafka producer to send a record with the given key and a null value (such a record is called a tombstone), and wait for the consumer to read the tombstone and remove the corresponding record from the cache.
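To make the first approach concrete, here is a minimal, illustrative sketch. This is not the KCache implementation; the class name and configuration are hypothetical, the topic is assumed to already exist with cleanup.policy=compact, and error handling, rebalancing, and the "wait until the consumer has read the record back" step are all omitted for brevity.
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the manual approach: an in-memory map fed by a compacted topic.
public class NaiveKafkaBackedCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final KafkaProducer<String, String> producer;
    private final KafkaConsumer<String, String> consumer;
    private final String topic;

    public NaiveKafkaBackedCache(String bootstrapServers, String topic) {
        this.topic = topic;

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        producer = new KafkaProducer<>(producerProps, new StringSerializer(), new StringSerializer());

        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "cache-" + UUID.randomUUID()); // fresh group every start
        consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");            // never commit offsets
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");          // start from offset zero
        consumer = new KafkaConsumer<>(consumerProps, new StringDeserializer(), new StringDeserializer());
        consumer.subscribe(Collections.singletonList(topic));

        // Background thread: replay the compacted log into the map, then keep applying new records.
        Thread poller = new Thread(() -> {
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(100))) {
                    if (rec.value() == null) {
                        cache.remove(rec.key());           // tombstone => remove entry
                    } else {
                        cache.put(rec.key(), rec.value()); // upsert entry
                    }
                }
            }
        });
        poller.setDaemon(true);
        poller.start();
    }

    public String get(String key) {
        return cache.get(key);
    }

    public void put(String key, String value) {
        // A real implementation would block until the background consumer has read this record back.
        producer.send(new ProducerRecord<>(topic, key, value));
        producer.flush();
    }

    public void remove(String key) {
        producer.send(new ProducerRecord<>(topic, key, null)); // tombstone
        producer.flush();
    }
}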
In the second approach, you would instead use KCache, a small library that neatly wraps all of the above functionality behind a simple java.util.Map interface.
import io.kcache.*;
String bootstrapServers = "localhost:9092";
Cache<String, String> cache = new KafkaCache<>(
bootstrapServers,
Serdes.String(), // for serializing/deserializing keys
Serdes.String() // for serializing/deserializing values
);
cache.init(); // creates topic, initializes cache, consumer, and producer
cache.put("Kafka", "Rocks");
String value = cache.get("Kafka"); // returns "Rocks"
cache.remove("Kafka");
cache.close(); // shuts down the cache, consumer, and producer
That’s it!
The choice is yours. 🙂
# KDatalog: Kafka as a Datalog Engine
In previous posts, I had discussed how Kafka can be used for both stream-relational processing as well as graph processing. I will now show how Kafka can also be used to process Datalog programs.
### The Datalog Language
Datalog 1 is a declarative logic programming language that is used as the query language for databases such as Datomic and LogicBlox.2 Both relational queries and graph-based queries can be expressed in Datalog, so in one sense it can be seen as a unifying or foundational database language. 3
### Finite Model Theory
To mathematicians, both graphs and relations can be viewed as finite models. Finite model theory has been described as “the backbone of database theory”4. When SQL was developed, first-order logic was used as its foundation since it was seen as sufficient for expressing any database query that might ever be needed. Later theoreticians working in finite model theory were better able to understand the limits on the expressivity of first-order logic.
For example, Datalog was shown to be able to express queries involving transitive closure, which cannot be expressed in first-order logic nor in relational algebra, since relational algebra is equivalent to first-order logic.5 This led to the addition of the WITH RECURSIVE clause to SQL-99 in order to support transitive closure queries against relational databases.
### Facts and Rules
Below is an example Datalog program.
parent(’Isabella’, ’Ella’).
parent(’Ella’, ’Ben’).
parent(’Daniel’, ’Ben’).
sibling(X, Y) :- parent(X, Z), parent(Y, Z), X != Y.
The Datalog program above consists of 3 facts and 1 rule.6 A rule has a head and a body (separated by the :- symbol). The body is itself a conjunction of subgoals (separated by commas). The symbols X, Y, and Z are variables, while ‘Isabella’, ‘Ella’, ‘Ben’, and ‘Daniel’ are constants.
In the above program, the facts state that Isabella has a parent named Ella, Ella has a parent named Ben, and Daniel has a parent named Ben. The rule states that if there exist two people (X and Y) who are not the same (X != Y) and who have the same parent (Z), then they are siblings. Thus we can see that Ella and Daniel are siblings.
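Recalling the transitive-closure example mentioned earlier, reachability over an edge relation can be written with two rules of the same shape (the predicate names here are only illustrative, not part of the program above):
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).
The second rule refers to path itself; this recursion is exactly what relational algebra and plain first-order queries cannot express.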
The initial facts of a Datalog program are often referred to as the extensional database (EDB), while the remaining rules define the intensional database (IDB). When evaluating a Datalog program, we use the extensional database as input to the rules in order to output the intensional database.
### Datalog Evaluation
There are several ways to evaluate a Datalog program. One of the more straightforward algorithms is called semi-naive evaluation. In semi-naive evaluation, we use the most recently generated facts ($\Delta_\text{old}$) to satisfy one subgoal, and all facts $F$ to satisfy the remaining subgoals, which generates a new set of facts ($\Delta_\text{new}$). These new facts are then used for the next evaluation round, and we continue iteratively until no new facts are derived.7 The algorithm is shown below, where $\textbf{EVAL-INCR}$ is the method of satisfying subgoals just described.
$\line(1,0){500} \\ \texttt{// Input: } F \texttt{ (set of Datalog EDB facts),} \\ \texttt{//}\hspace{56pt} R \texttt{ (set of Datalog rules with non-empty body)} \\ \Delta_\text{old} := F \\ \textbf{while } \Delta_\text{old} \neq \emptyset \\ \indent\Delta_\text{new} := \textbf{EVAL-INCR}(R,F,\Delta_\text{old}) \\ \indent F := F \cup \Delta_\text{new} \\ \indent\Delta_\text{old} := \Delta_\text{new} \\ \texttt{output } F \\ \line(1,0){500}$
Semi-naive evaluation is essentially how incremental view maintenance is performed in relational databases.
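To make the iteration concrete, here is how semi-naive evaluation plays out on the sibling program above (this trace is only an illustration of the algorithm, not output from any tool). Initially $\Delta_\text{old} = F = \{\text{parent('Isabella', 'Ella')}, \text{parent('Ella', 'Ben')}, \text{parent('Daniel', 'Ben')}\}$. In the first round, matching one subgoal against $\Delta_\text{old}$ and the remaining subgoals against $F$ derives $\Delta_\text{new} = \{\text{sibling('Ella', 'Daniel')}, \text{sibling('Daniel', 'Ella')}\}$, which is added to $F$. In the second round no new facts can be derived, so $\Delta_\text{new} = \emptyset$ and the evaluation terminates with $F$ containing the three parent facts and the two sibling facts.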
So how might we use Kafka to evaluate Datalog programs? Well, it turns out that researchers have shown how to perform distributed semi-naive evaluation of Datalog programs on Pregel, Google’s framework for large-scale graph processing. Since Kafka Graphs supports Pregel, we can use Kafka Graphs to evaluate Datalog programs as well.
### Datalog on Pregel
The general approach for implementing distributed semi-naive evaluation on Pregel is to treat each constant in the facts, such as ‘Ella’ and ‘Daniel’, as a vertex in a complete graph.8 The value of each vertex will be the complete set of rules $R$ as well as the subset of facts $F$ that reference the constant represented by the vertex. During each round of the semi-naive algorithm, a given vertex will resolve the subgoals with the facts that it has and then send facts or partial rules to other vertices if necessary. That means that each round will be comprised of one or more Pregel supersteps, since vertices receive messages that were sent in the previous superstep.
Beyond the straightforward approach described above9, many enhancements can be made, such as efficient grouping of vertices into super-vertices, rule rewriting, and support for recursive aggregates.10
### Datalog Processing With Kafka Streams
Based on existing research, we can adapt the above approach to Kafka Graphs, which I call KDatalog.11
Assuming our Datalog program is in a file named siblings.txt, we first use KDatalog to parse the program and construct a graph consisting of vertices for the constants in the program, with edges between every pair of vertices. As mentioned, the initial value of the vertex for a given constant $c$ will consist of the set of rules $R$ as well as the subset of facts $F$ that contain $c$.
StreamsBuilder builder = new StreamsBuilder();
Properties producerConfig = ...
KGraph<String, String, String> graph = ...
We can then use this graph with KDatalog’s implementation of distributed semi-naive evaluation on Pregel, which is called DatalogComputation. After the computation terminates, the output of the algorithm will be the union of the values at every vertex.
Since KDatalog uses Kafka Graphs, and Kafka Graphs is built with Kafka Streams, we are essentially using Kafka Streams as a distributed Datalog engine.
### The Future of Datalog
Datalog has recently enjoyed somewhat of a resurgence, as it has found successful application in a wide variety of areas, including data integration, information extraction, networking, program analysis, security, and cloud computing.12 Datalog has even been used to express conflict-free replicated data types (CRDTs) in distributed systems. If interest in Datalog continues to grow, perhaps Kafka will play an important role in helping enterprises use Datalog for their business needs.
# Kafka Graphs: Graph Analytics with Apache Kafka
As the 2018 Apache Kafka Report has shown, Kafka has become mission-critical to enterprises of all sizes around the globe. Although there are many similar technologies in the field today, none have the equivalent of the thriving ecosystem that has developed around Kafka. Frameworks like Kafka Connect, Kafka Streams, and KSQL have enabled a much wider variety of scenarios to be addressed by Kafka. We are witnessing the growth of an entire technology market, distributed streaming, that resembles how the relational database market grew to take hold of enterprises at the end of the last century.
Kafka Graphs is a new framework that extends Kafka Streams to provide distributed graph analytics. It provides both a library for graph transformations as well as a distributed platform for executing graph algorithms. Kafka Graphs was inspired by other platforms for graph analytics, such as Apache Flink Gelly, Apache Spark GraphX, and Apache Giraph, but unlike these other frameworks it does not require anything other than what is already provided by the Kafka abstraction funnel.
### Graph Representation and Transformations
A graph in Kafka Graphs is represented by two tables from Kafka Streams, one for vertices and one for edges. The vertex table is comprised of an ID and a vertex value, while the edge table is comprised of a source ID, target ID, and edge value.
KTable<Long, Long> vertices = ...
KTable<Edge<Long>, Long> edges = ...
KGraph<Long, Long, Long> graph = new KGraph<>(
vertices,
edges,
GraphSerialized.with(Serdes.Long(), Serdes.Long(), Serdes.Long())
);
Once a graph is created, graph transformations can be performed on it. For example, the following will compute the sum of the values of all incoming neighbors for each vertex.
graph.reduceOnNeighbors(new SumValues(), EdgeDirection.IN);
### Pregel-Based Graph Algorithms
Kafka Graphs provides a number of graph algorithms based on the vertex-centric approach of Pregel. The vertex-centric approach allows a computation to “think like a vertex” so that it only need consider how the value of a vertex should change based on messages sent from other vertices. The following algorithms are provided by Kafka Graphs:
1. Breadth-first search (BFS): given a source vertex, determines the minimum number of hops to reach every other vertex.
2. Label propagation (LP): finds communities in a graph by propagating labels between neighbors.
3. Local clustering coefficient (LCC): computes the degree of clustering for each vertex as determined by the ratio between the number of triangles a vertex closes with its neighbors to the maximum number of triangles it could close.
4. Multiple-source shortest paths (MSSP): given a set of source vertices, finds the shortest paths from these vertices to all other vertices.
5. PageRank (PR): measures the rank or popularity of each vertex by propagating influence between vertices.
6. Single-source shortest paths (SSSP): given a source vertex, finds the shortest paths to all other vertices.
7. Weakly connected components (WCC): determines the weakly connected component for each vertex.
For example, here is the implementation of the single-source shortest paths (SSSP) algorithm:
public final class SSSPComputeFunction
implements ComputeFunction<Long, Double, Double, Double> {
public void compute(
int superstep,
VertexWithValue<Long, Double> vertex,
Map<Long, Double> messages,
Iterable<EdgeWithValue<Long, Double>> edges,
Callback<Long, Double, Double> cb) {
double minDistance = vertex.id().equals(srcVertexId)
? 0d : Double.POSITIVE_INFINITY;
for (Double message : messages.values()) {
minDistance = Math.min(minDistance, message);
}
if (minDistance < vertex.value()) {
cb.setNewVertexValue(minDistance);
for (EdgeWithValue<Long, Double> edge : edges) {
double distance = minDistance + edge.value();
cb.sendMessageTo(edge.target(), distance);
}
}
}
}
Custom Pregel-based graph algorithms can also be added by implementing the ComputeFunction interface.
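As an illustrative sketch of such a custom algorithm, here is a simple "propagate the maximum vertex value" function written against the same compute signature used by the SSSP example above; it is a hypothetical example for this post, not an algorithm shipped with Kafka Graphs.
// Hypothetical example: flood the largest vertex value seen so far through the graph.
public final class MaxValueComputeFunction
    implements ComputeFunction<Long, Double, Double, Double> {
  public void compute(
      int superstep,
      VertexWithValue<Long, Double> vertex,
      Map<Long, Double> messages,
      Iterable<EdgeWithValue<Long, Double>> edges,
      Callback<Long, Double, Double> cb) {
    // Start from the vertex's own value and fold in any incoming messages.
    double max = vertex.value();
    for (Double message : messages.values()) {
      max = Math.max(max, message);
    }
    // Send messages only when the value grows (or on the first superstep),
    // so that the computation eventually converges.
    if (superstep == 0 || max > vertex.value()) {
      cb.setNewVertexValue(max);
      for (EdgeWithValue<Long, Double> edge : edges) {
        cb.sendMessageTo(edge.target(), max);
      }
    }
  }
}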
### Distributed Graph Processing
Since Kafka Graphs is built on top of Kafka Streams, it is able to leverage the underlying partitioning scheme of Kafka Streams in order to support distributed graph processing. To facilitate running graph algorithms in a distributed manner, Kafka Graphs provides a REST application for managing graph algorithm executions.
java -jar kafka-graphs-rest-app-0.1.0.jar \
--kafka.graphs.bootstrapServers=localhost:9092 \
--kafka.graphs.zookeeperConnect=localhost:2181
When multiple instantiations of the REST application are started on different hosts, all configured with the same Kafka and ZooKeeper servers, they will automatically coordinate with each other to partition the set of vertices when executing a graph algorithm. When a REST request is sent to one host, it will automatically proxy the request to the other hosts if necessary.
### Summary
Kafka Graphs is a new addition to the rapidly expanding ecosystem surrounding Apache Kafka. Kafka Graphs is still in its early stages, but please feel free to try it and suggest improvements if you have a need to perform distributed graph analytics with Kafka.
# Embedding Kafka Connect in Kafka Streams + KSQL
Previously I presented the Kafka abstraction funnel and how it provides a simple yet powerful tool for writing applications that use Apache Kafka. In this post I will show how these abstractions also provide a straightforward means of interfacing with Kafka Connect, so that applications that use Kafka Streams and KSQL can easily integrate with external systems like MySQL, Elasticsearch, and others.1
This is a somewhat lengthy post, so feel free to skip to the summary below.
### Twins Separated at Birth?
Normally when using Kafka Connect, one would launch a cluster of Connect workers to run a combination of source connectors, that pull data from an external system into Kafka, and sink connectors, that push data from Kafka to an external system. Once the data is in Kafka, one could then use Kafka Streams or KSQL to perform stream processing on the data.
Let’s take a look at two of the primary abstractions in Kafka Connect, the SourceTask and the SinkTask. The heart of the SourceTask class is the poll() method, which returns data from the external system.
/**
* SourceTask is a Task that pulls records from another system for
* storage in Kafka.
*/
...
public abstract void start(Map<String, String> props);
public abstract List<SourceRecord> poll()
throws InterruptedException;
public abstract void stop();
...
}
Likewise, the heart of the SinkTask is the put(Collection<SinkRecord> records) method, which sends data to the external system.
/**
 * SinkTask is a Task that takes records loaded from Kafka and
 * sends them to another system.
*/
...
public abstract void start(Map<String, String> props);
public abstract void put(Collection<SinkRecord> records);
public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
}
public abstract void stop();
...
}
Do those Kafka Connect classes remind us of anything in the Kafka abstraction funnel? Yes, indeed: the Consumer and Producer interfaces. Here is the Consumer:
/**
* A client that consumes records from a Kafka cluster.
*/
public interface Consumer<K, V> extends Closeable {
...
public void subscribe(
Pattern pattern, ConsumerRebalanceListener callback);
public ConsumerRecords<K, V> poll(long timeout);
public void unsubscribe();
public void close();
...
}
And here is the Producer:
/**
* A Kafka client that publishes records to the Kafka cluster.
*/
public interface Producer<K, V> extends Closeable {
...
Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback);
void flush();
void close();
...
}
There are other methods in those interfaces that I’ve elided, but the above methods are the primary ones used by Kafka Streams.
### Connect, Meet Streams
The Connect APIs and the Producer-Consumer APIs are very similar. Perhaps we can create implementations of the Producer-Consumer APIs that merely delegate to the Connect APIs. But how would we plug in our new implementations into Kafka Streams? Well, it turns out that Kafka Streams allows you to implement an interface that instructs it where to obtain a Producer and a Consumer:
/**
* {@code KafkaClientSupplier} can be used to provide custom Kafka clients
* to a {@link KafkaStreams} instance.
*/
public interface KafkaClientSupplier {
Producer<byte[], byte[]> getProducer(Map<String, Object> config);
Consumer<byte[], byte[]> getConsumer(Map<String, Object> config);
...
}
That’s just what we want. Let’s try implementing a new Consumer that delegates to a Connect SourceTask. We’ll call it ConnectSourceConsumer.
public class ConnectSourceConsumer implements Consumer<byte[], byte[]> {
private final Converter keyConverter;
private final Converter valueConverter;
...
public ConsumerRecords<byte[], byte[]> poll(long timeout) {
// Poll the Connect source task
return records != null
? new ConsumerRecords<>(convertRecords(records))
: ConsumerRecords.empty();
}
// Convert the Connect records into Consumer records
private ConsumerRecords<byte[], byte[]> convertRecords(
List<SourceRecord> records) {
for (final SourceRecord record : records) {
byte[] key = keyConverter.fromConnectData(
record.topic(), record.keySchema(), record.key());
byte[] value = valueConverter.fromConnectData(
record.topic(), record.valueSchema(), record.value());
int partition = record.kafkaPartition() != null
? record.kafkaPartition() : 0;
final ConsumerRecord<byte[], byte[]> consumerRecord =
new ConsumerRecord<>(
record.topic(),
partition,
...
key,
value);
TopicPartition tp = new TopicPartition(
record.topic(), partition);
List<ConsumerRecord<byte[], byte[]>> consumerRecords =
result.computeIfAbsent(tp, k -> new ArrayList<>());
}
return new ConsumerRecords<>(result);
}
...
}
And here is the new Producer that delegates to a Connect SinkTask, called ConnectSinkProducer.
public class ConnectSinkProducer implements Producer<byte[], byte[]> {
private final Converter keyConverter;
private final Converter valueConverter;
private final List<SinkRecord> recordBatch;
...
public Future<RecordMetadata> send(
    ProducerRecord<byte[], byte[]> record, Callback callback) {
convertRecords(Collections.singletonList(record));
...
}
// Convert the Connect records into Producer records
private void convertRecords(
List<ProducerRecord<byte[], byte[]>> records) {
for (ProducerRecord<byte[], byte[]> record : records) {
SchemaAndValue keyAndSchema = record.key() != null
? keyConverter.toConnectData(record.topic(), record.key())
: SchemaAndValue.NULL;
SchemaAndValue valueAndSchema = record.value() != null
? valueConverter.toConnectData(
record.topic(), record.value())
: SchemaAndValue.NULL;
int partition = record.partition() != null
? record.partition() : 0;
SinkRecord producerRecord = new SinkRecord(
record.topic(), partition,
keyAndSchema.schema(), keyAndSchema.value(),
valueAndSchema.schema(), valueAndSchema.value(),
...);
}
}
public void flush() {
deliverRecords();
}
private void deliverRecords() {
// Finally, deliver this batch to the sink
try {
recordBatch.clear();
} catch (RetriableException e) {
// The batch will be reprocessed on the next loop.
} catch (Throwable t) {
throw new ConnectException("Unrecoverable exception:", t);
}
}
...
}
With our new classes in hand, let’s implement KafkaClientSupplier so we can let Kafka Streams know about them.
public class ConnectClientSupplier implements KafkaClientSupplier {
...
public Producer<byte[], byte[]> getProducer(
final Map<String, Object> config) {
return new ConnectSinkProducer(config);
}
public Consumer<byte[], byte[]> getConsumer(
final Map<String, Object> config) {
return new ConnectSourceConsumer(config);
}
...
}
### Words Without Counts
We now have enough to run a simple function using Kafka Connect embedded in Kafka Streams.2 Let’s give it a whirl.
The following code performs the first half of a WordCount application, where the input is a stream of lines of text, and the output is a stream of words. However, instead of using Kafka for input/output, we use the JDBC Connector to read from a database table and write to another.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "simple");
props.put(StreamsConfig.CLIENT_ID_CONFIG, "simple-example-client");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
...
StreamsConfig streamsConfig = new StreamsConfig(props);
Map<String, Object> config = new HashMap<>();
...
ConnectStreamsConfig connectConfig = new ConnectStreamsConfig(config);
// The JDBC source task configuration
Map<String, Object> config = new HashMap<>();
config.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, jdbcUrl);
// The JDBC sink task configuration
Map<String, Object> config = new HashMap<>();
config.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, jdbcUrl);
StreamsBuilder builder = new StreamsBuilder();
KStream<SchemaAndValue, SchemaAndValue> input =
builder.stream(inputTopic);
KStream<SchemaAndValue, SchemaAndValue> output = input
.flatMapValues(value -> {
Struct lines = (Struct) value.value();
String[] strs = lines.get("lines").toString()
.toLowerCase().split("\\W+");
List<SchemaAndValue> result = new ArrayList<>();
for (String str : strs) {
if (str.length() > 0) {
Schema schema = SchemaBuilder.struct().name("word")
.field("word", Schema.STRING_SCHEMA).build();
Struct struct = new Struct(schema).put("word", str);
result.add(new SchemaAndValue(schema, struct));
}
}
return result;
});
output.to(outputTopic);
streams = new KafkaStreams(builder.build(), streamsConfig,
new ConnectClientSupplier("JDBC", connectConfig,
streams.start();
}
When we run the above example, it executes without involving Kafka at all!
### This Is Not A Pipe3
Now that we have the first half of a WordCount application, let’s complete it. Normally we would just add a groupBy function to the above example, however that won’t work with our JDBC pipeline. The reason can be found in the JavaDoc for groupBy:
Because a new key is selected, an internal repartitioning topic will be created in Kafka. This topic will be named “${applicationId}-XXX-repartition”, where “applicationId” is user-specified in StreamsConfig via parameter APPLICATION_ID_CONFIG, “XXX” is an internally generated name, and “-repartition” is a fixed suffix.
So Kafka Streams will try to create a new topic, and it will do so by using the AdminClient obtained from KafkaClientSupplier. Therefore, we could, if we chose to, create an implementation of AdminClient that creates database tables instead of Kafka topics. However, if we remember to think of Kafka as Unix pipes, all we want to do is restructure our previous WordCount pipeline from this computation:
to the following computation, where the database interactions are no longer represented by pipes, but rather by stdin and stdout:
This allows Kafka Streams to use Kafka Connect without going through Kafka as an intermediary. In addition, there is no need for a cluster of Connect workers as the Kafka Streams layer is directly instantiating and managing the necessary Connect components. However, wherever there is a pipe (|) in the above pipeline, we still want Kafka to hold the intermediate results.
### WordCount With Kafka Connect
So let’s continue to use an AdminClient that is backed by Kafka. However, if we want to use Kafka for intermediate results, we need to modify the APIs in ConnectClientSupplier. We will now need this class to return instances of Producer and Consumer that delegate to Kafka Connect for the stream input and output, but to Kafka for intermediate results that are produced within the stream.
public class ConnectClientSupplier implements KafkaClientSupplier {
private DefaultKafkaClientSupplier defaultSupplier =
new DefaultKafkaClientSupplier();
private String connectorName;
private ConnectStreamsConfig connectStreamsConfig;
public ConnectClientSupplier(
String connectorName, ConnectStreamsConfig connectStreamsConfig,
this.connectorName = connectorName;
this.connectStreamsConfig = connectStreamsConfig;
}
...
@Override
public Producer<byte[], byte[]> getProducer(
Map<String, Object> config) {
ProducerConfig producerConfig = new ProducerConfig(
new ByteArraySerializer(), new ByteArraySerializer()));
Map<String, ConnectSinkProducer> connectProducers =
.collect(Collectors.toMap(Map.Entry::getKey,
e -> ConnectSinkProducer.create(
connectorName, connectStreamsConfig,
e.getValue(), producerConfig)));
// Return a Producer that delegates to Connect or Kafka
return new WrappedProducer(
connectProducers, defaultSupplier.getProducer(config));
}
@Override
public Consumer<byte[], byte[]> getConsumer(
Map<String, Object> config) {
ConsumerConfig consumerConfig = new ConsumerConfig(
new ByteArrayDeserializer(), new ByteArrayDeserializer()));
Map<String, ConnectSourceConsumer> connectConsumers =
.collect(Collectors.toMap(Map.Entry::getKey,
e -> ConnectSourceConsumer.create(
connectorName, connectStreamsConfig,
e.getValue(), consumerConfig)));
// Return a Consumer that delegates to Connect and Kafka
return new WrappedConsumer(
connectConsumers, defaultSupplier.getConsumer(config));
}
...
}
The WrappedProducer simply sends a record to either Kafka or the appropriate Connect SinkTask, depending on the topic or table name.
public class WrappedProducer implements Producer<byte[], byte[]> {
private final Map<String, ConnectSinkProducer> connectProducers;
private final Producer<byte[], byte[]> kafkaProducer;
@Override
public Future<RecordMetadata> send(
    ProducerRecord<byte[], byte[]> record, Callback callback) {
String topic = record.topic();
ConnectSinkProducer connectProducer = connectProducers.get(topic);
if (connectProducer != null) {
// Send to Connect
return connectProducer.send(record, callback);
} else {
// Send to Kafka
return kafkaProducer.send(record, callback);
}
}
...
}
The WrappedConsumer simply polls Kafka and all the Connect SourceTask instances and then combines their results.
public class WrappedConsumer implements Consumer<byte[], byte[]> {
private final Map<String, ConnectSourceConsumer> connectConsumers;
private final Consumer<byte[], byte[]> kafkaConsumer;
@Override
public ConsumerRecords<byte[], byte[]> poll(long timeout) {
Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> records
= new HashMap<>();
// Poll from Kafka
poll(kafkaConsumer, timeout, records);
for (ConnectSourceConsumer consumer : connectConsumers.values()) {
// Poll from Connect
poll(consumer, timeout, records);
}
return new ConsumerRecords<>(records);
}
private void poll(
Consumer<byte[], byte[]> consumer, long timeout,
Map<TopicPartition, List<ConsumerRecord<byte[], byte[]>>> records) {
try {
ConsumerRecords<byte[], byte[]> rec = consumer.poll(timeout);
for (TopicPartition tp : rec.partitions()) {
records.put(tp, rec.records(tp));
}
} catch (Exception e) {
log.error("Could not poll consumer", e);
}
}
...
}
Finally we can use groupBy to implement WordCount.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "simple");
props.put(StreamsConfig.CLIENT_ID_CONFIG, "simple-example-client");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
...
StreamsConfig streamsConfig = new StreamsConfig(props);
Map<String, Object> config = new HashMap<>();
...
ConnectStreamsConfig connectConfig = new ConnectStreamsConfig(config);
// The JDBC source task configuration
Map<String, Object> config = new HashMap<>();
config.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, jdbcUrl);
// The JDBC sink task configuration
Map<String, Object> config = new HashMap<>();
config.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, jdbcUrl);
StreamsBuilder builder = new StreamsBuilder();
KStream<SchemaAndValue, SchemaAndValue> input = builder.stream(inputTopic);
KStream<SchemaAndValue, String> words = input
.flatMapValues(value -> {
Struct lines = (Struct) value.value();
String[] strs = lines.get("lines").toString()
.toLowerCase().split("\\W+");
List<String> result = new ArrayList<>();
for (String str : strs) {
if (str.length() > 0) {
result.add(str);
}
}
return result;
});
KTable<String, Long> wordCounts = words
.groupBy((key, word) -> word,
Serialized.with(Serdes.String(), Serdes.String()))
.count();
wordCounts.toStream().map(
(key, value) -> {
Schema schema = SchemaBuilder.struct().name("word")
.field("word", Schema.STRING_SCHEMA)
.field("count", Schema.INT64_SCHEMA).build();
Struct struct = new Struct(schema)
.put("word", key).put("count", value);
return new KeyValue<>(SchemaAndValue.NULL,
new SchemaAndValue(schema, struct));
}).to(outputTopic);
streams = new KafkaStreams(builder.build(), streamsConfig,
new ConnectClientSupplier("JDBC", connectConfig,
streams.start();
When we run this WordCount application, it will use the JDBC Connector for input and output, and Kafka for intermediate results, as expected.
### Connect, Meet KSQL
Since KSQL is built on top of Kafka Streams, with the above classes we get integration between Kafka Connect and KSQL for free, thanks to the Kafka abstraction funnel.
For example, here is a KSQL program to retrieve word counts that are greater than 100.
Map<String, Object> config = new HashMap<>();
...
ConnectStreamsConfig connectConfig = new ConnectStreamsConfig(config);
// The JDBC source task configuration
Map<String, Object> config = new HashMap<>();
config.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, jdbcUrl);
KafkaClientSupplier clientSupplier =
new ConnectClientSupplier("JDBC", connectConfig,
Collections.emptyMap());
ksqlContext = KsqlContext.create(
ksqlConfig, schemaRegistryClient, clientSupplier);
ksqlContext.sql("CREATE STREAM WORD_COUNTS"
+ " (ID int, WORD varchar, WORD_COUNT bigint)"
+ " WITH (kafka_topic='word_counts', value_format='AVRO', key='ID');";
ksqlContext.sql("CREATE STREAM TOP_WORD_COUNTS AS "
+ " SELECT * FROM WORD_COUNTS WHERE WORD_COUNT > 100;";
When we run this example, it uses the JDBC Connector to read its input from a relational database. This is because we pass an instance of ConnectClientSupplier to the KsqlContext factory, in order to instruct the Kafka Streams layer underlying KSQL where to obtain the Producer and Consumer.4
### Summary
With the above examples I’ve been able to demonstrate both
1. the power of clean abstractions, especially as utilized in the Kafka abstraction funnel, and
2. a promising method of integrating Kafka Connect with Kafka Streams and KSQL based on these abstractions.
Ironically, I have also shown that the Kafka abstraction funnel does not need to be tied to Kafka at all. It can be used with any system that provides implementations of the Producer-Consumer APIs, which reside at the bottom of the funnel. In this post I have shown how to plug in Kafka Connect at this level to achieve embedded Kafka Connect functionality within Kafka Streams. However, frameworks other than Kafka Connect could be used as well. In this light, Kafka Streams (as well as KSQL) can be viewed as a general stream processing platform in the same manner as Flink and Spark.
I hope you have enjoyed these excursions into some of the inner workings of Kafka Connect, Kafka Streams, and KSQL.
|
# Not so efficient!
Discrete Mathematics Level 2
Find the coefficient of the term involving $$x^{2}$$ in the expansion $$(x + 3x^{-2})^{8}$$.
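One way to work this out (a sketch): the general term of the expansion is \(\binom{8}{k} x^{8-k} (3x^{-2})^{k} = \binom{8}{k} 3^{k} x^{8-3k}\). The exponent equals \(2\) when \(8-3k=2\), i.e. \(k=2\), so the coefficient of the \(x^{2}\) term is \(\binom{8}{2} \cdot 3^{2} = 28 \cdot 9 = 252\).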
|
A positive integer $n$ is said to be triangular if $n =\sum_{i=0}^{k}{i}$ for some positive integer $k.$ Given $8n+1$ is a square number, show that $n$ is triangular.
Solve $\sum_{i=0}^{k}{i}.$
The condition can be written as $8n+1 = m^2,$ for some natural $m$.
Manipulate the above to a form that allows factorization.
If $(m-1)(m+1)$ is divisible by $8,$ what values can $m$ take?
Would $(m-1)(m+1)$ be divisible by $8$ if $m$ was even?
Use a substitution for $m$ to express that $m$ is odd.
A positive integer $n$ is said to be triangular if $n =\sum_{i=0}^{k}{i}=\frac{k(k+1)}{2}$ for some positive integer $k.$ Since $8n+1$ is a square number, there exists a positive integer $m$ such that $8n + 1 = m^2.$ Using the difference of two squares, we can factorize to get $8n = m^2-1 = (m-1)(m+1),$ so that $n=\frac{(m-1)(m+1)}{8},$ which means that $(m-1)(m+1)$ must be divisible by $8.$ This happens exactly when $m$ is odd: if $m$ is odd, then $m-1$ and $m+1$ are consecutive even numbers, one of which is a multiple of $4,$ so their product is divisible by $8;$ if $m$ were even, both factors would be odd. Substituting $m = 2k+1$ gives us $n = \frac{2k(2k+2)}{8} = \frac{k(k+1)}{2} = \sum_{i=0}^{k}{i},$ so $n$ is triangular.
|
## Ajustement spline le long d’un ensemble de courbes. (Spline adjustment along a set of curves).(French)Zbl 0725.65017
Authors’ summary: For a surface defined by an explicit equation $$x_ 3=f(x_ 1,x_ 2)$$, the problem of constructing a smooth approximant from a finite set of curves given on the surface is studied. As an approximant of f, a “discrete smoothing spline” belonging to a suitable finite element space is proposed. Convergence of the method and numerical results are given.
### MSC:
65D17 Computer-aided design (modeling of curves and surfaces)
65D07 Numerical computation using splines
|
16 questions linked to/from How do you create pull quotes?
911 views
### Wrapping two column text around a centering picture [duplicate]
Possible Duplicate: How do you create pull quotes? Two-column text with circular insert I'ld like to wrap text around a picture which is in the middle of the page, but I don't wanna let emtpy ...
30 views
### What is it called and how to put a sentence in evidence [duplicate]
Newspapers often put a sentence in large font in the middle of the page to attract attention. For example, see this one: How is this form of highlighting called? How can I do it? I have a multicolumn ...
13 views
### Wrapfigure in twocolumn text [duplicate]
I am trying to wrap a figure in two column text, so that it cuts half of one column width and half of the other column. This is my example of MWE: \documentclass[twocolumn]{article} \usepackage{...
5k views
### Two-column text with circular insert
How does one write on the outside of a circle? I am looking for sample files/packages which imitate a magazine lay-out. Specifically a page with two columns having a circular picture between the two ...
2k views
### Implementing a pullquotes algorithm in LaTeX
In pullquotes, I provided a manual technique for typesetting pullquotes in LaTeX. The technique was to typeset the material in two long narrow columns and then position them so as to join them ...
1k views
### Highlight passages from the body as quotes between the paragraphs
In newspaper and magazine articles, striking messages from the main text are often highlighted in form of quotes in larger font size, sometimes also set apart by a bold rule or so. For an article I am ...
737 views
### How can I insert a paragraph or image into the same spot on every page of a memoir doc?
How can I duplicate one feature of this layout from 'House of Leaves' in a 100ish-page book using LaTeX? (I'm presently using memoir for document formatting/structure.) The specific feature I want ...
2k views
### LaTeX: Textbox within article
I'm not completely new to LaTeX but I'm not used to work with special layout schemes, so that's why I thought it was a good idea to post my question here. Momently I'm trying to create an article ...
2k views
### How to create a 'fact box' or pull quote that is tied to the margins of your page?
I'd like to accomplish the following: Ut convallis libero in ------------------- urna ultrices accumsan. | | Donec sed odio eros. | "an important | Donec viverra mi quis |...
2k views
Consider the following code and output. \documentclass{article} \usepackage{lipsum} \usepackage{graphicx} \usepackage{wrapfig} \begin{document} \begin{wrapfigure}{o}{0.5\textwidth} ...
1k views
### Block quote with big quotation marks and opening quote on bottom
I wanted to adapt Block quote with big quotation marks to languages like my natural language German, but also some other mainly Germanic and Slavic languages (c.f. Wikipedia: Non-English usage of ...
2k views
### Defining a custom ‘wrapfig’ environment
[ This is an updated version of https://stackoverflow.com/questions/3233031/latex-defining-a-custom-wrapfig-environment ] The wrapfig package interacts badly with the setup and teardown done by \...
306 views
### Text boxes that go into the the margin in a two column article
I'd like to highlight some important quotes in my article by writing them into floating (invisible) boxes, that go half into the (left/right) margin and half into the respective column. I'd prefer a ...
|
## Results (1-50 of 59 matches)
Next
Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim $\epsilon$ $r$ First zero Origin
2-85176-1.1-c1-0-0 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.270321 Elliptic curve 85176.a 2-85176-1.1-c1-0-1 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.276730$ Elliptic curve 85176.e
2-85176-1.1-c1-0-10 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.390491 Elliptic curve 85176.cd 2-85176-1.1-c1-0-11 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.401232$ Elliptic curve 85176.b
2-85176-1.1-c1-0-12 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.410230 Elliptic curve 85176.i 2-85176-1.1-c1-0-13 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.411553$ Elliptic curve 85176.bb
2-85176-1.1-c1-0-14 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.435823 Elliptic curve 85176.k 2-85176-1.1-c1-0-15 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.448180$ Elliptic curve 85176.h
2-85176-1.1-c1-0-16 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.451513 Elliptic curve 85176.o 2-85176-1.1-c1-0-17 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.478727$ Elliptic curve 85176.bf
2-85176-1.1-c1-0-18 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.481960 Elliptic curve 85176.r 2-85176-1.1-c1-0-19 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.501468$ Elliptic curve 85176.j
2-85176-1.1-c1-0-2 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.278877 Elliptic curve 85176.x 2-85176-1.1-c1-0-20 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.533884$ Elliptic curve 85176.bt
2-85176-1.1-c1-0-21 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.550907 Elliptic curve 85176.bo 2-85176-1.1-c1-0-22 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.553426$ Elliptic curve 85176.ca
2-85176-1.1-c1-0-23 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.562486 Elliptic curve 85176.bi 2-85176-1.1-c1-0-24 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.592918$ Elliptic curve 85176.bp
2-85176-1.1-c1-0-25 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.601916 Elliptic curve 85176.y 2-85176-1.1-c1-0-26 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.620020$ Elliptic curve 85176.q
2-85176-1.1-c1-0-27 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.644405 Elliptic curve 85176.cf 2-85176-1.1-c1-0-28 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $0.655195$ Elliptic curve 85176.d
2-85176-1.1-c1-0-29 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.655228 Elliptic curve 85176.w 2-85176-1.1-c1-0-3 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.296842$ Elliptic curve 85176.bc
2-85176-1.1-c1-0-30 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.663427 Elliptic curve 85176.n 2-85176-1.1-c1-0-31 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.674073$ Elliptic curve 85176.bl
2-85176-1.1-c1-0-32 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.679319 Elliptic curve 85176.p 2-85176-1.1-c1-0-33 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $0.682031$ Elliptic curve 85176.m
2-85176-1.1-c1-0-34 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.686572 Elliptic curve 85176.bw 2-85176-1.1-c1-0-35 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.689055$ Elliptic curve 85176.bv
2-85176-1.1-c1-0-36 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.779720 Elliptic curve 85176.v 2-85176-1.1-c1-0-37 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $0.782527$ Elliptic curve 85176.u
2-85176-1.1-c1-0-38 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.789056 Elliptic curve 85176.f 2-85176-1.1-c1-0-39 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.791305$ Elliptic curve 85176.cg
2-85176-1.1-c1-0-4 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 1 0 0.298362 Elliptic curve 85176.cb 2-85176-1.1-c1-0-40 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $0.846334$ Elliptic curve 85176.s
2-85176-1.1-c1-0-41 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.864069 Elliptic curve 85176.c 2-85176-1.1-c1-0-42 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $0.873650$ Elliptic curve 85176.be
2-85176-1.1-c1-0-43 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.904319 Elliptic curve 85176.ba 2-85176-1.1-c1-0-44 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $0.928843$ Elliptic curve 85176.by
2-85176-1.1-c1-0-45 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.941737 Elliptic curve 85176.t 2-85176-1.1-c1-0-46 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $0.942672$ Elliptic curve 85176.bn
2-85176-1.1-c1-0-47 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.955082 Elliptic curve 85176.bd 2-85176-1.1-c1-0-48 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $0.965688$ Elliptic curve 85176.bk
2-85176-1.1-c1-0-49 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 0.986024 Elliptic curve 85176.bq 2-85176-1.1-c1-0-5 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $1$ $0$ $0.302632$ Elliptic curve 85176.bm
2-85176-1.1-c1-0-50 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 1.02553 Elliptic curve 85176.bh 2-85176-1.1-c1-0-51 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $1.02887$ Elliptic curve 85176.bx
2-85176-1.1-c1-0-52 $26.0$ $680.$ $2$ $2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2}$ 1.1 $$1.0 1 -1 1 1.04097 Elliptic curve 85176.br 2-85176-1.1-c1-0-53 26.0 680. 2 2^{3} \cdot 3^{2} \cdot 7 \cdot 13^{2} 1.1$$ $1.0$ $1$ $-1$ $1$ $1.05862$ Elliptic curve 85176.bu
|
# How to calculate u and d in a binomial tree
Binomial tree graphical option calculator: lets you calculate option prices and view the binomial tree structure used in the calculation. Just as we can write the one-step binomial tree for the underlying security, we can write it for a call option. For a binomial tree, everywhere in Hull and other literature we have found the formulas for u and d, but for binomial trees based on forward prices we get a different formula. The volatility of a non-dividend-paying stock whose price is $78 is 30%. The tree step size is one month, the domestic interest rate… This section discusses how that is achieved. I expect my reader to be familiar with them already.
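Although the text above is fragmentary, the quantities it asks about have a standard definition in the Cox-Ross-Rubinstein parameterization of a binomial tree (other parameterizations, such as Jarrow-Rudd, differ); the numbers below simply reuse the 30% volatility and one-month step mentioned above:
$u = e^{\sigma\sqrt{\Delta t}}, \qquad d = \frac{1}{u} = e^{-\sigma\sqrt{\Delta t}}.$
With $\sigma = 0.30$ and $\Delta t = 1/12$ years, $\sigma\sqrt{\Delta t} \approx 0.30 \times 0.2887 \approx 0.0866$, so $u \approx e^{0.0866} \approx 1.090$ and $d \approx 1/1.090 \approx 0.917$.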
|
# Reference request: Affine Grassmannian and G-bundles
Let $G$ be an affine algebraic group over an algebraically closed field $k$ of zero characteristic. The set of cosets $X_G=G(k((t)))/G(k[[t]])$ is called the Affine Grassmannian of $G$ and can be given the structure of an ind-$k$-variety, so that for a closed embedding of groups $H\to G$, we get a natural morphism $X_H\to X_G$, which is a closed embedding if $G/H$ is affine. I am interested in a reference for this (as detailed as possible), but from a specific perspective, as will be explained in the following.
There are several possible (equivalent) constructions. A direct approach, in the language of ind-varieties, can be found in Kumar's "Infinite grassmannians and Moduli spaces of G-bundles" for example (for a reductive $G$, which is fine for me), which describes the ind-variety structure explicitly. Apart from using some representation theory that I don't know well enough, it also lacks an explicit universal property, which makes it difficult to operate with and in particular to construct morphisms to and from it.
A more abstract approach is to describe a functor $\operatorname{Gr}_G:kAlg\to Set$ for which $X_G$ is the set of $k$-points. Here is were the $G$-bundles (torsors) appear. There are the "global" and "local" approaches, in which roughly, $\operatorname{Gr}_G (A)$ classifies $A$-families of $G$-Bundles on a curve or a formal disc resp. together with a trivializaion away from a point. Now that one has a functor, it is possible to show that it is an ind-scheme. It is this approach that I would like to have a reference for.
I would like to mention that this approach is outlined in Gaitsgory's seminar notes, and a more competent student would probably be able to fill in the details by herself, but unfortunately I find it difficult, so I was hoping there might be a more thorough treatment available.
-
If you provide contact information (e.g., a webpage which lists your email address) then I can send you something. But perhaps someone else will provide a more detailed literature reference (I am not aware of any). – user28172 Jun 10 '13 at 11:17
@nosr, thank you for your help. you can send it to [email protected]. – KotelKanim Jun 10 '13 at 11:37
I sympathize greatly with your last sentence and I wish I'd done something to formalize all the exercises I did over the years while in grad school, but alas, when it came time to write the thesis it wasn't worth making it even longer to include something "standard"...therein, I suspect, lies the problem. – Ryan Reich Jun 10 '13 at 11:52
I believe your second approach is Proposition 2 (p.505) of Heinloth's "Uniformization of G-bundles." (link.springer.com/article/10.1007%2Fs00208-009-0443-4) – expz Jun 10 '13 at 12:12
There is a description in Mirkovic-Vilonen, but it is missing some details: arxiv.org/abs/math/0401222 – S. Carnahan Jun 10 '13 at 12:36
I believe your second approach is Proposition 2 (p.6) of Heinloth's Uniformization of $G$-bundles available from Heinloth's website: http://staff.science.uva.nl/~heinloth/Uniformization_17-8-09.pdf
-
Another reference with a slightly different approach is Drinfeld and Beilinson's Quantization of Hitchin Integrable Systems and Hecke Eigensheaves, available online. The affine Grassmannian is defined in 5.3.11, and it is explained in the preceding remark 5.3.10 c. why it is represented by a formally smooth ind-proper ind-scheme.
Here, the authors keep track of the points away from which the $G$-bundle has a trivialization; this is denoted by $Gr_{X^I}$. They then take the colimit $colim_{I}Gr_{X^I}$, where the colimit is taken over the category $\mathbf{fSet}^{op}$ of all finite sets $I$ with surjections between them.
In the language of Gaitsgory in Contractibility of the Space of Rational Maps, this means that the affine Grassmannian is a pseudo ind-proper pseudo ind-scheme. The main point, I think, is that it is possible to define a well-behaved category of D-modules over such pseudo ind-schemes.
-
The following papers might also be helpful:
G. Faltings, Algebraic loop groups and moduli spaces of bundles, J. Eur. Math. Soc. 5 (2003), 41-68.
G. Pappas, M. Rapoport, Twisted Loop groups and their affine flag varieties, Adv. Math. 219 (2008), no. 1, 118-198
G. Pappas, X. Zhu, Local models of Shimura varieties and a conjecture of Kottwitz, http://arxiv.org/abs/1110.5588v4, to appear in Invent. math.
-
|
• (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pach\'{o}n in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg$^2$ field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5$\sigma$ point-source depth in a single visit in $r$ will be $\sim 24.5$ (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg$^2$ with $\delta<+34.5^\circ$, and will be imaged multiple times in six bands, $ugrizy$, covering the wavelength range 320--1050 nm. About 90\% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe a 18,000 deg$^2$ region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to $r\sim27.5$. The remaining 10\% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world.
• ### Core or cusps: The central dark matter profile of a redshift one strong lensing cluster with a bright central image(1703.08410)
June 2, 2017 astro-ph.CO, astro-ph.GA
We report on SPT-CLJ2011-5228, a giant system of arcs created by a cluster at $z=1.06$. The arc system is notable for the presence of a bright central image. The source is a Lyman Break galaxy at $z_s=2.39$ and the mass enclosed within the 14 arc second radius Einstein ring is $10^{14.2}$ solar masses. We perform a full light profile reconstruction of the lensed images to precisely infer the parameters of the mass distribution. The brightness of the central image demands that the central total density profile of the lens be shallow. By fitting the dark matter as a generalized Navarro-Frenk-White profile---with a free parameter for the inner density slope---we find that the break radius is $270^{+48}_{-76}$ kpc, and that the inner density falls with radius to the power $-0.38\pm0.04$ at 68 percent confidence. Such a shallow profile is in strong tension with our understanding of relaxed cold dark matter halos; dark matter only simulations predict the inner density should fall as $r^{-1}$. The tension can be alleviated if this cluster is in fact a merger; a two halo model can also reconstruct the data, with both clumps (density going as $r^{-0.8}$ and $r^{-1.0}$) much more consistent with predictions from dark matter only simulations. At the resolution of our Dark Energy Survey imaging, we are unable to choose between these two models, but we make predictions for forthcoming Hubble Space Telescope imaging that will decisively distinguish between them.
• We present spectroscopic confirmation of two new lensed quasars via data obtained at the 6.5m Magellan/Baade Telescope. The lens candidates have been selected from the Dark Energy Survey (DES) and WISE based on their multi-band photometry and extended morphology in DES images. Images of DES J0115-5244 show two blue point sources at either side of a red galaxy. Our long-slit data confirm that both point sources are images of the same quasar at $z_{s}=1.64.$ The Einstein Radius estimated from the DES images is $0.51$". DES J2200+0110 is in the area of overlap between DES and the Sloan Digital Sky Survey (SDSS). Two blue components are visible in the DES and SDSS images. The SDSS fiber spectrum shows a quasar component at $z_{s}=2.38$ and absorption compatible with Mg II and Fe II at $z_{l}=0.799$, which we tentatively associate with the foreground lens galaxy. The long-slit Magellan spectra show that the blue components are resolved images of the same quasar. The Einstein Radius is $0.68$" corresponding to an enclosed mass of $1.6\times10^{11}\,M_{\odot}.$ Three other candidates were observed and rejected, two being low-redshift pairs of starburst galaxies, and one being a quasar behind a blue star. These first confirmation results provide an important empirical validation of the data-mining and model-based selection that is being applied to the entire DES dataset.
• ### Distances with <4% Precision from Type Ia Supernovae in Young Star-Forming Environments(1410.0961)
March 26, 2015 astro-ph.CO, astro-ph.GA
The luminosities of Type Ia supernovae (SNe), the thermonuclear explosions of white-dwarf stars, vary systematically with their intrinsic color and the rate at which they fade. From images taken with the Galaxy Evolution Explorer (GALEX), we identified SNe Ia that erupted in environments that have high ultraviolet surface brightness and star-formation surface density. When we apply a steep model extinction law, we calibrate these SNe using their broadband optical light curves to within ~0.065 to 0.075 magnitudes, corresponding to <4% in distance. The tight scatter, probably arising from a small dispersion among progenitor ages, suggests that variation in only one progenitor property primarily accounts for the relationship between their light-curve widths, colors, and luminosities.
• ### Weighing the Giants II: Improved Calibration of Photometry from Stellar Colors and Accurate Photometric Redshifts(1208.0602)
April 19, 2014 astro-ph.CO, astro-ph.IM
We present improved methods for using stars found in astronomical exposures to calibrate both star and galaxy colors as well as to adjust the instrument flat field. By developing a spectroscopic model for the SDSS stellar locus in color-color space, synthesizing an expected stellar locus, and simultaneously solving for all unknown zeropoints when fitting to the instrumental locus, we increase the calibration accuracy of stellar locus matching. We also use a new combined technique to estimate improved flat-field models for the Subaru SuprimeCam camera, forming `star flats' based on the magnitudes of stars observed in multiple positions or through comparison with available SDSS magnitudes. These techniques yield galaxy magnitudes with reliable color calibration (< 0.01 - 0.02 mag accuracy) that enable us to estimate photometric redshift probability distributions without spectroscopic training samples. We test the accuracy of our photometric redshifts using spectroscopic redshifts z_s for ~5000 galaxies in 27 cluster fields with at least five bands of photometry, as well as galaxies in the COSMOS field, finding sigma((z_p - z_s)/(1 + z_s)) ~ 0.03 for the most probable redshift z_p. We show that the full posterior probability distributions for the redshifts of galaxies with five-band photometry exhibit good agreement with redshifts estimated from thirty-band photometry in the COSMOS field. The growth of shear with increasing distance behind each galaxy cluster shows the expected redshift-distance relation for a flat Lambda-CDM cosmology. Photometric redshifts and calibrated colors are used in subsequent papers to measure the masses of 51 galaxy clusters from their weak gravitational shear. We make our Python code for stellar locus matching available at http://big-macs-calibrate.googlecode.com; the code requires only a catalog and filter functions.
• ### All Weather Calibration of Wide Field Optical and NIR Surveys(1312.1916)
Dec. 6, 2013 astro-ph.IM
The science goals for ground-based large-area surveys, such as the Dark Energy Survey, Pan-STARRS, and the Large Synoptic Survey Telescope, require calibration of broadband photometry that is stable in time and uniform over the sky to precisions of a per cent or better. This performance will need to be achieved with data taken over the course of many years, and often in less than ideal conditions. This paper describes a strategy to achieve precise internal calibration of imaging survey data taken in less than photometric conditions, and reports results of an observational study of the techniques needed to implement this strategy. We find that images of celestial fields used in this case study with stellar densities of order one per arcmin-squared and taken through cloudless skies can be calibrated with relative precision of 0.5 per cent (reproducibility). We report measurements of spatial structure functions of cloud absorption observed over a range of atmospheric conditions, and find it possible to achieve photometric measurements that are reproducible to 1 per cent in images that were taken through cloud layers that transmit as little as 25 per cent of the incident optical flux (1.5 magnitudes of extinction). We find, however, that photometric precision below 1 per cent is impeded by the thinnest detectable cloud layers. We comment on implications of these results for the observing strategies of future surveys.
• A survey that can cover the sky in optical bands over wide fields to faint magnitudes with a fast cadence will enable many of the exciting science opportunities of the next decade. The Large Synoptic Survey Telescope (LSST) will have an effective aperture of 6.7 meters and an imaging camera with field of view of 9.6 deg^2, and will be devoted to a ten-year imaging survey over 20,000 deg^2 south of +15 deg. Each pointing will be imaged 2000 times with fifteen second exposures in six broad bands from 0.35 to 1.1 microns, to a total point-source depth of r~27.5. The LSST Science Book describes the basic parameters of the LSST hardware, software, and observing plans. The book discusses educational and outreach opportunities, then goes on to describe a broad range of science that LSST will revolutionize: mapping the inner and outer Solar System, stellar populations in the Milky Way and nearby galaxies, the structure of the Milky Way disk and halo and other objects in the Local Volume, transient and variable objects both at low and high redshift, and the properties of normal and active galaxies at low and high redshift. It then turns to far-field cosmological topics, exploring properties of supernovae to z~1, strong and weak lensing, the large-scale distribution of galaxies and baryon oscillations, and how these different probes may be combined to constrain cosmological models and the physics of dark energy.
• ### Towards More Precise Survey Photometry for PanSTARRS and LSST: Measuring Directly the Optical Transmission Spectrum of the Atmosphere(0708.1364)
Aug. 10, 2007 astro-ph
Motivated by the recognition that variation in the optical transmission of the atmosphere is probably the main limitation to the precision of ground-based CCD measurements of celestial fluxes, we review the physical processes that attenuate the passage of light through the Earth's atmosphere. The next generation of astronomical surveys, such as PanSTARRS and LSST, will greatly benefit from dedicated apparatus to obtain atmospheric transmission data that can be associated with each survey image. We review and compare various approaches to this measurement problem, including photometry, spectroscopy, and LIDAR. In conjunction with careful measurements of instrumental throughput, atmospheric transmission measurements should allow next-generation imaging surveys to produce photometry of unprecedented precision. Our primary concerns are the real-time determination of aerosol scattering and absorption by water along the line of sight, both of which can vary over the course of a night's observations.
|
# Can the real-time Green's function be written in the form of path integral on the real axis? [closed]
In every textbook, the path integral of the Green's function is written in imaginary-time. I wonder whether we could write real-time green function in the path integral form.
All right, after discussing with my professor, I'll answer the question myself. The crucial point is that the real-time Green's function is defined at zero temperature, so the contribution to the two-point function comes only from the ground state, and the factor $$e^{-\beta H}$$ is thrown away. As a result, the remaining time-ordered operator shows up as evolution operators that are not in time-ordered positions (without considering a time contour as in @Vadim's answer, or other more complicated situations). So it cannot be written as a path integral.
|
# [tmql-wg] TMQL Issue: Functions and Predicates as first-class topics
Robert Barta rho at bigpond.net.au
Sun Mar 11 00:06:24 EST 2007
On Sat, Mar 10, 2007 at 04:11:14PM +0100, Lars Heuer wrote:
> - The TM-ish style adds some syntactic noise to functions / predicates
> They can be written with less code. I.e.
>
> TM-ish style:
>
> nr_employees isa tmql:function
> tmql:return : {
> fn:length ( $o <- employer)
> }
>
> Traditional style:
>
> def nr_employees
> return fn:length ($o <- employer)
I definitely buy that argument, that a 'dedicated' function syntax is
more dedicated than a generic TM syntax for functions. That is true by
definition, right? :-)
The downside of a specialized syntax is that - if we want to see these
things as topics, and the TM paradigm allows us to do so anyway - it
takes ANOTHER, ADDITIONAL step to explain that. That step would fall away
if we would use a CTM syntax.
What about allowing this in CTM:
person rho
shoesize 43
which is saying
rho isa person
shoesize: 43
In this vein
tmql:def nr_employees
tmql:return fn:length ( $o <- employer)

or dropping the tmql: prefix

def nr_employees
return fn:length ($o <- employer)
would also be a valid topic syntax.
> The same applies to the documentation for the functions: I assume
> that the docs are written as occurrences, too. But this practice is
> more lavish than inventing a special (?) comment syntax which can be
> used for documentation purposes (like Java Doc, Python docstrings
> etc.)
Same argument as above.
> - Slower to parse
> If a parser sees i.e. "def" it knows that it sees a function /
> predicate / template declaration without involving any TM-related
> operation. If TMQL uses the TM-ish style, an user can expect that
> the TMQL-processor accepts something like:
>
> my-function iko tmql:function
>
> my-return iko tmql:return
>
> nr_employees isa my-function
> my-return: {
> [...]
> }
>
> To understand that "nr_employees" is a TMQL function causes more work
> for a TMQL processor than the traditional style would.
Even if this is allowed, the additional 'work' for the processor would
be minimal. At some point it will make a call to the TM infrastructure
"is nr_employees a tmql:function ?"
You need this functionality anyway 1000000 times when TMQL evaluates.
> The same applies to surrounding tools, like a simple syntax
> highlighter. It would be a bit harder to write a simple syntax
> highlighter without an underlying topic map if everything looks like
> a topic.
OK, how many people would actually subclass a function? And how many
of those who do will be surprised that a syntax colorizer has
problems?
> - Scope?
> The examples I've seen so far are all in the unconstrained scope.
> What happens if the type-instance relationship is scoped? Under
> which conditions are the functions / predicates are executed?
I am sure I could find an example where this is useful :-)))
\rho
|
# Optimum number of cog teeth for two step helical gearbox
Note: Cross-posted at http://community.wolfram.com/groups/-/m/t/137895?p_p_auth=8QnKtT9I.
I am to design a two step gearbox. The first step is to choose the number of teeth in each cog wheel in order to achieve a gear ratio of 17.3. In other words:
(N1 N2)/(n1 n2) == 17.3
where N1 and N2 are the numbers of teeth in gears 1 and 2, and n1 and n2 are the numbers of teeth in pinions 1 and 2. Is it possible to get Mathematica to "guess" the lowest number of teeth possible and still get as close as possible to 17.3? The number of teeth in each pinion must not be lower than 20. In addition, the numbers of cog teeth in the two gears must not share a common divisor, i.e. their only common factor has to be 1.
-
Is this a question about the software Mathematica? It looks like you might have intended to post this at our sister site, math.SE – Verbeia Oct 12 '13 at 20:24
I think in general you might have to enforce the gcd (divisibility) constraint after the fact. For this particular example I get a result on the first try that has those numbers relatively prime, so we can accept that solution.
Anyway, here is the code. For objective function I sum the teeth and add a penalty term for that ratio straying (discrepancing itself?) from 17.3.
obj = c1 + c2 + p1 + p2 + (c1*c2 - 17.3*p1*p2)^2;
cs1 = {p1 >= 20, p2 >= 20, c1 >= 2, c2 >= 2};
In[368]:= {min, vals} =
NMinimize[{obj,
Flatten[{cs1, Element[{c1, c2, p1, p2}, Integers]}]}, {c1, c2, p1,
p2}]
Out[368]= {223., {c1 -> 91, c2 -> 76, p1 -> 20, p2 -> 20}}
As 91 is prime, it is of course relatively prime to 76, so this set of values appears to suit the requirements.
-
Adding MaxIterations -> 500 comes up with a superior solution: {c1 -> 79, c2 -> 92, p1 -> 21, p2 -> 20}; delta -0.0047619 – Mr.Wizard Oct 12 '13 at 21:57
Actually in v7 I get that result anyway. – Mr.Wizard Oct 12 '13 at 21:59
Another approach is to use FindInstance:
FindInstance[(172/10 <= (N1 N2)/(n1 n2) <= 174/10) &&
(n1 >= 20) && (n2 >= 20) && (N1 > 10) && (N2 > 10), {n1, n2, N1, N2}, Integers]
{{n1 -> 20, n2 -> 20, N1 -> 11, N2 -> 630}}
By playing around with the exact criteria used, it is possible to find many answers
FindInstance[(Abs[(N1 N2)/(n1 n2) - 17.3] <= 0.02) && (n1 >= 25) && (n2 >= 25)
&& (N1 > 10) && (N2 > 10) && (n1 + n2 + N1 + N2 < 300), {n1, n2, N1, N2}, Integers]
{{n1 -> 25, n2 -> 25, N1 -> 56, N2 -> 193}}
-
|
## LaTeX forum ⇒ Calendars and Miscellaneous ⇒ bottom line sometimes not drawn
Calendars, invoices, tables, memos, contracts, dictionaries, code snippets
templateuser
Posts: 679
Joined: Tue Mar 03, 2015 4:01 pm
### bottom line sometimes not drawn
Hello,
I'm seeing a little bug in the weekly timetable template. The bottom line of the calendar is not always drawn. For example, this works as expected:
\begin{calendar}{\hsize} \day{}{} \day{}{hello, world} \day{}{} \day{}{} \day{}{} \finishCalendar \end{calendar}
But this omits the bottom line:
\begin{calendar}{\hsize} \day{}{} \day{}{hello, world} \day{}{} \day{}{} \day{}{} \day{}{} \finishCalendar
\end{calendar}
jjfoersch
Last edited by templateuser on Thu Mar 19, 2015 1:59 pm, edited 1 time in total.
Tags:
Vel
Site Moderator
Posts: 455
Joined: Fri Jun 29, 2012 1:20 am
Hi,
It looks like you're right. Odd that the bug only appears with 6 \day{}{} commands and not if there are more or less. I've had a quick look at the calendar.sty file and I don't immediately see what is causing the bug, the line \ifnum\@currentdaynum=6 &\\\hline\else should be printing the bottom line when there are 6 \day{}{} commands.
Ultimately I don't think it matters much since this template has 7 separate sections for each day of the week with a \day{}{} command in each one. Days that are not filled with anything should be left empty, as the Saturday has been in the template.
If you figure out what's causing the bug do let me know and I'll update the template!
Cheers,
Vel
Founder and administrator of LaTeXTemplates.com and LaTeXTypesetting.com
templateuser
Posts: 679
Joined: Tue Mar 03, 2015 4:01 pm
At the point when \finishCalendar gets called, the value of \@currentdaynum will be 1 if \day was not called at all, and 2 through 8 if it was. If the last \day was in the 7th column, the value of the counter is 8, and \day has drawn the \hline. If the last \day was in the 6th column or lower, the value of the counter is (column+1), and \finishCalendar needs to draw that line. \finishCalendar just needs an ifnum branch to handle \@currentdaynum=7. Here is a patch:
http://retroj.net/scratch/handle-day-6-last-day.patch
ftr, I'm using calendar.sty for multi-row calendars, so it's nice not to have to make sure I have an evenly-divisible-by-7 number of \day entries.
jjfoersch
Vel
Site Moderator
Posts: 455
Joined: Fri Jun 29, 2012 1:20 am
Excellent, thanks for the fix! I've updated the template on the site to get rid of this bug.
Founder and administrator of LaTeXTemplates.com and LaTeXTypesetting.com
templateuser
Posts: 679
Joined: Tue Mar 03, 2015 4:01 pm
Ah, this is not the "official" copy of calendar.sty, is it? Just realised that. It looks like there is at least one newer version of calendar.sty out since 2010, and I would not be surprised if this bug was already fixed in it. I think that if latextemplates is going to distribute a modified old version of calendar.sty, it would be best to make that clear in the file's version information. Either that, or just use the newest version on latextemplates, and check that the bug has been fixed.
I wrote to the author of calendar.sty to report this bug and find out if it has already been fixed in the newer version. Will report back here if I learn the answer.
jjfoersch
Vel
Site Moderator
Posts: 455
Joined: Fri Jun 29, 2012 1:20 am
Indeed, it appears to be an older version. The newer version doesn't work with this template at all and seems to be very different. I tend not to touch the style files since the site is more about the templates than the backend of what's making them work (plus it would mean a lot more work for me to process the style files). I could put in a short comment saying that this has been modified to fix a bug from the original version 3.1.
Founder and administrator of LaTeXTemplates.com and LaTeXTypesetting.com
templateuser
Posts: 679
Joined: Tue Mar 03, 2015 4:01 pm
That sounds good.
jjfoersch
|
MathSciNet bibliographic data MR936697 (90c:11038) 11F70 (11F67 11G20 11R39 22E55) Drinfelʹd, V. G. Proof of the Petersson conjecture for ${\rm GL}(2)$ over a global field of characteristic $p$. (Russian) Funktsional. Anal. i Prilozhen. 22 (1988), no. 1, 34--54, 96; translation in Funct. Anal. Appl. 22 (1988), no. 1, 28–43 Article
|
## anonymous one year ago

Boxes are moved on a conveyor belt from where they are filled to the packing station 11.0 m away. The belt is initially stationary and must finish with zero speed. The most rapid transit is accomplished if the belt accelerates for half the distance, then decelerates for the final half of the trip. If the coefficient of static friction between a box and the belt is 0.56, what is the minimum transit time for each box?
1. matt101
Picture this as an equilibrium question. We want the net horizontal force to be zero so that the box doesn't slide away. Static friction is the obvious horizontal force, but what is the force it's balancing? The force applied by the conveyor belt on the box, which is $ma$. So now we can set up an equation:
$$f=ma$$
$$\mu mg = ma$$
$$\mu g = a$$
$$a=(0.56)(9.8)=5.488$$
Notice that the mass drops right out of the equation, leaving only the acceleration - exactly what we want! In this case, the maximum acceleration (and deceleration) of the conveyor belt is 5.488 m/s^2. Consider just the acceleration phase - half the distance is 5.5 m. We can now calculate the time this half of the trip takes:
$$d= v_i t+ \frac{1}{2}at^2$$
$$5.5= \frac{1}{2}(5.488)t^2$$
$$t=1.42$$
The time it takes to cover half the distance is 1.42 s (we can disregard the negative value of t because it makes no sense here). That means the time to cover the full distance is twice that, 2.84 s! Let me know if that makes sense!
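As a quick numerical check, here is the same calculation as a short Python sketch (the variable names are just illustrative):

```python
import math

mu = 0.56        # coefficient of static friction
g = 9.8          # gravitational acceleration, m/s^2
distance = 11.0  # total distance the box travels, m

a = mu * g                                   # largest acceleration that won't make the box slip
t_half = math.sqrt(2 * (distance / 2) / a)   # time to cover the first half, starting from rest
t_total = 2 * t_half                         # symmetric accelerate-then-decelerate profile

print(f"a = {a:.3f} m/s^2, transit time = {t_total:.2f} s")
# a ≈ 5.488 m/s^2, transit time ≈ 2.83 s (2.84 s if the half-time is rounded to 1.42 s first)
```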
|
STRANGE MESONS($\boldsymbol S$ = $\pm1$, $\boldsymbol C$ = $\boldsymbol B$ = 0) ${{\mathit K}^{+}}$ = ${\mathit {\mathit u}}$ ${\mathit {\overline{\mathit s}}}$, ${{\mathit K}^{0}}$ = ${\mathit {\mathit d}}$ ${\mathit {\overline{\mathit s}}}$, ${{\overline{\mathit K}}^{0}}$ = ${\mathit {\overline{\mathit d}}}$ ${\mathit {\mathit s}}$, ${{\mathit K}^{-}}$ = ${\mathit {\overline{\mathit u}}}$ ${\mathit {\mathit s}}$, similarly for ${{\mathit K}^{*}}$'s INSPIRE search
# ${{\boldsymbol K}^{0}}$ $I(J^P)$ = $1/2(0^{-})$
See related review: $\mathit CPT$ Invariance Tests in Neutral Kaon Decay
${{\mathit K}^{0}}$ MASS $497.611 \pm0.013$ MeV (S = 1.2)
${\mathit m}_{{{\mathit K}^{0}}}–{\mathit m}_{{{\mathit K}^{\pm}}}$ $3.934 \pm0.020$ MeV (S = 1.6)
${{\mathit K}^{0}}$ MEAN SQUARE CHARGE RADIUS $-0.077 \pm0.010$ fm${}^{2}$
$\boldsymbol T$-VIOLATION PARAMETER IN ${{\boldsymbol K}^{0}}-{{\overline{\boldsymbol K}}^{0}}$ MIXING
ASYMMETRY $\mathit A_{\mathit T}$ IN ${{\mathit K}^{0}}-{{\overline{\mathit K}}^{0}}$ MIXING $0.0066 \pm0.0016$
$\boldsymbol CP$-VIOLATION PARAMETERS
Re($\epsilon$) $0.001596 \pm0.000013$
$\boldsymbol CPT$-VIOLATION PARAMETERS
REAL PART OF $\delta$ $(2.5 \pm2.3) \times 10^{-4}$
IMAGINARY PART OF $\delta$ $(-1.5 \pm1.6) \times 10^{-5}$
Re(y) $0.0004 \pm0.0025$
Re(x$_{-}$) $-0.0029 \pm0.0020$
$\vert{}{\mathit m}_{{{\mathit K}^{0}}}–{\mathit m}_{{{\overline{\mathit K}}^{0}}}\vert{}/{\mathit m}_{\mathrm {average}}$ $<6 \times 10^{-19}$ CL=90.0%
(${\Gamma}_{{\mathit K}^{0}}−{\Gamma}_{{\overline{\mathit K}}^{0}})/{\mathit m}_{{\mathrm {average}}}$ $(8 \pm8) \times 10^{-18}$
TESTS OF $\Delta \boldsymbol S$ = $\Delta \boldsymbol Q$ RULE
Re(x$_{+}$) $-0.0009 \pm0.0030$
|
# mongoc_collection_create_bulk_operation_with_opts()
## Synopsis
mongoc_bulk_operation_t *
mongoc_collection_create_bulk_operation_with_opts (
mongoc_collection_t *collection,
const bson_t *opts) BSON_GNUC_WARN_UNUSED_RESULT;
## Parameters
opts may be NULL or a BSON document with additional command options:
## Description
This function shall begin a new bulk operation. After creating this you may call various functions such as mongoc_bulk_operation_update(), mongoc_bulk_operation_insert() and others.
After calling mongoc_bulk_operation_execute() the commands will be executed in batches as large as is reasonable for the client.
Bulk Write Operations
mongoc_bulk_operation_t
## Errors
Errors are propagated when executing the bulk operation.
## Returns
A newly allocated mongoc_bulk_operation_t that should be freed with mongoc_bulk_operation_destroy() when no longer in use.
Warning
Failure to handle the result of this function is a programming error.
|
# Linux – Access Desktop as root user in linux
iso-image, linux, mount, root
I know the command to get to your Desktop is cd ~/Desktop. This however does not work as the root user. However I need to be the root user because I would like to move a file, from my desktop into my /mnt/disk folder so that I can mount my .iso. I can't click-and-drag either as I do not have permission. I've been trying for about an hour and am quite frustrated; so any and all help would be much appreciated. Thank you.
If you want to access a folder under another user's home directory, instead of using ~ you need to use ~username. So I would do something like ~zoredache/Desktop. You could also simply use the full path. Typically on a Linux system the home directories are under /home, so you could use cd /home/zoredache/Desktop.
|
# How to show convergence or divergence of a series when the ratio test is inconclusive?
Use the ratio or the root test to show convergence or divergence of the following series. If inconclusive, use another test:
$$\sum_{n=1}^{\infty}\frac{n!}{n^{n}}$$
So my first instinct was to try the ratio test due to the existence of the factorial. This is my working:
Using the Ratio Test: \begin{align*} \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_{n}}\right|&=\lim_{n\to\infty}\left|\frac{(n+1)\cdot n!\cdot n^{n}}{n!\cdot n\cdot n^{n}}\right|\\ &=\lim_{n\to\infty}\left|\frac{n+1}{n}\right|\\ &=1 \end{align*} The Ratio Test is inconclusive.
I decided then to try the root test due to the presence of the $n^n$, but I think that's problematic and won't work (unless I'm looking at something the wrong way). I end up with the following:
\begin{align} \lim_{n\to\infty}\left|a_n\right|^{1/n}&=\lim_{n\to\infty}\left|\frac{n!}{n^n}\right|\\ &=\lim_{n\to\infty}\left|\frac{(n!)^{1/n}}{n}\right|=\frac{\infty}{\infty} \end{align} So my problem here is that I can't apply L'Hopital's rule. If I expand the numerator I get the following: $$(n!)^{1/n}=\sqrt[n]{n}\cdot\sqrt[n]{n-1}\cdot\sqrt[n]{n-2}\cdot\sqrt[n]{n-3}\cdots\sqrt[n]{3}\cdot\sqrt[n]{2}\cdot\sqrt[n]{1}$$
Which then would only allow me to cancel the $\sqrt[n]{n}$ and get $n^{(n-1)/n}$ in the denominator. Still gives me the indeterminate form of $\infty/\infty$.
So how can I approach this? Or was I on the right track and did something wrong?
-
The Ratio Test works here, you just did not compute the limit correctly. – André Nicolas Jun 17 '13 at 19:41
The problem you made for yourself is that your denominator should contain $\ (n+1)^{n+1} \$ . – RecklessReckoner Jun 17 '13 at 19:43
$$\frac{n^n}{(n+1)^{n+1}} = \frac{1}{n+1}\cdot\frac{n^n}{(n+1)^n}.$$ The $n+1$ in the denominator is canceled by the $n+1$ in the other part of the whole expression. Then we have $$\frac{n^n}{(n+1)^n}\to \frac1e\text{ as }n\to\infty.$$
-
I'm sorry, but I'm not following. All I learned so far was to do algebraic manipulation of the ratio $a_{n+1}/a_n$ and then find the subsequent limit. – agent154 Jun 17 '13 at 19:39
Oh, I see now - I messed up with the $a_{n+1}$ term. I'll look at it again, thanks. – agent154 Jun 17 '13 at 19:45
You've got the wrong formula for $a_{n+1}/a_n$. What happened to the $(n+1)^{n+1}$ in $a_{n+1}$?
-
That would be it. Thanks, I couldn't see that when I looked over it. – agent154 Jun 17 '13 at 19:45
To make more explicit what Michael Hardy wrote: \begin{align} \left.\frac{(n+1)!}{(n+1)^{n+1}}\middle/\frac{n!}{n^n}\right. &=\frac{(n+1)!}{n!}\frac{n^n}{(n+1)^{n+1}}\\ &=(n+1)\frac{n^n}{(n+1)^n(n+1)}\\ &=\frac1{\left(1+\frac1n\right)^n}\\ &\to\quad\frac1e \end{align} Therefore, this series passes the ratio test and converges.
-
How did you get from $n^n/(n+1)^n$ to $1/(1+1/n)^n$? I can understand the rest after that.. – agent154 Jun 17 '13 at 22:24
@agent154: divide both numerator and denominator by $n^n$. – robjohn Jun 17 '13 at 23:38
That's probably a whole lot better than the way I did it by using L'Hopital's rule all over again with $(1-1/(n+1))^n$ – agent154 Jun 18 '13 at 0:11
|
Showing the following differential equation is exact
I'm asked to show that the attached differential equation is exact: $$\left(\frac{x}{\sin y}+2 \right)dx=\frac{(x^2+1)\cos y}{1-\cos{2y}}dy$$ I know I have to show that $N_x=M_y$. In this particular equation, $M = \frac{x}{\sin y} + 2$ and $N = \frac{(x^2+1)\cos y}{1-\cos 2y}$, and all I could get to is $M_y = -\frac{x\cos y}{\sin^2 y}$ and $N_x = \frac{2x\cos y}{1-\cos2y}$. What did I do wrong? Or maybe there is a different path altogether?
We are trying to determine if:
$$\underbrace{\left(\frac{x}{\sin(y)}+2\right)}_{P(x,y)}\:\mathrm{d}x-\underbrace{\frac{(x^{2}+1)\cos(y)}{1-\cos(2y)}}_{Q(x,y)}\:\mathrm{d}y=0$$
Is an exact ordinary differential equation; we note that if this is an exact differential equation, then we have that there exists a function $f(x,y)$ such that $P(x,y)=\frac{\partial f}{\partial x}$ and $Q(x,y)=\frac{\partial f}{\partial y}$, we therefore have (using the identity $\frac{\partial^{2} f}{\partial x\partial y}=\frac{\partial^{2} f}{\partial y\partial x}$) that:
$$\frac{\partial P}{\partial y}=\frac{\partial Q}{\partial x}$$
must be true if it is an exact ODE. Computing $\frac{\partial P}{\partial y}$, we get:
$$\frac{\partial P}{\partial y}=-x\cot(y)\csc(y)$$
And computing $\frac{\partial Q}{\partial x}$, we have:
$$\frac{\partial Q}{\partial x}=-\frac{2x \cos(y)}{1-\cos(2y)}=-x\cot(y)\csc(y)$$
Therefore it is an exact ODE.
• Thank you very much. However, could you please explain the derivations you did? I'm not sure I understood them. – user159527 Aug 18 '14 at 15:02
|
# Class 11 Maths Ncert Solutions Ex 6.1
## Class 11 Maths Ncert Solutions Chapter 6 Ex 6.1
### Ncert Solutions For Class 11 Maths Ex 6.1 PDF Free Download
|
# Python and R - Part 1: Exploring Data with Datatable
Written by David Lucey
# Article Update
Interested in more Python and R tutorials?
## Introduction
Python's datatable was launched by h2o two years ago and is still in alpha stage, with cautions that it may be unstable and that features may be missing or incomplete. We found that it feels very similar to the R version, with a few syntax differences and also some important pieces still to be added (as we will discuss). We could only find a handful of posts showing how to use datatable, and many of the examples were probably not written by regular users of R data.table and were often focused on its efficiency and ability to scale relative to pandas. We use R data.table every day and love the speed and concise syntax, so this walk-through analysis of the EPA's Big MT cars data set will focus on the syntax of the most frequent actual data exploration operations. As for plotnine, it feels mostly seamless coming from ggplot2, with a few problems formatting plots in R Markdown.
## EPA’s Big MT Dataset
To make it a little interesting, we will use the Tidy Tuesday Big MT Cars data set, which covers 42,230 new US car models over 36 years. The data dictionary with 83 variables describing each annual new car model is found here. Everyone loves cars and remembering historical models, and we have naturally been curious about this data set. After closer analysis, however, we discovered that there are some unfortunate missing pieces.
When we have modeled mtcars, weight (wt) and horsepower (hp), and their interaction, have been most informative for predicting mpg. It would have been interesting to look at the evolution of the mtcars coefficients over time, but these variables are unfortunately not available. In addition, it is hard to get a sense of fleet mileage without the annual unit volume of each new car model. Because of this, it is impossible to know the evolution of more fuel-efficient electric vehicles relative to more fuel-hungry models in terms of sales.
It is difficult to understand why these variables are not included when that information must be available to the EPA, and it clearly says on page 6 of the Fuel Economy Guide 2020 that an extra 100 lbs decreases fuel economy by 1%. While the data set is still of interest to practice data cleaning, it doesn't look likely that we will be able to replicate mtcars over time unless we can find more variables.
We tried to download both the original zipped data directly from the EPA website (see link below) and the .csv from the Tidy Tuesday website, but were unsuccessful in both cases using the Python and R versions of fread. We were able to download the Tidy Tuesday .csv link with fread in data.table but not datatable, and the error message didn't give us enough information to figure it out. The documentation for data.table's fread is among the most extensive of any function we know, while still thin for datatable's version so far. In the end, we manually downloaded and unzipped the file from the EPA's website, and uploaded it from our local drive.
The list of all 83 variables is below, and we can see that there are several pertaining to fuel efficiency, emissions, fuel type, range, volume and some of the same attributes that we all know from mtcars (ie: cylinders, displacement, make, model and transmission). As mentioned, gross horsepower and weight are missing, but carburetors, acceleration and engine shape are also absent. We have all classes of vehicles sold, so we get vehicle class information (VClass) that is not available in mtcars, which covers only cars. As we will discuss further down, changes to the weight cutoffs of some of the categories over time make VClass of questionable use.
## Set-up Thoughts from R Perspective
There were a couple of things about the set-up for datatable, which weren’t apparent coming over from data.table as an R user. The first was to use from dt import * at the outset to avoid having to reference the package short name every time within the frame. From a Python perspective, this is considered bad practice, but we are only going to do it for that one package because it makes us feel more at home. The second was to use export_names() in order to skip having to use the f operator or quotation marks to reference variables. In order to do this, we had to create a dictionary of names using the names list from above, and each of their f expressions extracted with export_names in a second list. We then used update from the local environment to assign all of the dictionary values to their keys as variables. From then on, we can refer to those variable without quotation marks or the f operator (although any new variables created would still need f or quotation marks). We weren’t sure why this is not the default behavior, but it is easily worked around for our purposes. These two possibly not “Pythonic” steps brought the feel of datatable a lot closer to the usual R data.table (ie: without the package and expression short codes).
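A minimal sketch of that set-up, as we understand it from the description above (the file name is a placeholder for the locally unzipped EPA file, and the exact export_names()/update() dance may differ in detail):

```python
import datatable as dt
from datatable import *          # brings f, g, by, count, sort, ... into the namespace

# Placeholder path to the manually downloaded and unzipped EPA file
big_mt = dt.fread("vehicles.csv")

# One f-expression per column, so columns can be referenced without quotes or f
col_exprs = big_mt.export_names()
globals().update(dict(zip(big_mt.names, col_exprs)))

# After this, `year`, `make`, `model`, ... can be used directly inside big_mt[...]
```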
## Basic Filter and Select Operations
A few lines of some key variables are shown in the code below, and it is clear that they need significant cleaning to be of use. One difference with R data.table can be seen below with filtering. Using our year_filter in i (the first slot), the 1204 2019 models are shown below. Unlike R data.table, we refer to year outside of the frame in an expression, and then call it within i of the frame. The columns can be selected within () or [] in j (the second slot) as shown below, and new columns can be created within {}.
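The post's original code chunk is not reproduced here, but the filter-and-select pattern it describes looks roughly like this, continuing from the set-up sketch above (column names such as city08 and highway08 are assumed from the EPA data dictionary):

```python
# Filter expression defined outside the frame, then used in i (the first slot)
year_filter = (year == 2019)

# Column selection with a list in j (the second slot)
big_mt[year_filter, ['year', 'make', 'model', 'VClass']]

# New columns can be created in-line with {} in j
big_mt[year_filter, {'mpg_spread': highway08 - city08}]
```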
We usually like to make a quick check whether there are any duplicated rows across our whole frame, but there isn't a duplicated() function yet in datatable. According to How to find unique values for a field in Pydatatable Data Frame, the unique() function also doesn't apply to groups yet. In order to work around this, identifying variables would have to be grouped, counted and filtered for a count equal to 1, but we weren't sure yet exactly which variables to group on. We decided to pipe over to pandas to verify with a simple line of code that there were no duplicates, but hope this function will be added in the future.
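The pandas detour for the duplicate check is roughly a one-liner:

```python
# datatable has no duplicated() yet, so round-trip through pandas for the check
has_dupes = big_mt.to_pandas().duplicated().any()
print(has_dupes)   # False for Big MT, per the discussion above
```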
## Aggregate New Variable and Sort
We can see below that eng_dscr is unfortunately blank 38% of the time, and has high cardinality across the rest of its levels. A small percentage are marked "GUZZLER" and "FLEX FUELS". In a few cases, potentially helpful information about the engine, like V-6 or V-8, is included, but with very low frequency and not consistently enough to make it worth trying to extract. Another potentially informative variable, trans_dscr, is similarly blank more than 60% of the time. It seems unlikely that we could clean these up enough to be useful in an analysis, so we will probably have to drop them.
## Separate and Assign New Variables
As shown above, trany has both the transmission-type and gear-speed variables within it, so we extracted the variable from big_mt with to_list(), drilled down one level, and used regex to extract the transmission and gear information needed out into trans and gear. Notice that we needed to convert the lists back into columns with dt.Frame before assigning as new variables in big_mt.
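The post's own code chunk is not reproduced here, but our reading of that extract-and-reattach step is sketched below (the string handling assumes trany values like "Automatic 4-spd"; here the new columns are attached with cbind()):

```python
import re
import datatable as dt

trany_vals = big_mt[:, 'trany'].to_list()[0]   # drill down one level to a plain list of strings

def gear_of(s):
    m = re.search(r'\d+-spd', s) if s else None
    return m.group() if m else None

# Lists must be wrapped back into single-column frames before attaching to big_mt
trans = dt.Frame([s.split()[0] if s else None for s in trany_vals], names=['trans'])
gear = dt.Frame([gear_of(s) for s in trany_vals], names=['gear'])

big_mt.cbind(trans, gear)
```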
In the third line of code, we felt like we were using an R data.table. The {} is used, together with the grouping on trans and gear, to create the new percent variable in-line, without affecting the other variables in big_mt. We tried to round the decimals in percent, but couldn't figure it out so far. Our understanding is that there is no round() method yet for datatable, so we multiplied by 100 and converted to integer. We again called export_names(), to be consistent in using non-standard evaluation with the two new variables.
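A sketch of the grouped share-of-models calculation described above, continuing from the set-up sketch earlier (the stype-as-cast syntax reflects datatable at the time of writing, so treat it as approximate):

```python
# Share of models by transmission type and gear count
trans_mix = big_mt[:, {'n': count(),
                       'percent': dt.int32(count() * 100 / big_mt.nrows)},
                   by('trans', 'gear')]
```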
## Set Key and Join
We wanted to create a Boolean variable to denote if a vehicle had an electric motor or not. We again used {} to create the variable in the frame, but don’t think it is possible to update by reference so still had to assign to is_ev. In the table below, we show the number of electric vehicles rising from 3 in 1998 to 149 this year. Unfortunately,
## Using Regular Expressions in Row Operations
Next, we hoped to extract wheel-drive (2WD, AWD, 4WD, etc) and engine type (ie: V4, V6, etc) from model. The re_match() function is helpful in filtering rows in i. As shown below, we found almost 17k matches for wheel drive, but only 718 for the engine size. Given that we have over 42k rows, we will extract the wheels and give up on the engine data. It still may not be enough data for wheels to be a helpful variable.
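A sketch of that regex row-filter, continuing from the set-up sketch above (re_match() is the expression method the post refers to; newer datatable releases expose the same thing as dt.re.match()):

```python
# Rows whose model string mentions a wheel-drive designation
wheel_rows = big_mt[model.re_match('.*([24]WD|AWD).*'), :]
print(wheel_rows.nrows)     # ~17k matches, per the discussion above

# The same idea for engine size finds far fewer rows, so we drop it
engine_rows = big_mt[model.re_match('.*V\\d.*'), :]
print(engine_rows.nrows)    # ~718 matches
```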
We used regex to extract whether the model was 2WD, 4WD, etc as wheels from model, but most of the time, it was the same information as we already had in drive. It is possible that our weakness in Python is at play, but this would have been a lot simpler in R, because we wouldn’t have iterated over every row in order to extract part of the row with regex. We found that there were some cases where the 2WD and 4WD were recorded as 2wd and 4wd. The replace() function was an efficient solution to this problem, replacing matches of ‘wd’ with ‘WD’ over the entire frame.
## Reshaping
There was no such thing as a 4-wheel-drive SUV back in the 80s, and we remember the big 8-cylinder Oldsmobiles and Cadillacs, so we were curious how these models evolved over time. datatable doesn't yet have dcast() or melt(), so we had to pipe these out with to_pandas() and then use pivot_table(). It's likely that a lot of the many models where wheel-drive was unspecified were 2WD, which is still the majority of models. We would have liked to show these as whole numbers, and there is a workaround in datatable to convert to integer, but once we pivoted in pandas, it reverted to float. We can see the first AWD models starting in the late 80s, and the number of 8-cylinder cars falling by half. There are a lot fewer annual new car models now than in the 80s, but we were surprised by how many fewer 4-cylinders there are.
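Since datatable has no dcast() or melt() yet, the pivot goes through pandas, roughly as follows (the cylinders column name is assumed from the EPA data dictionary):

```python
# Count models per year and cylinder count in datatable, then pivot wide in pandas
counts = big_mt[:, count(), by('year', 'cylinders')].to_pandas()   # count() column is auto-named 'count'
wide = counts.pivot_table(index='year',
                          columns='cylinders',
                          values='count',
                          fill_value=0)
```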
## Combining Levels of Variables with High Cardinality
With 35 distinct levels often referring to similar vehicles, VClass also needed to be cleaned up. Even in R data.table, we have been keenly awaiting the implementation of fcase, a data.table version of the dplyr case_when() function for nested control-flow statements. We made a separate 16-line function to lump factor levels (not shown). In the first line below, we created the vclasses list to drill down on the VClass tuple elements as strings. In the second line, we had to iterate over the resulting strings from the 0-index of the tuple to extract wheel-drive in a list comprehension (see the sketch below). We printed out the result of our much smaller list of lumped factors, but there are still problems with the result. The EPA changed the cutoff for a "Small Pickup Truck" from 4,500 to 6,000 lbs in 2008, and also used a higher cut-off for "small" SUVs starting in 2011. This will make it pretty hard to use VClass as a consistent variable for modeling, at least for pickups and SUVs. As noted earlier, if we had a weight field, we could have easily worked around this.
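The drill-down and list-comprehension step reads roughly as below; lump_vclass() stands in for the 16-line lumping helper that the post mentions but does not show:

```python
import re

vclasses = big_mt[:, 'VClass'].to_list()[0]     # tuple -> plain list of class strings

def wheel_of(label):
    # e.g. "Sport Utility Vehicle - 4WD" -> "4WD"
    m = re.search(r'[24]WD|AWD', label) if label else None
    return m.group() if m else None

class_wheels = [wheel_of(v) for v in vclasses]
# lumped = [lump_vclass(v) for v in vclasses]   # lump_vclass(): hypothetical 16-line helper, not shown
```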
## Selecting Multiple Columns with Regex
In the chunk below, we show how to select columns from the big_mt names tuple by creating the measures selector, using regex matches for the key identifier columns and for the integer mileage columns matching '08'. This seemed complicated, and we couldn't do it in-line within the frame as we would have with data.table's .SD = patterns(). We also wanted to reorder the table so the identifier columns (year, make and model) sit on the left side, but couldn't find an equivalent of setcolorder. There is documentation about multi-column selection, but we couldn't figure out an efficient way to make it work. We show the frame with the year_filter which we set up earlier.
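One way to assemble the measures selector from the names tuple, as described above (the patterns are illustrative):

```python
import re

id_cols = [n for n in big_mt.names if n in ('year', 'make', 'model')]
mpg_cols = [n for n in big_mt.names if re.search('08', n)]   # integer mileage columns

measures = id_cols + mpg_cols          # identifiers first, mileage columns after
subset = big_mt[:, measures]
```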
## Selecting Columns and Exploring Summary Data
We looked for a Python version of skimr, but it doesn't seem like there is a similar library (as is often the case). We tried out pandas profiling, but that had a lot of dependencies and seemed like overkill for our purposes, so we decided to use skim_tee on the table in a separate R chunk (below). It was necessary to convert to pandas in the Python chunk (above), because we couldn't figure out how to translate a datatable back to a data.frame via reticulate in the R chunk.
When we did convert, we discovered there were some problems mapping NA’s which we will show below. We suspect it isn’t possible to pass a datatable to data.table, and this might be the first functionality we would vote to add. There is a sizable community of data.table users who are used to the syntax, and as we are, might be looking to port into Python (rather than learn pandas directly). As reticulate develops, opening this door seems to make so much sense. Below, we again run export_names() in order to also prepare the newly generated variables for non-standard evaluation within the frame, and then filtered for the 21 columns we wanted to keep.
In the result above, we see a lot of challenges if we had hoped to have appropriate data to build a model to predict mpg over time. Many variables, such as evMotor, tCharger, sCharger and guzzler, are only available in a small number of rows. When we set out on this series, we hoped we would be able to experiment with modeling gas mileage for every year just like mtcars, but that seems unlikely based on the available variables.
## Conclusion
It took us a couple of months to get up and running with R data.table, and even with daily usage, we are still learning its nuance a year later. We think the up-front investment in learning the syntax, which can be a little confusing at first, has been worth it. It is also less well documented than dplyr or pandas. We learned so much about data.table from a few blog posts such as Advanced tips and tricks with data.table and A data.table and dplyr tour. The goal of this post is to help to similarly fill the gap for datatable.
Python datatable is promising, and we are grateful for it as familiar territory as we learn Python. We can't tell how much of our difficulty has been because the package is not as mature as data.table, or just our inexperience with Python. The need to manually set variables for non-standard evaluation, the need to revert to pandas to accomplish certain tasks (ie: reshaping), and the challenges of extracting and filtering data from nested columns all added friction. It was still not easy to navigate the documentation, and there were areas where the documentation was thin. Also, it would be appreciated to be able to seamlessly translate between a datatable and a data.table.
Author: David Lucey, Founder of Redwall Analytics
David spent 25 years working with institutional global equity research with several top investment banking firms.
|
# Tag Info
7
Perhaps the simplest and most intuitive approach is to regularize the hard wall potential $$V_0(x)~=~\left\{ \begin{array}{rcl} 0 &\text{for}& x<0 \cr\cr \infty &\text{for}& x>0\end{array}\right.$$ as $$\lim_{\varepsilon \to 0^+} V_{\varepsilon}(x) ~=~V_0(x).$$ For instance, one could choose the regularized potential as ...
6
If you collide two ideal billiard balls, then that would be what you would call a perfectly elastic collision. If you have a large dense collection of billiard balls, and you slam a new one into the collection, then there are a whole lot of elastic collisions, transferring energy and momentum in many ways that you would be hard-pressed to calculate exactly. ...
4
A simple version of this is bremsstrahlung, i.e. an electron that decelerates and produces electromagnetic radiation / photons. By your reasoning the energy of the electron should only be able to go into other electrons: maybe it should radiate other electrons, maybe a single electron shouldn't lose energy as it travels. But the electron can transfer some ...
4
Collisions can be elastic or inelastic. Elastic collisions are collisions where the incoming and outgoing kinetic energies are the same and only the angles change. The same holds true classically (example: billiard balls) and in the elementary particle framework (example: an electron hitting an electron has a probability of scattering elastically). ...
3
In a typical introductory physics class, the terms elastic and inelastic refer to whether the macroscopic kinetic energy is the same before and after the collision. By this I mean the large-scale motion of the objects that you typically consider in collisions, like carts or balls, not the detailed particle motion/potential. You are correct that the total ...
2
Earth is really, really big (in comparison to that projectile). In order for an object to completely penetrate it, it would need to have enough force to go through 12,742 kilometers of solid and liquid. It would need either extreme mass or extreme speed. In the case of extreme mass, at a certain point, the object wouldn't go straight through Earth as much ...
2
Energy-momentum conservation is a stronger statement than the statement that the inner product $p_\mu p'^\mu$ is conserved. It states that the sums are conserved individually/coordinatewise: $P_1+P_2=P_1'+P_2'$. As I see it, the conservation of the inner product is a statement about change in reference frames, whereas the conservation of energy and ...
1
It seems like you know what the answer is, but you just don't know how to prove it. You are right though. To make things simpler, just view things in the center of mass frame. Then the total momentum is zero, and, like you said, the total angular momentum is just the sum of the orbital momentum of each planet plus the sum of the spin angular momentum of ...
1
I think you have too many parameters, and not all of the necessary ones. To simplify your thinking: change to a frame of reference in which one billiard ball is initially at rest at the origin, and the second is moving at velocity $V$ from right to left along the straight line $$y=k, \,k\geq 0.$$ A collision will take place if and only if $k<2\sigma$. ...
1
Using $e^-e^-$ or $e^+e^+$ means that the final states need to be charged and have lepton number of two. This produces a different set of potential final products then $e^-e^+$. One such example would be $e^- e ^- \rightarrow \mu ^-\mu^-$. While this may be an interesting collision for some new theory, such interactions can only produce a very particular ...
1
When a type of quantum (elementary) particle is absent it just means that the field of that particle is in the quantum "vacuum state". But "vacuum state" does not mean absence of everything concerning that field. The vacuum state has various physical properties in spite of its name: It is nothing but a possible state or quantum configuration of the field, ...
1
As your reading suggestion suggested, you can approximate physical reality with "no collisions are simultaneous". The reason is that the physical world is full of indeterminacies (AKA errors) due to thermal fluctuations and many other sources. What this means is that, even if the strict mathematical solutions that you will find by assuming simultaneous ...
|
# Can a theory prove (schematically) its axioms relativized to a set?
In $$\mathsf{ZFC}$$, a somewhat cheating way to buy "transitive models" without cost in consistency is to add to the language a constant symbol $$M$$ and add to $$\mathsf{ZFC}$$ the axioms stating that $$M$$ is countable transitive, and for each axiom of $$\mathsf{ZFC}$$, add its relativization to $$M$$. This allows us to "pretend" that we have a transitive model of $$\mathsf{ZFC}$$ (the catch is that this is a theorem schema).
I wonder if adding a constant symbol is necessary. In other words, can there be some theory $$T$$ (extending $$\mathsf{ZFC}$$) in the language $$\{\in\}$$, such that T can prove there is a set $$M$$, and $$T$$ proves each of its own axioms relativized to $$M$$?
• @HanulJeon I think this is what I'm referring to in my first paragraph. Or are you suggesting this can be done without expanding the language? Feb 14, 2021 at 19:45
• I misread your question, so I removed the previous comment. Feb 15, 2021 at 12:32
$$\mathsf{ZFC}$$ already has this property. Specifically:
• The reflection theorem shows that every model of $$\mathsf{ZFC}$$ - even of $$\mathsf{ZFC+\neg Con(ZFC)}$$ - contains a model of $$\mathsf{ZFC}$$. (This model might not internally be a model of $$\mathsf{ZFC}$$, which is why this isn't an obvious absurdity.)
• The fact that satisfiability is absolute to $$L$$ then lets us pick out a specific model, via the $$L$$-ordering.
The details are as follows:
Working in $$\mathsf{ZFC}$$, fix some standard enumeration of the $$\mathsf{ZFC}$$ axioms and let $$T_0$$ be the largest initial segment of that enumeration which is consistent. Classically of course we have (under the usual assumptions) that $$T_0=\mathsf{ZFC}$$; meanwhile, note that $$\mathsf{ZFC}$$ proves "$$T_0$$ is consistent" (trivially) as well as "$$\varphi\in T_0$$" for each $$\varphi\in\mathsf{ZFC}$$ (by the reflection theorem).
Now since $$\mathsf{ZFC}$$ proves that $$T_0$$ is consistent, we can - in $$\mathsf{ZFC}$$ - consider $$M=$$ the least constructible set model of $$T_0$$, where "least" refers to the standard ordering on $$L$$. This is provably in $$\mathsf{ZFC}$$ a model of $$T_0$$, and so for each $$\mathsf{ZFC}$$-axiom $$\varphi$$ we have $$\mathsf{ZFC}\vdash M\models\varphi$$ as desired.
• Thanks! I believe the first bullet refers to the theorem in Hamkins's answer here? Another question: since $\mathsf{ZFC}$ proves $T_0$ is consistent, we can pick out the least constructible set model of $T_0$. This is because $V$ and $L$ agree on the consistency of c.e. theories by Shoenfield's theorem, right? Feb 14, 2021 at 20:30
• @ikrto To the first, yes. To the second, invoking Shoenfield is galactic overkill and c.e.-ness isn't really needed. The right argument is the following. First, since $L$ and $V$ have the same natural numbers, they agree on consistency of constructible theories $L$. This means first that $T_0=T_0^L$ (think about how we defined $T_0$) and second that $\mathsf{ZFC}\vdash \mathsf{Con}(T_0)\leftrightarrow\mathsf{Con}(T_0)^L$. Now use the fact that the completeness theorem holds in $L$, which is a consequence of $L$ satisfying a tiny tiny amount of $\mathsf{ZFC}$ (e.g. $\mathsf{KP}$ is enough). Feb 14, 2021 at 20:33
• It's also worth noting that $T_0$ is defined in a $\Pi^0_1$, not $\Sigma^0_1$, way. Of course being an initial segment of a computable theory according to a computable ordering it is itself computable, but somehow it's "morally" not computable or even c.e. This isn't a point which matters here, but it's neat and I'm easily distracted by shiny objects. Feb 14, 2021 at 20:36
• What about transitivity, though? Feb 14, 2021 at 22:55
• @spaceisdarkgreen: Obviously if $M$ is an $\omega$-model, it must agree with its meta-theory on the axioms of ZFC, so taking the minimal transitive model of ZFC we have a universe of ZFC where there are no transitive models of ZFC, but since the theory is the same, having a set which satisfies "each of the axioms" would mean that it satisfies ZFC (internally and externally), therefore there are no transitive models there. Feb 15, 2021 at 2:00
|
# Question about a mosfet driver gate resistor
I'm considering the ZXGD3003E6 for driving an N-channel MOSFET - PSMN4R5-40PS,127. There is a typical application schematic in the MOSFET driver datasheet (as shown below). My question is how do I calculate the value of R1 and R2?
Edit: Will be driving the gate driver with a TLC272 opamp as a buffer which is connected to a 12V supply. Also the current through the MOSFET will be no more than 10A DC.
• The values depend on what you want it to be. It's a voltage divider. However, according to the datasheet, the maximum common mode output voltage that device can withstand is ±7V. – KingDuken Jul 7 at 16:38
• @KingDuken yes I saw that but wasn't able to understand what that meant. Could you please elaborate? – electrophile Jul 8 at 3:26
• It means that whatever the difference you see at Sink and Source, it needs to be between -7 and 7. So for an example, if the voltage at Sink is 5V and Source is 2V, then the differential voltage is 3V (5-2=3). – KingDuken Jul 8 at 14:43
• But the recommended schematic suggests that the source and sink be shorted. Which makes them at the same potential at any given point of time right? – electrophile Jul 9 at 3:35
From the driver's point of view, a power MOSFET looks like a capacitor from the gate to the source. This is why the datasheet note says that varying the resistors changes the turn-on and turn-off times; those resistors work with the FET capacitance to form an R-C delay network. If you want to slow down the turn-on and turn-off voltage ramps that the load sees at the FET drain, then R1 and R2 can do that for you. If you are just switching a relatively static load on and off, they are not needed.
Another reason for R1 is to reduce ringing caused by a resonant tank formed by the FET capacitance and lead inductance. This is an issue only in repetitive high speed circuits like switching power supplies, PWM motor drivers, etc. Again, if that is not what you are doing then R1 is not needed.
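As a rough worked example of that R-C effect (illustrative numbers only; use the actual Ciss value from the MOSFET datasheet, which is in the low-nanofarad range for parts of this class): with Rgate = 10 Ω and Ciss = 4 nF, the time constant is τ = R·C = 10 Ω × 4 nF = 40 ns, so the gate settles in roughly 5τ ≈ 200 ns, and doubling the resistor doubles those times.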
• I just would not omit the gate resistor even with non-repetitive switching. Just the opposite: I'd rather use a relatively higher value resistor. Since switching losses are of little concern in this case, I would slow down the transitions to keep EMI under control. – carloc Jul 7 at 17:23
The N-Channel MOSFET that you have selected is listed as a "standard level MOSFET" in the title of the datasheet. This means that the gate drive voltage needed to turn on the device will be higher than a more specialized "logic level MOSFET". You should check out the transfer function charts in the data sheet to become familiar with the device characteristics. You will require a gate drive voltage (gate to source) of at least 5 Volts to achieve a decent level of drain current flow. In fact to achieve the advertised on resistance value of 4.6 mΩ you would require gate drive voltage of 10 Volts.
The gate driver IC that you have selected is not a particularly elegant part. Due to the emitter follower nature of the device the high (low) level output levels are a VBE voltage drop less than (more than) the input pin voltages. This means that if your microcontroller GPIO drive signals are a high drive of 5 Volts you would not see the output get to 5 Volts. In fact in the data sheet they state a typical value of 0.4V VBE when the current flow is 1uA. So even if you get the gate capacitance of the MOSFET fully charged the drive voltage can only get to around 4.6 Volts.
This is hardly a good match for part selection even if your MCU has 5V swing GPIOs. If the MCU you are using has 3.3V GPIOs (which is much more common) you have a non-starter solution here.
Further consider that if you do choose to use the R1 and R2 resistors as shown in your diagram they act as a voltage divider which will further reduce the gate drive voltage.
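For instance (illustrative values only): with R1 = R2 the divider passes only half of the driver's output swing to the gate at DC, so an already marginal 4.6 V drive would end up around 2.3 V at the gate, which is far too low for a standard-level MOSFET.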
• Thank you! What should I look for while selecting gate drivers. Also I re-selected my MOSFET parameters and found BSC100N06LS3 G. I intend to drive not more than 10A through the MOSFET. Assuming that I use the same gate driver, I would get about 60A at 4V. Would that be correct? The gate driver will be driven using a TLC272 opamp as a buffer which is connected to a 12V supply (edited the Q to include this information). – electrophile Jul 8 at 3:21
|
1. First Cup of Ubuntu
Join Date
Mar 2010
Beans
1
## Re: LaTeX and SVG
I think you'll like the new PDF+LaTeX export option in Inkscape 0.48. Read about it here: http://wiki.inkscape.org/wiki/images/SVG_in_LaTeX.pdf
When exporting to PDF, you can select the LaTeX option, so that Inkscape will create an extra .tex file containing the text. You should then include the .tex file in your latex document; it will show the image with text put on top of it.
Basically, it enables you to write LaTeX text (math and such) in your Inkscape image, which is then typeset by latex simply by including the tex file inkscape generates.
The PDF document above also describes how to automate the whole thing. Then the workflow becomes:
1. draw image and save it to SVG
2. build your latex document; this will notice the SVG has been updated and will automatically call inkscape to export the image again to pdf and latex.
Very convenient and fast!
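For reference, the include step in the LaTeX document typically looks something like the lines below (a minimal sketch; "image" is a placeholder file name, and the graphicx and color packages are assumed to be loaded):
\def\svgwidth{\columnwidth}  % scale the figure to the text width
\input{image.pdf_tex}        % Inkscape-generated wrapper that overlays the LaTeX text on image.pdf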
2. A Carafe of Ubuntu
Join Date
Aug 2006
Beans
82
Distro
Ubuntu 10.04 Lucid Lynx
## Re: LaTeX and SVG
Wow, very impressive. Thanks a lot for this, I'm sure I'll find it very useful.
3. Ubuntu addict and loving it
Join Date
Jan 2008
Location
Auckland, New Zealand
Beans
3,132
Distro
Ubuntu 9.10 Karmic Koala
## Re: LaTeX and SVG
Wow awesome! I know Xfig had something like this but that's great it's in Inkscape now. I never used xfig though so maybe I'm wrong and it was something different.
4. First Cup of Ubuntu
Join Date
Sep 2008
Beans
2
## Re: LaTeX and SVG
It's really good, but the text size is the problem: bigger fonts are reduced to the normal size. Any idea how to fix it?
5. First Cup of Ubuntu
Join Date
Jul 2008
Beans
3
## Re: LaTeX and SVG
I would like to add something to this old thread, beacuse I think it is still not completely solved.
I recently blogged about the new way to include SVG directly in LaTeX.
Have a look at http://laclaro.wordpress.com/2011/07...nte-einbinden/
6. Just Give Me the Beans!
Join Date
Mar 2009
Location
the 'Lowlands'
Beans
70
Distro
Ubuntu 10.10 Maverick Meerkat
## Re: LaTeX and SVG
Originally Posted by johan engelen
I think you'll like the new PDF+LaTeX export option in Inkscape 0.48. Read about it here: http://wiki.inkscape.org/wiki/images/SVG_in_LaTeX.pdf
7. First Cup of Ubuntu
Join Date
Nov 2011
Beans
1
## Re: LaTeX and SVG
Originally Posted by jeneverboy
Just registered an account for that:
http://mirrors.ctan.org/info/svg-ink...pePDFLaTeX.pdf
Greetings
8. ## Re: LaTeX and SVG
Originally Posted by hugmenot
One important aspect is which Cairo version you have installed. The newer Cairo is, the more capabilities it has in preserving transparency and filters in the PDF. In some situations Cairo is forced to rasterize in order to represent the graphic in the PDF.
You have cairo 1.8.8, which is good. And you can see that it doesn’t preserve transparent vectors in the EPS, while it does in PDF. It can’t properly represent your semi-translucent bar stroking so it rasterizes these parts of the graphic in the EPS file.
See: http://en.wikipedia.org/wiki/Transpa..._in_PostScript
What a shame... This is the biggest problem for me. I really dont see any other way than rasterizing an image...
9. Extra Foam Sugar Free Ubuntu
Join Date
Mar 2007
Beans
763
## Re: LaTeX and SVG
Originally Posted by 71GA
What a shame... This is the biggest problem for me. I really dont see any other way than rasterizing an image...
I would recommend using LyX. You can insert svg files directly. In the end, the svg is converted, but if you don't like the automatic conversion you can define your custom conversion. LyX 2.0.3 will be released in one week.
10. First Cup of Ubuntu
Join Date
Jul 2008
Beans
3
## Re: LaTeX and SVG
I recently added the scale-feature to the code-snippet which provides the \includesvg[scale]{file}-command.
I made it also platform-independent. Look here (general) and here (platform-independence).
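For anyone who cannot reach those links, a minimal sketch of what such an \includesvg command usually looks like (illustrative only, not the exact snippet from the blog; it assumes Inkscape 0.48 is on the PATH and that latex/pdflatex is run with --shell-escape):
% #1 = optional scale factor (fraction of \columnwidth), #2 = SVG file name without extension
\newcommand{\includesvg}[2][1.0]{%
  \immediate\write18{inkscape -z -D --file=#2.svg --export-pdf=#2.pdf --export-latex}%
  \def\svgwidth{#1\columnwidth}%
  \input{#2.pdf_tex}}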
Yours, Henning
|
Dave Horner's Website - Yet another perspective on things...
our entire universe; minuscule spec in gigantic multiverse which is mostly lethal.
There is no honest way to explain it because the only people who really know where it is are the ones who have gone over. But the edge is still Out there. Or maybe it's In.
--Hunter Stockton Thompson
$$e = \sum_{n=0}^\infty \frac{1}{n!}$$ satis dictum.
|
## Monday, 28 October 2013
### CMAC-PID in a AUV
The first step in the implementation of the CMAC-PID was the modelling of the vehicle in Simulink. Dr. Cheng Chin (Chin et al. 2006) created a symbol library for modelling ROVs. The library provides predefined models that can be changed with a simple script in Matlab.
Figure 62 6 DOF AUV unperturbed system
The second step in the implementation of the CMAC-PID is to implement a normal PID controller. The PID component of the system was implemented in Matlab. The desired acceleration is calculated with a script that divides the input velocity by 8; if the desired velocity is equal to the platform speed, the acceleration is set to zero. A value limiter was placed at the output of the system to avoid saturation of the actuators. The tuning values for the PID components were estimated, giving as a result:
Kp=[16 14 24 34 34 14];
Kpi=[0.01;0;0;0.01;0.01;0.001];
Kdl=[0.002 0 0 0.001 0 0];
Kd=[1.5 0.9 1.5 7 6.2 0.1];
Figure 63 PID Implementation
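Purely as an illustration of the PID component described above (a hypothetical C++-style sketch; the real implementation is a Matlab script running at the 0.01 s sample time, with the gain vectors listed above applied per DOF):
constexpr double DT = 0.01;              // sample time [s]

struct PidState { double integral = 0.0, prevError = 0.0; };

// One PID step for a single degree of freedom; 'limit' models the value limiter
// placed at the output to avoid saturating the actuators.
double pidStep(PidState& s, double setpoint, double measured,
               double kp, double ki, double kd, double limit) {
    double error = setpoint - measured;
    s.integral += error * DT;
    double derivative = (error - s.prevError) / DT;
    s.prevError = error;
    double u = kp * error + ki * s.integral + kd * derivative;
    if (u > limit)  u = limit;   // value limiter
    if (u < -limit) u = -limit;
    return u;
}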
The CMAC component is implemented by repeatedly calling a script at a sample time of 0.01 seconds. A separate CMAC is implemented for each of the DOFs. In the first 100 cycles the maximum and minimum data points are calculated. As a second step, the neural network is started. In each cycle the weight values are adjusted. The adjustment values for the CMAC are m=5, and ... (Appendix 3 c). Figure 64 shows the response to a step input in all DOFs. The response of the PID is calibrated to the fastest answer with a minor overshoot. In rotational and vertical motion, the system shows low overshoot and fast stabilization. In lateral motion some overshoot is noticeable; however, the overshoot can be resolved by reducing the acceleration rating of the robot. Figure 66 shows the difference between the CMAC-PID and a normal PID on the principal signal with overshoot. The result shows a minor change in the signal and a reduction in the overshoot. If the step signal is replaced by a sinusoidal signal, the CMAC-PID shows a greater ability to maintain the resonance frequency of the system over time.
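Again for illustration only (hypothetical names and parameters, not the Matlab script itself), the per-cycle CMAC correction can be sketched as a small tile-coding table whose m active weights are nudged toward the residual tracking error each cycle:
#include <vector>
#include <algorithm>

// Minimal 1-D CMAC for one DOF: m overlapping tilings over the learned input range.
struct Cmac {
    int m = 5;                      // active cells per input, as above
    int cellsPerTiling = 32;        // hypothetical resolution
    double beta = 0.1;              // hypothetical learning rate
    double lo = -1.0, hi = 1.0;     // min/max found during the first 100 cycles
    std::vector<double> w;

    Cmac() : w(m * cellsPerTiling, 0.0) {}

    int cellIndex(int tiling, double x) const {
        double t = (x - lo) / (hi - lo);                        // normalise to [0,1]
        t = std::clamp(t + tiling / double(m * cellsPerTiling), 0.0, 1.0);
        int idx = std::min(cellsPerTiling - 1, int(t * cellsPerTiling));
        return tiling * cellsPerTiling + idx;
    }

    double output(double x) const {                             // CMAC correction term
        double y = 0.0;
        for (int k = 0; k < m; ++k) y += w[cellIndex(k, x)];
        return y;
    }

    void update(double x, double error) {                       // LMS-style weight adjustment
        double delta = beta * (error - output(x)) / m;
        for (int k = 0; k < m; ++k) w[cellIndex(k, x)] += delta;
    }
};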
Figure 64 PID position respond to a Step Input [10,5,5,0.7,0.7,0.7]
Figure 65 CMAC-PID Implementation
Figure 66 PID(RED)vs. CMAC-PID (Brown)
## Friday, 26 July 2013
### Translator Between Torque controller and thrusters
As the AUV has four thrusters that can rotate in any direction, it generates a vector U of possible thrust components, where T1, T2, T3 are the x, y, z components of thruster number 1. The PD tracking controller, however, generates a force vector T_B, where X, Y, Z are the forces and K, M, N are the moments about the centre of gravity of the AUV.
$$U=\begin{pmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ T_5 \\ T_6 \\ T_7 \\ T_8 \\ T_9 \\ T_{10} \\ T_{11} \\ T_{12} \end{pmatrix}, \qquad T_B=\begin{pmatrix} X \\ Y \\ Z \\ K \\ M \\ N \end{pmatrix}, \qquad L=\begin{pmatrix} 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ -0.3 & 0 & 0 & 0.3 & 0 & 0 & 0.3 & 0 & 0 & 0.3 & 0 & 0 \\ 0 & 0.3 & 0 & 0 & 0.3 & 0 & 0 & 0.3 & 0 & 0 & 0.3 & 0 \\ 0 & 0 & 0.3 & 0 & 0 & 0.3 & 0 & 0 & 0.3 & 0 & 0 & 0.3 \end{pmatrix}$$
However, L cannot be inverted, so an algorithm that works from seed data has to be applied to solve the system. In Matlab this can be done with fsolve(@myfun,x0), where x0 is the seed data. In this case the seed data is a vector whose entries are X/4, Y/4, Z/4, K/4, M/4 and N/4.
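As a side note (not what the post above uses), a common non-iterative alternative is the minimum-norm least-squares allocation through the Moore-Penrose pseudoinverse, which exists whenever $L$ has full row rank:
$U = L^{+}T_B = L^{T}\left(LL^{T}\right)^{-1}T_B$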
Work in progress.......
## Autonomous Underwater Vehicle
### Seleccion of Motors
In robotics the most complicated step is the selection of motors and battery; the weight of the battery is the principal component of the mass of a robot. In the process of designing the AUV I decided to start with the selection of thrusters, assuming a high drag value of 1.5 and a speed of 3 knots. Teknodyne
With the thrusters selected, the battery pack can be chosen, and the design of the external shape can continue in order to establish the real drag force.
Since the drag force depends on the longitudinally projected area of the AUV, it was necessary to simulate different directions of movement.
Drag Coefficient=0.64
Drag Coefficient =0.75
## Monday, 21 January 2013
### Emboss of an image with inventor
Usually, adding a binary image as a logo in Inventor is a process of converting the image to EPS, transferring that file to AutoCAD, and then to Inventor. But that process can degrade the quality of the transfer. To solve that problem I wrote Matlab code that generates an Excel file with the boundary coordinates of the image. Using Inventor's point-import function, the points can be transferred to a sketch to be used in the emboss function. When importing, Inventor's options let you choose to automatically create either lines or B-spline curves between the points.
InventorSimple.zip (2.0 MB)https://mega.co.nz/#!tENgmZaR!ZpQpSW4_3yr0ATdRD6mItWQDvJdO5sOA-xk-J0NayQA
|
# Sections 9A,9B
### exercises
...
Identifying a function. Exercise 12, p524
You make a list of your friends and their e-mail addresses. Are there two variables here related in a way that may be described as a function?
Recall generalities on functions.
A function: two variables, an independent variable and dependent variable such that a value of the independent variable determines the value of the dependent variable uniquely
Answer. The situation can be described as a function; the independent variable is the e-mail address, while the dependent variable is the name of a friend. Indeed, an e-mail address uniquely identifies the name of a friend.
Identifying a function. Exercise 12, p524 Some remarks.
1. Actually, several people may share the same e-mail address although that does not happen frequently.
In such a case, there is no function which may adequately model this situation.
2. No quantities are involved. Although indeed one considers functions operating with non-quantitative data, we typically do not do that in this class.
Functions and their graphs. Exercise 24, p524
Study the graph which represents hours of daylight at $$40^o$$N latitude.
a. Use the graph to estimate the number of hours of daylight on April 1 (91st day of the year) and October 1 (the 304th day of the year).
Exercise 24, p524 ... continued Study the graph which represents hours of daylight at $$40^o$$N latitude.
b. Use the graph to estimate the dates on which there are 13 hours of daylight.
Answer: about the 110th and the 230th day of the year. That corresponds to April 20 and August 18.
Linear functions. Exercise 12, p537
Consider the graph.
a. In words, describe the function shown on the graph.
Answer. The population increases linearly with time.
b. Find the slope of the graph, and express it as a rate of change
Answer. The slope is $$\frac{20}{2}=10$$. The rate of change of population is 10 thousand people per year.
Linear functions. Exercise 16, p537
Consider the graph.
a. In words, describe the function shown on the graph.
Answer. The record pace decreases linearly with the length of race.
b. Find the slope of the graph, and express it as a rate of change,
Answer. The slope is approximately $$\frac{-20}{10}=-2$$. The rate of change is -2 (km/hr)/km.
Rate of change. Exercise 20, p537
A gas station owner finds that for every penny increase in the price of gasoline, she sells 80 gallons fewer of gas per week. How much more or less gas will she sell if she raises the price by 8 cents per gallon?
Solution. At this gas station, the amount of gas sold depends linearly on the price. The rate of change is
$$-80$$ (gallons per week)/cent.
If the price increases by 8 cents, the change is
$$-80 \times 8 = -640$$
That is a 640 gallons less sold per week.
Rate of change. Exercise 18, p537
You run along a path at a constant speed of 5.5 miles per hour. How far do you travel in 1.5 hours?
Solution. The distance traveled is a linear function of time. It increases at a constant rate of $$5.5$$ miles per hour. The distance traveled in $$1.5$$ hours is
$$5.5 \times 1.5 = 8.25$$ miles.
Linear equations. Exercise 26, p537
The cost of leasing a car is $$\$1,000$$ for a downpayment and processing fee plus $$\$360$$ per month. For how many months can you lease a car with $$\$3680$$?
Solution. The total amount spent $$y$$ is a linear function of time $$x$$ with a rate of change of 360, and initial value ($$y$$-intercept) of 1000. This function can be written as
$$y=360\times x + 1000$$
Solve the equation for $$x$$: $$x=\frac{y-1000}{360}$$, and find for y=3680:
$$x=\frac{3680-1000}{360} \approx 7.44$$. That means 7 full months of lease.
Exercise 26, p537 ... continued ... The theory behind this solution.
Recall that linear function is described by the formula
$$y=mx + b$$
This equation can always be solved for $$x$$:
$$x = \frac{y-b}{m}$$
and we always can find the value of independent variable $$x$$
for a given value of the dependent variable $$y$$
Exercise 26, p537 ... continued
The cost of leasing a car is $$\$1,000$$ for a downpayment and processing fee plus $$\$360$$ per month. For how many months can you lease a car with $$\$3680$$?
Alternative solution. After paying $$\$1,000$$ of downpayment and processing fee you are left with
$$3680-1000= \$2,680$$.
With a monthly payment of $$\$360$$, that suffices for $$\frac{2680}{360} \approx 7.44$$,
or $$7$$ full months.
Linear equations. Exercise 30, p538
You can purchase a motorcycle for $$\$6,500$$ or lease it for a downpayment of $$\$200$$ and $$\$150$$ per month. Find the function which describes how the cost of the lease depends on time. How long can you lease the motorcycle before you've paid more than its purchase price?
Solution. This is a linear function with a slope of 150, and an initial value ($$y$$-intercept) of 200.
The equation for this function thus is
$$y=150x+ 200$$.
Exercise 30, p538 ... continued
You can purchase a motorcycle for $$\$6,500$$ or lease it for a downpayment of $$\$200$$ and $$\$150$$ per month. Find the function which describes how the cost of the lease depends on time. How long can you lease the motorcycle before you've paid more than its purchase price?
Solution. We found the linear function $$y=150x+ 200$$.
Solve the equation for $$x$$:
$$x=\frac{y-200}{150}$$,
and find for $$y=6500$$
$$x=\frac{6500-200}{150} =42$$ months (3.5 years) of lease.
Exercise 30, p538 ... continued
You can purchase a motorcycle for $$\$6,500$$ or lease it for a downpayment of $$\$200$$ and $$\$150$$ per month. How long can you lease the motorcycle before you've paid more than its purchase price?
Alternative solution (no function involved).
After $$\$200$$ of downpayment is paid, you are left with
$$6500-200=\$6,300$$.
With a monthly lease of $$\$150$$, this amount suffices for
$$\frac{6300}{150}=42$$ months (equivalently, 3.5 years) of lease.
Linear equations. Exercise 34, p538
A mining company can extract $$2000$$ tons of gold ore per day with a purity of $$3$$ ounces of gold per ton. The cost of extraction is $$\$1000$$ per ton. If $$p$$ is the price of gold in dollars per ounce, find the function that gives the daily profit/loss of the mine as it varies with the price of gold.
Solution. While the total cost of extraction is $$1000 \times 2000 = 2,000,000$$ dollars, the price of all gold extracted during a day is $$2000 \times 3 \times p = 6000\times p$$ dollars.
The daily profit $$P$$ is the difference between them:
$$P=6000p-2000000$$
Exercise 34, p538 ... continued
What is the minimum price of the gold which makes the mine profitable?
Solution. We found the profit $$P$$ as a function of the price $$p$$:
$$P=6000p-2000000$$
The profit turns into a loss when the price drops below the break-even point, where $$P=0$$:
$$0=6000p-2000000$$.
Solve for $$p$$ to find $$p=\frac{2000000}{6000} \approx \$333.33$$
|
What would happen if the Sun disappeared
8 minutes: The Earth would notice the Sun had disappeared. It would then travel along the tangent to its orbit at that point.
30 minutes: The light from Jupiter would have blinked out.
2 - 3 days: Most plants would have died due to no food from photosynthesis.
1 week: The average global temperature would be 0 degrees Celsius. Great trees would have died because their water and sap would have solidified. The extraterrestrial light reaching Earth would only be $$\frac {1}{300}$$ of that of a full moon.
1 year: The average global temperature would be -73 degrees Celsius. Any human who wasn't living underground or near geothermal plants would have died.
1 - 3 years: The Earth's oceans would have frozen all the way. At the bottom of the oceans, liquid water could still exist and so could extremophiles like microbes.
10 - 20 years: The air would have condensed into liquid and would rain on the Earth, and later snow.
1 billion years: The Earth would have travelled 100000 light years or have crossed the Milky Way.
Note by Sharky Kesa
3 years, 11 months ago
Assuming you didn't die too quickly, the stargazing would be pretty amazing.
Staff - 3 years, 11 months ago
Awesome stargazing.
- 3 years, 11 months ago
If we mapped out the constellations while the Earth travelled, we'd find that they were centered at the original orbit of Earth.
- 2 years, 2 months ago
Perhaps new types of species will emerge: the ones which can adapt to this change.
- 3 years, 11 months ago
Extremophilic species.
- 3 years, 11 months ago
Very informative!
- 3 years, 8 months ago
its interesting as well as horrifiying.
- 3 years, 10 months ago
Firstly, if the Sun ever disappears it would go through these transformations. The nuclear fuel in the Sun is slowly running out, but the Sun's gravity will remain the same, so what will the Sun do? Well, its gravity won't be able to make it a black hole, so the nuclear fuel will push outwards and the Sun will become what we call a RED GIANT STAR. The effects of this would be very adverse: the temperature would increase tremendously, all the water would dry up and, worse, life would be in extreme danger. It would not end here, because then gravity would say ''now, you have had your turn, let me have mine.'' Gravity will try to pull the Sun in, but to some extent, as I mentioned before, it cannot turn into a black hole. Then the Sun will become what we call a WHITE DWARF STAR, as it will shrink to 1/100th the size of Earth and become tremendously cold. What about life on Earth? All the remaining precipitation (water in any form) will freeze and life by that time will have become very scarce. The life of the Sun would not end here; I will be starting a discussion on this soon. Secondly, there are 4.5 billion years still left for this to happen, and by that time mankind may have found life somewhere among the stars.
- 3 years, 7 months ago
I meant that the Sun completely vanishes. A wormhole could make this possible as well as the matter of the Sun to quantum tunnel to a different region of space.
- 3 years, 7 months ago
I tried to state it in another way.The presence of a wormhole can definitely be a reason but their presence in our galaxy would have been detected and that would be a BIG HEADLINE because they cannot just pop out from anywhere or emerge out of nothing. P.S I plainly don't know much about wormhole theory and the above given statement is an intuitive one.
- 3 years, 7 months ago
Earth could be attracted by the gravity of a nearby planet or galaxy.
- 3 years, 10 months ago
Possibly but unlikely.
- 3 years, 10 months ago
Earth could enter another orbit and life would go on.
- 3 years, 10 months ago
It would take millennia to millions of years.
- 3 years, 10 months ago
Assuming you didn't die too quickly........
- 3 years, 10 months ago
After some billion years, maybe Earth would get into another orbit around another star due to that star's gravitational field... and life would flourish on Earth again thanks to its heat and light.
- 3 years, 10 months ago
If the Sun disappears, photosynthesis in plants and trees cannot occur, and the planetary system would physically fall apart without its central Sun: no heat energy, no light.
- 3 years, 10 months ago
If the sun had gone, what would happen to the Earth's orbit and other celestial bodies out there? Would we naturally just achieve a perfect balance again in our solar system as nature always does, orbiting another object (which would take probably hundreds of years) or what?
- 3 years, 11 months ago
Possibly, but it would take far longer than millennia. The nearest star, Proxima Centauri, is about 4 light years away, or 37.8 trillion kilometres. Since the Earth would be travelling at 30 km/s, it would take about 40,000 years just to cover that distance.
- 3 years, 11 months ago
I don't understand the 10-20 years part. Without the sun, how is there rain or snow?
- 3 years, 11 months ago
It was so cold that the air condensed into liquid and clouds which precipitated.
- 3 years, 11 months ago
We'd die
- 3 years, 11 months ago
We might be living underground.
- 3 years, 11 months ago
Maybe we can generate energy from the earth's core.
- 3 years, 11 months ago
Exactly
- 3 years, 11 months ago
Either way we would most likely die due to having no food and we simply don't have enough time to prepare for that kind of disaster (Unless they're already doing that right now).
- 3 years, 11 months ago
Did you know that humans already have enough food to last a whole year without any crops? Also, hydroponics might be vastly expanded, so we may be able to survive.
- 3 years, 11 months ago
Also, I think it would be possible to use electricity from geothermal/nuclear plants to make artificial sunlight to feed our plants. Though only a fraction of humans would survive in an underground bunker.
- 3 years, 11 months ago
Probably.
- 3 years, 11 months ago
Couldn't Earth start orbiting any other much larger body like Jupiter??
- 3 years, 10 months ago
No. Jupiter's gravity is not big enough, and even if we started moving in the direction of Jupiter we would be bombarded by asteroids in the asteroid belt.
- 3 years, 10 months ago
If I ever wish to die... that is how I would wish to! For it would be a super stimulating experience to perish this way.
- 3 years, 11 months ago
Actually,i would love to die like that.The sight would be intriguing.
- 3 years, 7 months ago
|
Wojciech Broniowski Physics , 2008, DOI: 10.1007/s12648-009-0080-5 Abstract: Generalized vector form factors of the pion, related to the moments of the generalized parton distribution functions, are evaluated in the Nambu--Jona-Lasinio model with the Pauli-Villars regularization. The lowest moments (the electromagnetic and the gravitational form factors) are compared to recent lattice data, with fair agreement. Predictions for higher-order moments are also made. Relevant features of the generalized form factors in the chiral quark models are highlighted and the role of the QCD evolution for the higher-order GFFs is stressed.
Physics , 2011, DOI: 10.1007/s00601-011-0265-2 Abstract: The transversity Generalized Parton Distributions (tGPDs) and related transversity form factors of the pion are evaluated in chiral quark models, both local (Nambu--Jona-Lasinio) and nonlocal, involving a momentum-dependent quark mass. The obtained tGPDs satisfy all a priori formal requirements, such as the proper support, normalization, and polynomiality. We evaluate generalized transversity form factors accessible from the recent lattice QCD calculations. These form factors, after the necessary QCD evolution, agree very well with the lattice data, confirming the fact that the spontaneously broken chiral symmetry governs the structure of the pion also in the case of the transversity observables.
Kotko Piotr EPJ Web of Conferences , 2012, DOI: 10.1051/epjconf/20123708008 Abstract: Transition distribution amplitudes (TDAs) are non-perturbative quantities appearing in the description of certain exclusive processes, for instance hadron-anti-hadron annihilation HH → γ*γ or backward virtual Compton scattering. They are similar to generalized parton distributions (GPDs), except that the non-diagonality concerns not only the momenta, but also the physical states (they are defined in terms of hadron-photon matrix element of a non-local operator). For the case of hadronic states such as pions, there are two TDAs of interest: the vector and the axial one. They are straightforwardly related to the axial and vector form factors controlling weak pion decays π± → e±νγ. The value at zero momentum transfer of the vector form factor is fixed by the axial anomaly, while this is not the case for the axial one. Moreover, the vector form factor is related to the pion-photon transition form factor which was recently measured by Belle and BaBar giving contradictory results at high momentum transfers. We have studied pion-to-photon TDAs within the non-local chiral quark model using modified non-local currents satisfying Ward-Takahashi identities. We found that the value of the axial form factor at zero momentum transfer is shifted towards the experimental value due to the non-locality of the model (in the local quark models the values of both vector and axial form factors at zero momentum transfer are the same, what is not consistent with the data). We also calculate the pion-photon transition form factor and compare it with the data.
High Energy Physics - Phenomenology , 2007, DOI: 10.1103/PhysRevC.75.055207 Abstract: The nucleon form factors of the energy-momentum tensor are studied in the large-Nc limit in the framework of the chiral quark-soliton model for model parameters that simulate physical situations in which pions are heavy. This allows for a direct comparison to lattice QCD results.
Physics , 2011, Abstract: We describe the chiral quark model evaluation of the transversity Generalized Parton Distributions (tGPDs) and related transversity form factors (tFFs) of the pion. The obtained tGPDs satisfy all necessary formal requirements, such as the proper support, normalization, and polynomiality. The lowest tFFs, after the necessary QCD evolution, compare favorably to the recent lattice QCD determination. Thus the transversity observables of the pion support once again the fact that the spontaneously broken chiral symmetry governs the structure of the Goldstone pion. The proper QCD evolution is crucial in these studies.
Physics , 2014, DOI: 10.1103/PhysRevC.91.025202 Abstract: The structure of hadrons is described well by the Nambu--Jona-Lasinio (NJL) model, which is a chiral effective quark theory of QCD. In this work we explore the electromagnetic structure of the pion and kaon using the three-flavor NJL model, including effects of confinement and a pion cloud at the quark level. In the calculation there is only one free parameter, which we take as the dressed light quark ($u$ and $d$) mass. In the regime where the dressed light quark mass is approximately $0.25\,$GeV, we find that the calculated values of the kaon decay constant, current quark masses, and quark condensates are consistent with experiment and QCD based analyses. We also investigate the dressed light quark mass dependence of the pion and kaon electromagnetic form factors, where comparison with empirical data and QCD predictions also favors a dressed light quark mass near $0.25\,$GeV.
Takashi Kaneko Physics , 2009, Abstract: We calculate pion vector and scalar form factors in two-flavor lattice QCD and study the chiral behavior of the vector and scalar radii _{V,S}. For a direct comparison with chiral perturbation theory (ChPT), chiral symmetry is exactly preserved by employing the overlap quark action. We utilize the all-to-all quark propagator in order to calculate the scalar form factor including the contributions of disconnected diagrams. A detailed comparison with ChPT reveals that two-loop contributions are important to describe the chiral behavior of the radii in our region of the pion mass M_\pi \gtrsim 290 MeV. From chiral extrapolation based on two-loop ChPT, we obtain _V = 0.409(23)(37) fm^2 and _S = 0.617(79)(66) fm^2, which are consistent with phenomenological analyses.
Andreas Juttner Physics , 2011, DOI: 10.1007/JHEP01(2012)007 Abstract: The quark-connected and the quark-disconnected Wick contractions contributing to the pion's scalar form factor are computed in the two and in the three flavour chiral effective theory at next-to-leading order. While the quark-disconnected contribution to the form factor itself turns out to be power-counting suppressed its contribution to the scalar radius is of the same order of magnitude as the one of the quark-connected contribution. This result underlines that neglecting quark-disconnected contributions in simulations of lattice QCD can cause significant systematic effects. The technique used to derive these predictions can be applied to a large class of observables relevant for QCD-phenomenology.
High Energy Physics - Phenomenology , 2008, DOI: 10.1016/j.physletb.2009.05.052 Abstract: We examine the quark mass dependence of the pion vector form factor, particularly the curvature (mean quartic radius). We focus our study on the consequences of assuming that the coupling constant of the rho to pions is largely independent of the quark mass while the quark mass dependence of the rho--mass is given by recent lattice data. By employing the Omnes representation we can provide a very clean estimate for a certain combination of the curvature and the square radius, whose quark mass dependence could be determined from lattice computations. This study provides an independent access to the quark mass dependence of the rho-pi-pi coupling and in this way a non-trivial check of the systematics of chiral extrapolations. We also provide an improved value for the curvature for physical values for the quark masses, namely = 0.73 +- 0.09 fm^4 or equivalently c_V=4.00\pm 0.50 GeV^{-4}.
Physics , 1996, DOI: 10.1103/PhysRevC.55.2675 Abstract: We study the interactions of an elementary pion with a nucleon made of constituent quarks and show that the enforcement of chiral symmetry requires the use of a two-body operator, whose form does not depend on the choice of the pion-quark coupling. The coordinate space NN effective potential in the pion exchange channel is given as a sum of terms involving two gradients, that operate on both the usual Yukawa function and the confining potential. We also consider an application to the case of quarks bound by a harmonic potential and show that corrections due to the symmetry are important.
|
# Forces on Bifolding Door with horizontal hinge
I'm a programmer who's been put in charge of some motor control problems and I need your help, so please forgive me if the question is silly.
The problem in question (see inserted picture) is basically whether or not I should factor in the entire mass of the (bifold with horizontal hinge) door into the lifting load of the hoist, or if some of that load will be absorbed by the building (we can disregard any forces absorbed by the building structure here):
So if the total weight of both halves of the door is 200 pounds, should I treat the holding torque of the hoist as 200 pounds * Gravity?
If not, how is the total force on the pulley line calculated from the 200 pounds of total door weight?
• A good way to approach this problem is to think about the potential energy of the window vs. the height to which it has been pulled up; the force needed to hoist the cable is the derivative of this potential energy with respect to height. Due to the folding of the window, its center of mass moves upwards at half the speed of the cable, meaning you effectively get a factor of two in mechanical advantage. The weight on the cable is 100 lbs. – Yly Aug 30 '17 at 23:28
• You say $200\;lbs$ but the sketch says $190\;lbs$. – John Alexiou Aug 31 '17 at 0:11
• @ja72 - Good catch. I've been throwing around a nice round number to describe it but the actual combined weight is 190 pounds! – BasileSoftware Aug 31 '17 at 16:49
• @Yly - Much appreciated! This will be great help in my calculations. – BasileSoftware Aug 31 '17 at 16:50
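Putting rough numbers to the energy argument from the first comment (using the 190 lb combined weight): the centre of mass rises at about half the speed of the cable, so the static cable load is roughly $T \approx W/2 = 190/2 = 95$ lb, noticeably less than the full door weight.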
You need to analyse the forces acting on each body separately. What is important is the location of each center of mass. I am approaching this as a static (or quasi static) problem because the dynamics are way too complex for this forum.
I am not assuming the two leaves are of the exact same size, in order to accommodate the pivot-to-rope clearance $s$.
The angle $\theta$ is such that $s = 2 b \sin \theta - 2 c \sin \varphi$, or $$\theta = \sin^{-1} \left( \frac{ 2c \sin \varphi + s}{2 b} \right)$$
The vertical distance of the rope attachment to the door pivot is
$$h = r + 2 b \cos \theta + 2 c \cos \varphi$$
The sum of the forces along the x and y axes are
\begin{aligned} A_x &= 0 \\ A_y-\tfrac{W}{2}-\tfrac{W}{2}+T & = 0 \end{aligned} \Rightarrow \begin{aligned} A_x &= 0 \\ A_y & = W-T \end{aligned}
Now we need to balance the moments. Using the pivot as the origin the rotational balance is
$$s\, T + c \cos\varphi \tfrac{W}{2} + (b \cos \theta-s) \tfrac{W}{2} = 0 \Rightarrow T = \frac{W}{2} \left(1 - \frac{h-r}{2 \,s} \right) < \frac{W}{2}$$
So as the $h$ distance gets smaller, the tension gets less and less.
|
# Why is an element's atomic number always a round number, whereas its atomic mass often is not?
Sep 3, 2016
Because the element is defined by its $\text{atomic number, Z.......}$
#### Explanation:
Because the element is defined by its $\text{atomic number, Z}$, which is the number of positively charged, massive particles in the element's nucleus.
If $Z = 1$, the element is hydrogen, if $Z = 2$, the element is helium,.......if $Z = 34$, the element is selenium....
Now an element's identity is thus defined by $Z$. However, the elemental core, the nucleus, may contain different numbers of massive, neutrally charged particles, neutrons, and this gives rise to the phenomenon of isotopes.
For hydrogen, $Z = 1$, by definition. Most hydrogen nuclei contain only the 1 proton, to give ${}^{1}H$, the protium isotope; a smaller number of hydrogen nuclei may contain a neutron in addition to the discriminating proton (why discriminating?) to give the deuterium isotope, ${}^{2}H$, and an even smaller number of hydrogen nuclei may contain 2 neutrons to give the tritium isotope, ${}^{3}H$.
As the atomic number gets larger, the possibility for isotopic stability becomes greater. Many electron atoms (or many proton atoms) tend to have a range of isotopes, whose weighted average defines the quoted average isotopic mass. And thus the average atomic mass is typically not a whole number.
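As a quick worked example with standard textbook values: chlorine is roughly 75.8% ${}^{35}Cl$ and 24.2% ${}^{37}Cl$, so its average atomic mass is about $0.758\times35+0.242\times37\approx35.5$, which is why the periodic table lists approximately 35.45 rather than a whole number.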
Of course the mass number of any particular isotope is necessarily a whole number. Peculiar isotopic properties may be exploited by chemists to give a spectroscopic handle on chemistry. Nuclear magnetic resonance spectroscopy is one example, as there is usually an isotope with useful magnetic properties. Alternatively, isotopes may be used for mass spectroscopic studies - deuterium labelling is a frequent experiment.
|
How to break a line in a long equation?
I am using a class named abntex2 and I don't know how to break an equation. I've tried inserting a package amsmath, breqn, mathtools, but it does not work.
I type:
\usepackage{amsmath}
...
and when I insert the equation, I write:
$$\begin{multlined} \left({{\varphi}^{x}{\frac{\partial}{{\partial}u_{x}}}+{\varphi}^{t}{\frac{\partial}{{\partial}u_{t}}} \\ +{\varphi}^{xx}{\frac{\partial}{{\partial}u_{xx}}}+{\varphi}^{xt}{\frac{\partial}{{\partial}u_{xt}}} \\ +{\varphi}^{tt}{\frac{\partial}{{\partial}u_{tt}}}+{\varphi}^{xxx}{\frac{\partial}{{\partial}u_{xxx}}}}\right){\left(u_{t}+u_{xxx}+mu^{m-1}u_{x}\right)} \\ ={\varphi}^{x}{\left({mu^{m-1}}\right)}+{\varphi}^{t}+{\varphi}^{xxx}$$
\end{multlined}
But the double bar does not work. What am I supposed to do?
• Use multline. Sorry for abnt rules. – Sigur Mar 21 '14 at 0:17
• Welcome to TeX.SX! A tip: If you indent lines by 4 spaces, they'll be marked as a code sample. You can also highlight the code and click the "code" button (with "{}" on it). – jub0bs Mar 21 '14 at 0:17
• Yes, Sigur, but i've used {multline} too. – Poli Tolstov Mar 21 '14 at 0:30
• Please post a complete Minimal (non-)Working Example. Much more useful than code fragments. – cfr Mar 21 '14 at 0:51
• it wouldn't solve all problems, but if comes before \begin{xxx}, then \end{equation has to come after \end{xxx}. proper nesting is a must. – barbara beeton Mar 21 '14 at 2:02
You have mixed up the equation and multiline environments. Also it is multline not multlined. Further, I have removed many braces that seemed un-necessary. You have used many of them.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
%$$
\begin{multline}
\biggl(\varphi^{x}{\frac{\partial}{\partial u_{x}}}+\varphi^{t}\frac{\partial}{\partial u_{t}}
+\varphi^{xx}\frac{\partial}{\partial u_{xx}}+\varphi^{xt}\frac{\partial}{\partial u_{xt}} \\
+\varphi^{tt}\frac{\partial}{\partial u_{tt}}+\varphi^{xxx}\frac{\partial}{\partial u_{xxx}}\biggr)\left(u_{t}+u_{xxx}+mu^{m-1}u_{x}\right) \\
=\varphi^{x}{\left(mu^{m-1}\right)}+\varphi^{t}+\varphi^{xxx}
%$$
\end{multline}
\end{document}
Also note that you can't use \left( and \right) across lines. I have used biggl( and \biggr) which do not have this constraint. For details, please refer to amsmath documentation (texdoc amsldoc from command prompt)
To use \left( and \right) across lines you have to use the placeholders \right. and left., see how here:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
%$$
\begin{multline}
\left(\varphi^{x}{\frac{\partial}{\partial u_{x}}}+\varphi^{t}\frac{\partial}{\partial u_{t}}
+\varphi^{xx}\frac{\partial}{\partial u_{xx}}+\varphi^{xt}\frac{\partial}{\partial u_{xt}} \right. \\
\left. +\varphi^{tt}\frac{\partial}{\partial u_{tt}}+\varphi^{xxx}\frac{\partial}{\partial u_{xxx}}\right)\left(u_{t}+u_{xxx}+mu^{m-1}u_{x}\right) \\
=\varphi^{x}{\left(mu^{m-1}\right)}+\varphi^{t}+\varphi^{xxx}
%$$
\end{multline}
\end{document}
note the \right. \\ at the end of the first line and \left. at the beginning of the second. The visual outcome is the same as above.
Another option using split:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
$$\begin{split}
&\biggl(\varphi^{x}{\frac{\partial}{\partial u_{x}}}+\varphi^{t}\frac{\partial}{\partial u_{t}}
+\varphi^{xx}\frac{\partial}{\partial u_{xx}}+\varphi^{xt}\frac{\partial}{\partial u_{xt}} \\
& \quad+\varphi^{tt}\frac{\partial}{\partial u_{tt}}+\varphi^{xxx}\frac{\partial}{\partial u_{xxx}}\biggr)\left(u_{t}+u_{xxx}+mu^{m-1}u_{x}\right) \\
&\qquad \quad {}={}\varphi^{x}{\left(mu^{m-1}\right)}+\varphi^{t}+\varphi^{xxx}
\end{split}$$
\end{document}
• the solution to \left( and \right) across lines is to use the phantom placeholders \right. and \left. respectively at the end and the beginning of the line – Federico Mar 21 '14 at 14:36
• @Federico \left and \right is to be used very judiciously as it is not needed most of the times. \bigl and brothers are sufficient in most of the cases. :) – user11232 Mar 21 '14 at 22:33
Some slight change to ease the typing of the equation given in Harish Kumar's answer. Use the commath package to typeset the derivatives easily.
\documentclass{article}
\usepackage{amsmath,commath}
\begin{document}
$$\begin{split}
&\biggl(\varphi^{x}\dpd{}{u_{x}}+ \varphi^{t}\dpd{}{u_{t}}
+\varphi^{xx}\dpd{}{u_{xx}}+\varphi^{xt}\dpd{}{u_{xt}} \\
& \quad+\varphi^{tt}\dpd{}{u_{tt}}+\varphi^{xxx}\dpd{}{u_{xxx}} \biggr)\left(u_{t}+u_{xxx}+mu^{m-1}u_{x}\right) \\
& \qquad \quad {}={}\varphi^{x}{\left(mu^{m-1}\right)}+ \varphi^{t}+\varphi^{xxx}
\end{split}$$
\end{document}
|
Find each of the following product:
Question:
Find each of the following product:
$\left(\frac{7}{9} a b^{2}\right) \times\left(\frac{15}{7} a c^{2} b\right) \times\left(-\frac{3}{5} a^{2} c\right)$
Solution:
To multiply algebraic expressions, we use commutative and associative laws along with the law of indices, i.e., $a^{m} \times a^{n}=a^{m+n}$.
We have:
$\left(\frac{7}{9} a b^{2}\right) \times\left(\frac{15}{7} a c^{2} b\right) \times\left(-\frac{3}{5} a^{2} c\right)$
$=\left\{\frac{7}{9} \times \frac{15}{7} \times\left(-\frac{3}{5}\right)\right\} \times\left(a \times a \times a^{2}\right) \times\left(b^{2} \times b\right) \times\left(c^{2} \times c\right)$
$=-a^{4} b^{3} c^{3}$
Thus, the answer is $-a^{4} b^{3} c^{3}$.
|
## College Physics (4th Edition)
(a) The wave is traveling in the +y-direction. (b) $B_x = \frac{E_m}{c}~sin~(ky+\omega t+\frac{\pi}{6})$ $B_y = 0$ $B_z = 0$
(a) From the term $ky-\omega t$, we can see that the wave is traveling in the +y-direction. (b) Since the direction of propagation is determined by the cross-product $E\times B$, by the right-hand rule, the magnetic field must be pointing in the +x-direction at y = 0 and t = 0. Note that $(+z)\times (+x) = +y$ We can find the components of the magnetic field of this wave: $B_x = \frac{E_m}{c}~sin~(ky+\omega t+\frac{\pi}{6})$ $B_y = 0$ $B_z = 0$
|
The oxidation numbers of carbon in $(\mathrm{CN})_2$, $\mathrm{CN}^-$ and $\mathrm{OCN}^-$ are +3, +2 and +4 respectively.
These are obtained as shown below:
Let the oxidation number of C be x.
The oxidation number of carbon in the various species is:
${\left(\stackrel{+3}{\mathrm{C}}\mathrm{N}\right)}_{2\left(g\right)}+2{\mathrm{OH}^{-}}_{\left(aq\right)}\to \stackrel{+2}{\mathrm{C}}{{\mathrm{N}}^{-}}_{\left(\mathrm{aq}\right)}+\mathrm{O}\stackrel{+4}{\mathrm{C}}{{\mathrm{N}}^{-}}_{\left(\mathrm{aq}\right)}+{\mathrm{H}}_{2}{\mathrm{O}}_{\left(\mathrm{l}\right)}$
It can be easily observed that the same compound is being reduced and oxidised
simultaneously in the given equation. Reactions in which the same compound is reduced
and oxidised is known as disproportionation reactions. Thus, it can be said that the alkaline
decomposition of cyanogen is an example of a disproportionation reaction.
8.21 : The ${\mathrm{Mn}}^{3+}$ ion is unstable in solution and undergoes disproportionation to give ion. Write a balanced ionic equation for the reaction.
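The balanced ionic equation usually quoted for this disproportionation (given here as the standard result for aqueous $\mathrm{Mn}^{3+}$) is
$2{\mathrm{Mn}^{3+}}_{\left(aq\right)}+2{\mathrm{H}_2\mathrm{O}}_{\left(l\right)}\to {\mathrm{MnO}_2}_{\left(s\right)}+{\mathrm{Mn}^{2+}}_{\left(aq\right)}+4{\mathrm{H}^{+}}_{\left(aq\right)}$
in which one manganese is reduced from +3 to +2 while the other is oxidised from +3 to +4 in $\mathrm{MnO}_2$.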
|
Square root of a square root
Algebra Level 3
Which of the following numbers does not have a square root in the form $$x+y\sqrt { 2 }$$, where x and y are positive integers?
|
### Theory:
Let us see some more pairs of angles, in this lesson.
1. Co-interior angles:
Each pair of angles named $$∠3$$ and $$∠6$$, $$∠4$$ and $$∠5$$ are marked on the same side of transversal line $$l$$ and are lying between the lines $$m$$ and $$n$$. These angles are lying on the interior of the lines $$m$$ and $$n$$ as well as the same side of the transversal line $$l$$. So these are called as co-interior angles.
2. Co-exterior angles:
Each pair of angles named $$∠1$$ and $$∠8$$, $$∠2$$ and $$∠7$$ are marked on the same side of transversal line $$l$$ and are lying outside of the lines $$m$$ and $$n$$. These angles are lying on the exterior of the lines $$m$$ and $$n$$ as well as the same side of the transversal line $$l$$. So these are called co-exterior angles.
|
# Competition Analysis
Not Reviewed
barx =
Tags:
Rating
ID
MichaelBartmess.Competition Analysis
UUID
121acff3-c227-11e4-a3bb-bc764e2038f2
This equation computes weighted evaluation criteria for a vCalc competition held in Nairobi, Kenya. The judging criteria and their respective weights are input from the data set:
# Inputs
Competitor - name of vCalc competition participant
# Usage
The competition coordinator adds new records into the dataset containing competitors' score cards (data set: vCalc Competition on ddMMyyyy) for each person or team signed up for the competition.
The competitors' (teams') names must be added to this equation's enumerated list so that the user has the pull-down option to see the weighted final score for each competitor.
The competition coordinator must assign the weights to the evaluation criteria as inputs to this equation. These are weights for the relative value of each evaluation criteria and are used as the weighting factors in a weighted geometric mean inside this equation.
If any evaluation criteria is not to be considered, it can be assigned a weighting factor of zero. Generally it is easier to understand the weighting scheme if the weights are all from a standard range (like 1 through 10). Weights may be real numbers but integer values may be easier to estimate.
# Description
The weighted geometric mean is a powerful equation in decision analysis applications, allowing you to weight and combine externally generated values for a set of decision criteria. The data set is limited only by the number of rows of data provided in the data set. To simplify the combining function, it is assumed the judging criteria data has been normalized.
Given a set of judging data, X = {X_1, X_2, ... , X_n}, the corresponding weights are: W = {W_1, W_2, ... , W_n}. This equation computes the resultant weighted geometric mean:
barx = (prod_(i=1)^N X_i^(W_i))^(1/(sum_(i=1)^N W_i)), where X = (X_1,X_2, ..., X_N) and W = (W_1,W_2, ..., W_N)
If all the weights are set equal, the weighted geometric mean is equivalent to the common geometric mean. The weighted geometric mean has the ease-of-use characteristic that any data value whose weight is set to zero will not affect the result. So, this allows you to do quick what-if analysis where, in addition to trying different combination of data values and weights, you can test the question of "what if I don't consider the n-th criteria at all".
This equation's particular implementation of the weighted geometric mean is intended to support comparisons of combined criteria that are best combined when normalized. The obvious use in decision support applications is the ability to change the weights representing the relative importance of a specific criterion and/or the criteria values themselves, and compare the results to another set of weights and criteria values.
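As an illustration of the combining function (a minimal C++ sketch, not vCalc's actual implementation; scores are assumed to be normalized and strictly positive, and a zero weight makes the corresponding criterion drop out because any value raised to the power 0 is 1):
#include <cmath>
#include <iostream>
#include <vector>

// Weighted geometric mean of judging scores X with weights W.
double weightedGeometricMean(const std::vector<double>& X, const std::vector<double>& W) {
    double logSum = 0.0, weightSum = 0.0;
    for (std::size_t i = 0; i < X.size(); ++i) {
        logSum += W[i] * std::log(X[i]);   // work in logs to avoid overflow/underflow
        weightSum += W[i];
    }
    return std::exp(logSum / weightSum);
}

int main() {
    std::vector<double> scores  = {0.8, 0.6, 0.9};   // hypothetical normalized criteria
    std::vector<double> weights = {10, 5, 0};        // the zero weight ignores the third criterion
    std::cout << weightedGeometricMean(scores, weights) << "\n";  // about 0.727
}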
|
# 1. Problem Description
Roman numerals are represented by seven different symbols: I, V, X, L, C, D and M.
Symbol Value
I 1
V 5
X 10
L 50
C 100
D 500
M 1000
For example, two is written as II in Roman numeral, just two one’s added together. Twelve is written as, XII, which is simply X + II. The number twenty seven is written as XXVII, which is XX + V + II.
Roman numerals are usually written largest to smallest from left to right. However, the numeral for four is not IIII. Instead, the number four is written as IV. Because the one is before the five we subtract it making four. The same principle applies to the number nine, which is written as IX. There are six instances where subtraction is used:
• I can be placed before V (5) and X (10) to make 4 and 9.
• X can be placed before L (50) and C (100) to make 40 and 90.
• C can be placed before D (500) and M (1000) to make 400 and 900.
Given a roman numeral, convert it to an integer. Input is guaranteed to be within the range from 1 to 3999.
Example 1:
Input: “III”
Output: 3
Example 2:
Input: “IV”
Output: 4
Example 3:
Input: “IX”
Output: 9
Example 4:
Input: “LVIII”
Output: 58
Explanation: C = 100, L = 50, XXX = 30 and III = 3.
Example 5:
Input: “MCMXCIV”
Output: 1994
Explanation: M = 1000, CM = 900, XC = 90 and IV = 4.
# 3. C++ Code
#include <string>
#include <cmath>
using namespace std;

// My solution (beats 34%)
int romanToInt(string s)
{
// lookup table: Roman representations of digits 1-9 in each decimal place
char *c[4][10] = {
{ "","I","II","III","IV","V","VI","VII","VIII","IX" },
{ "","X","XX","XXX","XL","L","LX","LXX","LXXX","XC" },
{ "","C","CC","CCC","CD","D","DC","DCC","DCCC","CM" },
{ "","M","MM","MMM" }
};
int res = 0; // result
int i = 3;
int j = 3;
int flag = 0;
for ( i; i >= 0; i--)
{
for ( j ; j > 0; j--)
{
if (s.find(c[i][j],flag) == flag) // matched at the current search position
{
res += j*pow(10, i);
string tmp_str = c[i][j];
flag += tmp_str.length(); // advance the start position for the next search
break;
}
}
j = 9;
}
return res;
}
// A cleaner approach from the discussion section
int romanToInt2(string s)
{
int res = 0;
for (int i = s.length() - 1; i >= 0; i--)
{
switch (s[i])
{
case 'I':
res += (res >= 5 ? -1 : 1); // subtract I when a larger numeral has already been seen to its right
break;
case 'V':
res += 5;
break;
case'X':
res += 10 * (res >= 50 ? -1 : 1);
break;
case 'L':
res += 50;
break;
case 'C':
res += 100 * (res >= 500 ? -1 : 1);
break;
case 'D':
res += 500;
break;
case 'M':
res += 1000;
break;
}
}
return res;
}
//Bonus: convert an integer to a Roman numeral
string intTointroman(int nums)
{
string s;
const char *c[4][10] = {
{ "","I","II","III","IV","V","VI","VII","VIII","IX" },
{ "","X","XX","XXX","XL","L","LX","LXX","LXXX","XC" },
{ "","C","CC","CCC","CD","D","DC","DCC","DCCC","CM" },
{ "","M","MM","MMM" }
};
s.append(c[3][nums / 1000]);
s.append(c[2][nums % 1000 / 100]);
s.append(c[1][nums % 100 / 10]);
s.append(c[0][nums % 10]);
return s;
}
|
Volume 345 - International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions (HardProbes2018) - Electroweak Probes
Possible non-prompt photons in pp collisions and their effects in AA analyses
A. Monnai
Full text: pdf
Pre-published on: January 11, 2019
Published on: April 24, 2019
Abstract
Direct photons are a powerful tool for elucidating the properties of the hot QCD matter in heavy-ion collisions. They are conventionally estimated by taking into account prompt photon contributions in proton-proton collisions and thermal and prompt photon contributions in heavy-ion collisions. However, there could also be other sources of photons such as pre-equilibrium photons. I investigate prompt, pre-equilibrium and thermal photons and their effects on the direct photon $p_T$ spectra at the CERN Large Hadron Collider energies.
DOI: https://doi.org/10.22323/1.345.0173
|
Formatted question description: https://leetcode.ca/all/926.html
# 926. Flip String to Monotone Increasing (Medium)
A string of '0's and '1's is monotone increasing if it consists of some number of '0's (possibly 0), followed by some number of '1's (also possibly 0.)
We are given a string S of '0's and '1's, and we may flip any '0' to a '1' or a '1' to a '0'.
Return the minimum number of flips to make S monotone increasing.
Example 1:
Input: "00110"
Output: 1
Explanation: We flip the last digit to get 00111.
Example 2:
Input: "010110"
Output: 2
Explanation: We flip to get 011111, or alternatively 000111.
Example 3:
Input: "00011000"
Output: 2
Explanation: We flip to get 00000000.
Note:
1. 1 <= S.length <= 20000
2. S only consists of '0' and '1' characters.
## Solution 1. Two Pass
When we separate the string into two parts, the flip count is the sum of:
• The ones at the left side
• The zeros at the right side.
So we try different separation points from left to right, and for each trial we can easily get the flip count by keeping track of the above two counts.
// OJ: https://leetcode.com/problems/flip-string-to-monotone-increasing/
// Time: O(N)
// Space: O(1)
class Solution {
public:
int minFlipsMonoIncr(string S) {
int rightZeros = 0, leftOnes = 0;
for (char c : S) if (c == '0') rightZeros++;
int ans = rightZeros;
for (char c : S) {
if (c == '1') leftOnes++;
else rightZeros--;
ans = min(ans, rightZeros + leftOnes);
}
return ans;
}
};
## Solution 2. One Pass
Think in the DP way. Assume we’ve already solved the sub-problem for substring S[0..i], and the solution for it is flipCount[i].
Then we look at the next character S[i + 1].
• If it’s 1, we can simply use the solution for S[0..i], so flipCount[i + 1] = flipCount[i].
• If it’s 0, think about the options we have:
1. Firstly, we can choose to flip this 0 to 1 and reuse the solution for S[0..i]. In this case flipCount[i + 1] = flipCount[i] + 1.
2. What if the best solution is not to flip? Then we need to turn all 1s in S[0..i] into 0. Assume the count of 1s in S[0..i] is ones[i], then flipCount[i + 1] = ones[i]
Given these two options, we pick the one with smaller result.
In sum:
flipCount[0] = 0
ones[0] = 0
flipCount[i + 1] = flipCount[i]                    (if S[i + 1] == '1')
flipCount[i + 1] = min(flipCount[i] + 1, ones[i])  (if S[i + 1] == '0')
where 0 <= i <= N - 2
// OJ: https://leetcode.com/problems/flip-string-to-monotone-increasing/
// Time: O(N)
// Space: O(1)
// Ref: https://leetcode.com/problems/flip-string-to-monotone-increasing/discuss/189751/C%2B%2B-one-pass-DP-solution-0ms-O(n)-or-O(1)-one-line-with-explaination.
class Solution {
public:
int minFlipsMonoIncr(string S) {
int ones = 0, ans = 0;
for (char c : S) {
if (c == '1') ones++;
else ans = min(ans + 1, ones);
}
return ans;
}
};
Java
• class Solution {
public int minFlipsMonoIncr(String S) {
if (S == null || S.length() <= 1)
return 0;
int length = S.length();
int[][] dp = new int[length][2];
if (S.charAt(0) == '0')
dp[0][1] = 1;
else
dp[0][0] = 1;
for (int i = 1; i < length; i++) {
dp[i][0] = dp[i - 1][0];
dp[i][1] = Math.min(dp[i - 1][0], dp[i - 1][1]);
char c = S.charAt(i);
if (c == '0')
dp[i][1]++;
else
dp[i][0]++;
}
return Math.min(dp[length - 1][0], dp[length - 1][1]);
}
}
• // OJ: https://leetcode.com/problems/flip-string-to-monotone-increasing/
// Time: O(N)
// Space: O(1)
// Ref: https://leetcode.com/problems/flip-string-to-monotone-increasing/discuss/189751/C%2B%2B-one-pass-DP-solution-0ms-O(n)-or-O(1)-one-line-with-explaination.
class Solution {
public:
int minFlipsMonoIncr(string s) {
int ones = 0, ans = 0;
for (char c : s) {
if (c == '1') ones++;
else ans = min(ans + 1, ones);
}
return ans;
}
};
• class Solution(object):
      def minFlipsMonoIncr(self, S):
          """
          :type S: str
          :rtype: int
          """
          N = len(S)
          P = [0]  # P[i] = number of '1's in S[:i]
          for s in S:
              P.append(P[-1] + int(s))
          # For each split point i: flip the P[i] ones on the left and the
          # (N - i) - (P[-1] - P[i]) zeros on the right.
          return min(P[i] + (N - P[-1]) - (i - P[i]) for i in range(len(P)))
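A quick sanity check of the prefix-sum version against the examples from the problem statement (assuming the Python Solution class directly above):

```python
sol = Solution()
for s, expected in [("00110", 1), ("010110", 2), ("00011000", 2)]:
    print(s, sol.minFlipsMonoIncr(s), expected)   # computed value matches expected
```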
|
# Special Lagrangian fibrations, instanton corrections and mirror symmetry
-
Denis Auroux, MIT
Fine Hall 314
We study the extension of mirror symmetry to the case of Kahler manifolds which are not Calabi-Yau: the mirror is then a Landau-Ginzburg model, i.e. a noncompact manifold equipped with a holomorphic function called superpotential. The Strominger-Yau-Zaslow conjecture can be extended to this setting by considering special Lagrangian torus fibrations in the complement of an anticanonical divisor, and constructing the superpotential as a weighted count of holomorphic discs. In particular we show how "instanton corrections" arise in this setting from wall-crossing discontinuities in the holomorphic disc counts. Various explicit examples in complex dimension 2 will be considered.
|
## Essential University Physics: Volume 1 (3rd Edition)
We know that $A=350\ \text{ft}^2=(350)(0.3048\ \text{m})^2=32.516\ \text{m}^2$ and $V=1\ \text{gallon}=3.785\ \text{L}$. Now we can find the coverage: $\text{coverage}=\frac{32.516\ \text{m}^2}{3.785\ \text{L}}=8.59\ \frac{\text{m}^2}{\text{L}}$
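A quick numeric check of that conversion (assuming plain Python and the same constants):

```python
area_ft2 = 350
area_m2 = area_ft2 * 0.3048**2        # 1 ft = 0.3048 m
volume_L = 3.785                      # 1 US gallon in liters
print(round(area_m2, 3), round(area_m2 / volume_L, 2))   # 32.516 m^2 and 8.59 m^2/L
```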
|
# Times less than Planck Time?
• 25 Replies
• 10023 Views
#### Æthelwulf
• Sr. Member
• 358
##### Times less than Planck Time?
« on: 27/04/2012 02:51:46 »
''Hawking's singularity theorem is for the whole universe, and works backwards-in-time: in Hawking's original formulation, it guaranteed that the Big Bang has infinite density. Hawking later revised his position in A Brief History of Time (1988) where he stated "There was in fact no singularity at the beginning of the universe" (p50). This revision followed from quantum mechanics, in which general relativity must break down at times less than the Planck time. Hence general relativity cannot be used to show a singularity.''
excerpt from the holy book, wiki.
Sure, quantum mechanics doesn't deal with singularities... but I wonder what motivated the above. Saying General Relativity breaks down at times less than the Planck Time surely is essentially meaningless anyway, since we cannot measure anything which exists below the Planck Time, or if we could it would essentially seem not to change at all. Does anyone have an inkling of how Hawking is managing that argument?
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #1 on: 27/04/2012 14:44:53 »
Why has this been moved to new theories?
This isn't a new theory. I am talking about Hawking's own theories, which are usually well-established within the mainstream.
#### JP
• Neilep Level Member
• 3366
##### Re: Times less than Planck Time?
« Reply #2 on: 27/04/2012 16:00:38 »
. . . we cannot measure anything which exists below the Planck Time or if we could it would essentially seem not to change at all. . .
What's the basis for that statement? I see this get repeated all the time, but no one has ever offered a good reason for it. To the best of my knowledge, the Planck time is where our theories break down, since we know neither quantum mechanics nor general relativity alone will be sufficient there and we don't yet have a testable theory that ties them together.
My understanding of anti-singularity arguments is that what GR predicts as a singularity is probably going to be a much richer phenomenon when we figure out a way to describe what actually happens on those scales.
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #3 on: 27/04/2012 16:30:28 »
. . . we cannot measure anything which exists below the Planck Time or if we could it would essentially seem not to change at all. . .
What's the basis for that statement? I see this get repeated all the time, but no one has ever offered a good reason for it. To the best of my knowledge, the Planck time is where our theories break down, since we know neither quantum mechanics nor general relativity alone will be sufficient there and we don't yet have a testable theory that ties them together.
My understanding of anti-singularity arguments is that what GR predicts as a singularity is probably going to be a much richer phenomenon when we figure out a way to describe what actually happens on those scales.
You know, we only see objects when light bounces off objects yes?
Well, the planck time is the amount of time for a photon to cross a distance of 1 planck length. In light of this, we can be sure that we cannot make any measurements on times smaller than this because light would not be able to travel the distance quick enough.
#### JP
• Neilep Level Member
• 3366
##### Re: Times less than Planck Time?
« Reply #4 on: 27/04/2012 17:06:46 »
You're assuming the Planck length is some fundamental unit there. Otherwise, I could insert any other unit of distance and make the same argument. I could take a light year and say that it takes 1 year for light to travel the distance of a light year. Therefore I can't measure anything less than a year long.
So that returns us to the same question: what makes the Planck scale the length beyond which we can't measure anything?
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #5 on: 27/04/2012 17:30:34 »
You're assuming the Planck length is some fundamental unit there. Otherwise, I could insert any other unit of distance and make the same argument. I could take a light year and say that it takes 1 year for light to travel the distance of a light year. Therefore I can't measure anything less than a year long.
So that returns us to the same question: what makes the Planck scale the length beyond which we can't measure anything?
It's very technical. Even for my poor brain.
At Planck lengths, geometry as it is understood by General Relativity breaks down. So a photon travelling a light year is absolutely fine, but we can't infer it to be fundamental. The Planck lengths are, however, and they are derived using dimensional analysis. Another way to state this is that the Schwarzschild radius of a black hole is equal to the Compton wavelength at the Planck scale, thus a photon trying to probe this would gain no information at all.
For a quick comparison, the Classical Electron Radius is in fact $$\frac{1}{137}$$ times the Compton Wavelength. The Compton Wavelength is (h/Mc), where h is Planck's constant and has a value of 6.62606957(29) X 10^(−34) J·s. The Compton Wavelength itself has a value for the electron of 2.4263102175(33) X 10^(−12) m (the value varies with different particles) and is a measure of the wavelength of a particle being equal to that of a photon (a particle of light energy) whose energy is the same as the rest-mass energy of the particle.
Basically, all particles have a wavelength. Photons can never be at rest, but the energy of a photon can be low enough to have its wavelength match any particle that is at rest. It's often seen in the eyes of many scientists as the ''size'' of a particle. Actually, a more accurate representation of the size of an object would be the Reduced Compton Wavelength. This is just when you divide the Compton Wavelength by $$2\pi$$ and it gives a smaller representation for the mass of a system.
Furthermore, if a photon could measure a planckian object, it could actually create a new class of particle called a Planck Particle - it would distort that space so badly that the photon would be gobbled up and no measurement could be performed. This is due to the Uncertainty Principle if my memory serves
http://en.wikipedia.org/wiki/Planck_particle
« Last Edit: 27/04/2012 17:42:34 by Æthelwulf »
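To put rough numbers on that argument, here is a minimal Python sketch (approximate SI constants assumed) of the Planck quantities, including the coincidence mentioned above between the Compton wavelength and the Schwarzschild radius at the Planck mass:

```python
import math

hbar = 1.054571817e-34   # J s (reduced Planck constant)
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s

m_planck = math.sqrt(hbar * c / G)      # ~2.18e-8 kg
l_planck = math.sqrt(hbar * G / c**3)   # ~1.62e-35 m
t_planck = l_planck / c                 # ~5.39e-44 s

compton = hbar / (m_planck * c)         # reduced Compton wavelength of a Planck-mass particle
schwarz = 2 * G * m_planck / c**2       # Schwarzschild radius of the same mass
print(l_planck, t_planck)
print(compton, schwarz)                 # same scale; they differ only by a factor of 2
```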
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #6 on: 27/04/2012 17:40:06 »
Here is a derivation I quickly looked over and which might help.
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #7 on: 27/04/2012 17:45:37 »
You're assuming the Planck length is some fundamental unit there. Otherwise, I could insert any other unit of distance and make the same argument. I could take a light year and say that it takes 1 year for light to travel the distance of a light year. Therefore I can't measure anything less than a year long.
So that returns us to the same question: what makes the Planck scale the length beyond which we can't measure anything?
It's very technical. Even for my poor brain.
At planck lengths, geometry as it is understood by General Relativity breaks down. So a photon travelling a light year is absolutely fine but we can't infer it to be fundamental. The planck lengths are however and they are derived using dimensional analysis. Another way to state this, is that the Schwartzchild radius of a black hole is equal to the Compton wavelength at the planck scale thus a photon trying to probe this would gain no information at all.
For a quick comparisson, the Classical Electron Radius is in fact $$\frac{1}{137}$$ times larger than the Compton Wavelength. The Compton Wavelength is (h/Mc) where h is Plancks Constant and it has a value of 6.62606957(29) X 10^(−34) j.s. The Compton Wavelength itself has a value for the electron as 2.4263102175 +(-) 33 X 10^(−12) m value varies with different particles) and is a measure itself of the wavelength of a particle being equal to a photon (a particle of light energy) whose energy is the same as the rest-mass energy of the particle.
Basically, all particles have a wavelength. Photon's can never be at rest but the energy of a photon can be low enough to have it's wavelength match any particle who is at rest. It's often seen in the eye's of many scientists as the ''size'' of a particle. Actually, a more accurate representation of the size of an object would be the Reduced Compton Wavelength. This is just when you divide the Compton Wavelength by $$2\pi$$ and it gives a smaller representation for the mass of a system.
Furthermore, if a photon could measure a planckian object, it could actually create a new class of particle called a Planck Particle - it would distort that space so badly that the photon would be gobbled up and no measurement could be performed. This is due to the Uncertainty Principle if my memory serves
http://en.wikipedia.org/wiki/Planck_particle
You know, I remember having a discussion with a string theorist once and he was asbolutely certain that the Planck time did not need to mean the smallest time that could be extrapolated from science. I don't know if his string theory education allowed that possibility... I know minimal about the theory. As far as I can tell however, most physicists seem to agree that the Planck Lengths are fundamental and you can't get anything smaller than this. We've only been able to measure larger times anyway. It will be long time before we can actually attempt to put the Planck time to the experimental physics.
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #8 on: 27/04/2012 18:01:55 »
Here is a derivation I quickly looked over and which might help.
There are a few jumps in these derivations you need to be careful of just looking over it again, it may not seem clear. For instance, when it asks us to take $$\ell$$ as the radius of a sphere... thus
$$m = \frac{\hbar}{2c\ell}$$
Then it asks to consider a special case
$$\frac{1}{2} mc^2 = \frac{Gm^2}{R}$$
a bit of jump without a derivation. I work it out as, multiply both sides of $$m = \frac{\hbar}{2c\ell}$$
with the speed of light squared,
$$mc^2 = \frac{\hbar c}{2\ell}$$
The c in the denominator on the right cancels, a $$c$$ appears besides the reduced planck constant $$\hbar$$. Then one must know that $$\hbar c = Gm^2$$ so
$$mc^2 = \frac{Gm^2}{2\ell}$$
And that looks close to the equation for $$\frac{1}{2} mc^2 = \frac{Gm^2}{R}$$
« Last Edit: 27/04/2012 18:03:46 by Æthelwulf »
#### imatfaal
• Neilep Level Member
• 2787
• rouge moderator
##### Re: Times less than Planck Time?
« Reply #9 on: 27/04/2012 18:41:19 »
The Planck Scale
The Planck scale is where one can no longer rely on GR and Quantum Gravity is needed. Additionally QFTs breakdown because one is no longer able to renormalise out the effects of gravity. It is beyond the planck scale that we need unified theories - the four forces are no longer separate but are all aspects of a single force (possibly)
The planck scale, the p. time, the p. length, and the p. mass do not form a fundamental limit beyond which nature does not pass (as an analogy, absolute zero is a fundamental limit); the planck scale is a limit of our current understanding, but we are almost certain that it can be exceeded physically - look at the Planck Epoch aka Planck Era.
Experimentally - we are not even close to the Planck Scale - and we don't have any good ideas of how to get there yet; but that is a technical limit not a fundamental one
There’s no sense in being precise when you don’t even know what you’re talking about. John Von Neumann
At the surface, we may appear as intellects, helpful people, friendly staff or protectors of the interwebs. Deep down inside, we're all trolls. CaptainPanic @ sf.n
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #10 on: 27/04/2012 18:47:24 »
The Planck Scale
The Planck scale is where one can no longer rely on GR and Quantum Gravity is needed. Additionally QFTs breakdown because one is no longer able to renormalise out the effects of gravity. It is beyond the planck scale that we need unified theories - the four forces are no longer separate but are all aspects of a single force (possibly)
The planck scale, the p. time, the p. lenght, and the p. mass do not form a fundamental limit beyond which nature does not pass (as an analogy absolute zero is a fundamental limit); the planck scale is a limit of our current understanding, but we are almost certain that it can be exceeded physically - look at the Planck Epoch aka Planck Era.
Experimentally - we are not even close to the Planck Scale - and we don't have any good ideas of how to get there yet; but that is a technical limit not a fundamental one
Imatfal, could you please return this thread back to the question and answers forum. This is completely in the wrong place. It is not a new theory thread.
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #11 on: 28/04/2012 06:12:08 »
Interestingly, Brian Greene has speculated on sub-Planckian existences. Whilst the Planck lengths could be fundamental, we don't know this for a fact. He said:
"the familiar notion of space and time do not extend into the sub-Planckian realm, which suggests that space and time as we currently understand them may be mere approximations to more fundamental concepts that still await our discovery.”
Which is interesting, because if anyone actually follows my own speculations and contentions, I have been wheeling the idea that space and time could certainly not be fundamental since in the very beginning, there was no geometry (space-time) - not in the sense that GR deals with it.
#### yor_on
• Naked Science Forum GOD!
• 12188
• (Ah, yes:) *a table is always good to hide under*
##### Re: Times less than Planck Time?
« Reply #12 on: 28/04/2012 06:19:44 »
You know JP, you might be right. Maybe the Planck scale should be seen as a definition we have from the physics and math we know today. But there are so much physics inter-bound, as it seems to me, with that scale. I have this nice historical summary of the development of the Planck scale First Steps of Quantum Gravity and the Planck Values. As in most cases what people thought as they created something helps me immensely to see what they meant, once when it started.
But sure, there might be something else lurking After all, where does it end? 'photons' are dimensionless, aren't they?
"BOMB DISPOSAL EXPERT. If you see me running, try to keep up."
#### yor_on
• Naked Science Forum GOD!
• 12188
• (Ah, yes:) *a table is always good to hide under*
##### Re: Times less than Planck Time?
« Reply #13 on: 28/04/2012 06:22:37 »
Well Wulf, Brian has a very fertile mind And.. He.. Will.. speculate..
and as he does, sell books..
==
But yes, he's interesting, and some of his attempts to explain, especially entanglements, was very good. Still I prefer the experiments first. Theory building on that, and I'm not discussing weak measurements when I say experiments.
« Last Edit: 28/04/2012 06:26:34 by yor_on »
"BOMB DISPOSAL EXPERT. If you see me running, try to keep up."
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #14 on: 28/04/2012 06:29:27 »
Well Wulf, Brian has a very fertile mind And.. He.. Will.. speculate..
and as he does, sell books..
==
But yes, he's interesting, and some of his attempts to explain, especially entanglements, was very good. Still I prefer the experiments first. Theory building on that, and I'm not discussing weak measurements when I say experiments.
I do try and speculate myself. I am already trying to write out some kind of unification in my head as well, so I can see why physicists enjoy the speculations so long as there is a real science behind it
Unlike Hawking... who recently advocated M-theory as the theory of everything... I was very disappointed at this.
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #15 on: 28/04/2012 06:35:56 »
I dare say sub-Planck physics might be the right track though.
If spacetime arose from a singularity, a point - in no dimensions, then perhaps this makes sense why spacetime would make no sense below the Planck lengths. Below these lengths, we must assume that space and time are not fundamental. And below this length, the uncertainty principle concerning energy is very high. Hopefully all these things will lead to some clues in which we can treat the initial conditions with a new type of understanding, or maybe try and understand it with the physics we have - as it may just be a matter of piecing a very complicated jigsaw puzzle together.
#### yor_on
• Naked Science Forum GOD!
• 12188
• (Ah, yes:) *a table is always good to hide under*
##### Re: Times less than Planck Time?
« Reply #16 on: 28/04/2012 06:41:16 »
Oh yes, I agree, physics is the best game there is and you're a gamer Wulf.
And I shouldn't be so harsh on Brian, he's a cool guy, although temperamental at times as seen at some blogs
He's doing some pretty impressive speculating, as we all want to do at times
==
As for above and under Planck, it's like you said, most of the physics we use today draw a line there for what we can explain. Maybe we will get a way to prove scales under it too, Smolin had some ideas there, or rather some of his friends? Using astronomical evidence for drawing conclusions of what might be under Planck scale.
"BOMB DISPOSAL EXPERT. If you see me running, try to keep up."
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #17 on: 28/04/2012 06:44:32 »
Oh yes, I agree, physics is the best game there is and you're a gamer Wulf.
And I shouldn't be so harsh on Brian, he's a cool guy, although temperamental at times as seen at some blogs
He's doing some pretty impressive speculating, as we all want to do at times
==
As for above and under Planck, it's like you said, most of the physics we use today draw a line there for what we can explain. Maybe we will get a way to prove scales under it too, Smolin had some ideas there, or rather some of his friends? Using astronomical evidence for drawing conclusions of what might be under Planck scale.
In regards to Smolin, did he? (or his friends)? I haven't seen any of that work, do you know where a link is at?
#### yor_on
• Naked Science Forum GOD!
• 12188
• (Ah, yes:) *a table is always good to hide under*
##### Re: Times less than Planck Time?
« Reply #18 on: 28/04/2012 06:53:00 »
I can try to find it, but I think I read it in his (latest(?) book, although I have a memory of seeing it somewhere else too. Give me a minute..
"BOMB DISPOSAL EXPERT. If you see me running, try to keep up."
#### yor_on
• Naked Science Forum GOD!
• 12188
• (Ah, yes:) *a table is always good to hide under*
##### Re: Times less than Planck Time?
« Reply #19 on: 28/04/2012 06:54:25 »
"BOMB DISPOSAL EXPERT. If you see me running, try to keep up."
#### yor_on
• Naked Science Forum GOD!
• 12188
• (Ah, yes:) *a table is always good to hide under*
##### Re: Times less than Planck Time?
« Reply #20 on: 28/04/2012 06:59:54 »
But there was some others too in the book?
Try this one too 'Can we probe planck-scale physics with quantum optics?' at http://backreaction.blogspot.se/
"BOMB DISPOSAL EXPERT. If you see me running, try to keep up."
#### JP
• Neilep Level Member
• 3366
##### Re: Times less than Planck Time?
« Reply #21 on: 28/04/2012 18:15:07 »
It looks like all the arguments against making sub-Planck measurements use general relativity + quantum mechanics to "prove" that the energies required to probe sub-Planck lengths will create Planck scale black holes. But that assumes GR and quantum mechanics both hold at that scale, which we don't expect to be the case...
#### Æthelwulf
• Sr. Member
• 358
##### Re: Times less than Planck Time?
« Reply #22 on: 28/04/2012 19:09:26 »
That was so weird... a good while ago I couldn't even post here lol
#### syhprum
• Neilep Level Member
• 3893
##### Re: Times less than Planck Time?
« Reply #23 on: 28/04/2012 19:33:23 »
There is a technical difficulty reading this as it is much tooooooooooooooooooooooooooo wiiiiiiiiiiiiiiiiiiiiiiiiiiide
syhprum
#### yor_on
• Naked Science Forum GOD!
• 12188
• (Ah, yes:) *a table is always good to hide under*
##### Re: Times less than Planck Time?
« Reply #24 on: 29/04/2012 09:52:54 »
It looks like all the arguments against making sub-Planck measurements use general relativity + quantum mechanics to "prove" that the energies required to probe sub-Planck lengths will create Planck scale black holes. But that assumes GR and quantum mechanics both hold at that scale, which we don't expect to be the case...
That's the way I think of it too JP. As if we have 'phase transitions' of a sort, describing one thing at the macroscopic scale, another at QM level, a third under it. What will be interesting is the question of 'causality chains' and a 'arrow' there. I don't expect there to be any linear definition as our 'normal arrow' possible myself at/under Planck scale. But then again, what will 'exist' there? Bosons?
"BOMB DISPOSAL EXPERT. If you see me running, try to keep up."
#### yor_on
• Naked Science Forum GOD!
• 12188
• (Ah, yes:) *a table is always good to hide under*
##### Re: Times less than Planck Time?
« Reply #25 on: 29/04/2012 10:13:26 »
Although I better point out one thing. As I think of relativity, from a point of 'locality', meaning that all reference frames are locally equivalent. Which, simply stated, pleases me by making a Planck length 'invariant' in any local measurement (that is, if we could:) no matter where you do it, or your relative speed. If looked at that way one does not have to consider a relative length of something, depending on mass/motion etc. So a Planck length will then be a Planck length. But I agree fully on that there will be something more after that, although I don't expect our current definitions to cope with describing it.
=
And I can do that by relate a 'arrow' to 'c', as well as define a length to never change for you 'locally'. You may find a twin experiment to be true, But there is no way you will find a 'changed length', as some rulers compared between frames of reference relatively, to stand the final test of joining 'frames of reference'.
So what is 'real' to me is how you can put it to that test, and from the conclusions you get you will be able to define the 'properties' of whatever concept you're laboring with. Which makes the arrow locally equivalent to 'c', and a 'length' locally invariant.
« Last Edit: 29/04/2012 10:29:46 by yor_on »
"BOMB DISPOSAL EXPERT. If you see me running, try to keep up."
|
Service Desk Practitioners Forum
## Changing Description field text limit of 255 characters
SOLVED
Go to solution
Trusted Contributor.
## Changing Description field text limit of 255 characters
Can someone tell me how I can Change the Description field text limit of 255 characters?
3 REPLIES
Acclaimed Contributor.
Solution
## Re: Changing Description field text limit of 255 characters
Hi,
regardless of what SP you are on you can't change the size of any of the out-of-box default text fields.
You will need to activate and use one of the 4k text fields, place it on the form and then move the text from the 255 field into the new 4k text field. There is the ability to use Update All to move the data. Radovan has developed this and the .jar can be found in a recent thread.
If you have the later SPs you can't change the size of the 255 text field, but you have the option to create a new text field if you run out of the default ones provided.
This came with SP 18-20 (or 19-20).
Trusted Contributor.
## Re: Changing Description field text limit of 255 characters
Thanks Mark
Trusted Contributor.
Thanks Mark
|
# A balloon has a volume of 0.5 L at 20°C. What will the volume be if the balloon is heated to 150°C?
## Assume constant pressure and mass.
Dec 26, 2016
Assuming pressure is constant:
#### Explanation:
${V}_{1} / {T}_{1} = {V}_{2} / {T}_{2}$ ($T$ in Kelvin)
${20}^{o} C = 20 + 273 = 293 K$
${150}^{o} C = 150 + 273 = 423 K$
$\frac{0.5}{293} = {V}_{2} / 423 \to {V}_{2} = \frac{0.5 \times 423}{293} = 0.72 L$
Dec 26, 2016
The new volume will be $\text{0.7 L}$.
#### Explanation:
This is an example of Charles' law, sometimes called the temperature-volume law. It states that the volume of a gas is directly proportional to the Kelvin temperature, while pressure and amount are held constant.
The equation is ${V}_{1} / {T}_{1} = {V}_{2} / {T}_{2}$, where $V$ is volume and $T$ is temperature in Kelvins.
Known
${V}_{1} = \text{0.5 L}$
$T_1 = 20^\circ\text{C} + 273.15 = 293\ \text{K}$
$T_2 = 150^\circ\text{C} + 273.15 = 423\ \text{K}$
Unknown
${V}_{2}$
Solution
Rearrange the equation to isolate ${V}_{2}$. Substitute the known values into the equation and solve.
$V_2 = \frac{V_1 T_2}{T_1}$
$V_2 = \frac{(0.5\ \text{L})(423\ \text{K})}{293\ \text{K}} = 0.7\ \text{L}$, rounded to one significant figure
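A quick numeric check of that result (Charles' law at constant pressure and amount of gas), assuming plain Python:

```python
V1, T1, T2 = 0.5, 20 + 273.15, 150 + 273.15   # L, K, K
V2 = V1 * T2 / T1                             # Charles' law: V1/T1 = V2/T2
print(round(V2, 2))                           # 0.72 L, i.e. ~0.7 L to one significant figure
```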
|
1. Nov 21, 2005
### Seiya
An object of unknown mass is hung on the end of an unstretched spring and is released from rest. If the object falls 4.27 cm before first coming to rest, find the period of the motion.
All I can figure out is that the maximum amplitude is given and occurs when cos(ωt) is 1
from this I could know that v = -Aω if I had a value for time... right now I'm really confused; any advice would be really helpful, thank you
2. Nov 21, 2005
### mezarashi
The equation for period of oscillation is $$T = 2\pi \sqrt{\frac{m}{k}}$$.
The information given let's you find a relationship between the mass and the spring constant. By the principle of conservation of energy:
PE1 + KE1 = PE2 + KE2
KE1 = 0 as the mass was hung at rest. PE1 = PEspring1 + PEgrav1: you can consider the h to be zero here, then both are zero as well.
Would you like to try the energy analysis at point 2, when it is at the bottom and the string is maximally stretched at 4.27 cm?
3. Nov 21, 2005
### Seiya
Thanks i really gotta start doing this problems earlier in the morning so i can think broader ;p
energy at the bottom:
-mgx+1/2w^2mx^2
conservation of energy
0 = PEbottom
T = 2pi/w
done
thank you
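For completeness, the energy balance written out with numbers, assuming $$g \approx 9.8\ \mathrm{m/s^2}$$ and $$x = 4.27\ \mathrm{cm}$$:

$$mgx = \tfrac{1}{2}kx^2 \;\Rightarrow\; k = \frac{2mg}{x} \;\Rightarrow\; T = 2\pi\sqrt{\frac{m}{k}} = 2\pi\sqrt{\frac{x}{2g}} = 2\pi\sqrt{\frac{0.0427}{2(9.8)}} \approx 0.29\ \mathrm{s}$$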
|
# Capital F Symbol
The capital Latin letter is used in mathematics to represent the anti-derivative of a function .
| Symbol | Format  | Data |
| ------ | ------- | ---- |
| F      | Unicode | 70   |
| F      | TeX     | F    |
| F      | SVG     | —    |
## Usage
Fundamental Theorem of Calculus | Concept
The fundamental theorem of calculus relates integration to differentiation by defining the integral of a continuous function on a closed bounded interval.
Antiderivative | Notation
The antiderivative of a function is denoted using the capital Latin letter F.
Latin Alphabet | Concept
The Latin Alphabet is a collection of 26 symbols that form the basis of the English language. The alphabet's symbols are used throughout mathematics to represent variables, constants, and coefficients.
## Related Symbols
F | Symbol
The Latin letter f is used in mathematics to represent the name of a generic mathematical function.
|
## logarithmic differentiation calculator
Logarithmic differentiation is a method used to differentiate functions by employing the logarithmic derivative of a function; the derivative of the logarithm of an initial function y = f(x) is called its logarithmic derivative. For differentiating certain functions, logarithmic differentiation is a great shortcut: using the properties of logarithms will sometimes make the differentiation process much easier than applying the product rule or multiplying everything out. The basic properties of real logarithms carry over to logarithmic derivatives, for example $$(\ln uv)' = (\ln u)' + (\ln v)'$$.

The steps, illustrated with $$y = x^x$$:

1. Apply the logarithm to both sides of the equation: $$\ln y = \ln x^x$$.
2. Use the properties of logarithms on the right-hand side of the equation: $$\ln y = x\ln x$$.
3. Differentiate implicitly with respect to $$x$$ (or other independent variable), then solve the resulting equation for $$\frac{dy}{dx}$$.

Use our free logarithmic differentiation calculator to find the derivative of a given function based on the logarithms, with detailed step-by-step solutions to your logarithmic-equation and exponential-function problems. The Derivative Calculator supports first, second, ..., fourth derivatives, as well as implicit differentiation and finding the zeros/roots, using the common rules of differentiation (product rule, quotient rule, chain rule, etc.). It can handle polynomial, rational, irrational, exponential, logarithmic, trigonometric, inverse trigonometric, hyperbolic and inverse hyperbolic functions. Implicit multiplication (5x = 5*x) is supported, and you can also get a better visual understanding of the function by using our graphing tool. To calculate an antilogarithm $$\log^{-1}(y)$$ on the calculator, enter the base b (10 is the default value; enter e for the e constant), enter the logarithm value y, and press the = or calculate button.
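As a concrete illustration of those steps, here is a minimal Python sketch (sympy assumed) that checks the logarithmic-differentiation result for $$y = x^x$$ against a direct symbolic derivative:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x**x

# Steps 1-3: ln y = x*ln x, so y' = y * d/dx(x*ln x)
manual = sp.simplify(y * sp.diff(x * sp.log(x), x))
direct = sp.simplify(sp.diff(y, x))

print(manual)                        # x**x*(log(x) + 1)
print(sp.simplify(manual - direct))  # 0, so the two derivatives agree
```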
|
## Relativistic Lagrangians
I’m a part of a cool group of folks interested in infusing computation into undergraduate physics curriculum. One of the projects is called “relativistic dynamics” and it really got me thinking. I thought I’d get my thoughts down here.
### Lagrangian
I’ve used a Lagrangian approach a ton in my work with students and my posts here. It’s a great way to model the dynamics of a system because you just have to parametrize the kinetic and potential energy of the system and you’re off. No vectors, no free body diagrams, just fun 🙂
Here’s the idea in a nutshell:
Hold a ball in your hand. In 2 seconds it needs to be back in your hand. What should you do with the ball during those two seconds to minimize the time integral of the kinetic energy minus the potential energy during the journey?
It’s a fun exercise to do with students. You’re asking them to minimize this integral over two seconds:
$\int_0^2 \text{KE}(\vec{r}, t)-\text{PE}(\vec{r},t)\,dt$
When I do this their first guess is to leave the ball in your hand. They like to define the gravitational potential energy there to be zero, and then know the kinetic energy is zero if it doesn't move so they've found an easy way to get a total of zero for the integral. So I challenge them to find a path whose answer would be negative! It's a pretty fun exercise, especially if you actually calculate the integrals for their crazy ideas.
The point is that the winner is to throw the ball up so that it’s trajectory, responding simply to gravity, takes 2 seconds (ie throw it up 1.225 meters). The kinetic energy is positive during the whole journey (except for an instant at the top, of course) but the potential energy is positive during the whole journey too.
Calculus of variations teaches us that if you want to minimize an integral like this:
$\int_\text{start}^\text{finish}f(x, \dot{x}, t)\,dt$
(where $\dot{x}$ is shorthand for the x-velocity) you really just need to integrate this differential equation over the same time integral:
$\frac{\partial f}{\partial x}-\frac{d}{dt}\frac{\partial f}{\partial \dot{x}}=0$
What’s cool is that if the function is KE-PE the equation above becomes Newton’s second law! That’s why this works. You use scalar energy expressions and you get the force equation for every component of motion! Now there are some other cool things like not needing to worry about constraint forces but I won’t worry about that in this post.
### Relativity
Ok, so what happens when you consider relativistic speeds (ie close to the speed of light)? Well, the first thing I did (which, spoiler, didn’t work) was to wrack my brain for an expression for the kinetic energy and plug away. When teaching relativity you get to a point when you’re making the argument with your students that KE isn’t just $1/2 m v^2$ anymore but is really $mc^2(\gamma-1)$ where gamma is given by:
$\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$
If you take the limit of that expression for small v’s you get the usual expected result, and that’s certainly what we do right away with our students to make them feel better.
Ok, so I plugged it in and got a relativistic version of Newton’s 2nd law:
Note how the second term on the left side looks a little like “ma” while the right hand side is just the force from a conservative potential energy (U). The extra term on the left hand side is the weird stuff.
Without really thinking about whether that was the right equation, I modeled a constant force system and got this for the velocity
(I set the speed of light to 1). You can see that the speed is forced to obey the cosmic speed limit.
But here’s the problem. The equation above is wrong. That is not the correct relativistic Newton’s 2nd law equation.
So what happened? I plugged in the correct relativistic kinetic energy and the Lagrangian trick (minimizing KE-PE) gave a trajectory that doesn’t match what actually happens! So something’s wrong. Here’s a few possibilities (one is right, see if you can guess before reading the next paragraph):
• I’m using the wrong expression for kinetic energy
• The Lagrangian trick has some non-relativistic bias in it
• I’m minimizing the wrong function
It turns out it’s the last one. It took me a while of digging around, but this wikipedia article set me straight. The gist of what’s talked about there is this:
• Andy’s hard work above just doesn’t work
• But we know the right relativistic expression for momentum ($\gamma m v$ which, interestingly enough is a crazy thing that’s conserved in all frames of reference during collisions so we tell our students that since it’s conserved we should call it momentum).
• Let’s differentiate that momentum to get what the force should be and then search for a functional (that’s what f above is) that works out in the calculus of variations
Yeah, weird, I know. It's like "hey, I know what the answer in the back of the book is, so I'm going to futz with my early equations until they give me the right answer." So what is the right functional to use? This:
$-\frac{m c^2}{\gamma}$
Yep, it’s negative. Yep, it’s not an expression you’ve ever seen before if you’ve studied special relativity. But, guess what, it works! When you plug it in and do the calculus of variations trick you get the right dynamics. Surprise, surprise, given that it was built to do just that.
Here’s the same graph as above but not comparing that prediction with the right dynamics (in red):
It also asymptotes to the cosmic speed limit, just at a different rate.
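If you want to reproduce those curves, here's a minimal numerical sketch (numpy assumed; units with $c = m = F = 1$). The correct dynamics integrate to $\gamma v = t$, while the naive KE$-$PE functional gives $\frac{d}{dt}(\gamma^3 v) = F$, which I just invert numerically:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 400)

# Correct relativistic dynamics: d(gamma*v)/dt = 1  ->  gamma*v = t
v_correct = t / np.sqrt(1.0 + t**2)

# Naive KE - PE Lagrangian: d(gamma^3 * v)/dt = 1  ->  v/(1 - v^2)**1.5 = t,
# inverted here by bisection on v in (0, 1).
def v_from_p(p):
    lo, hi = 0.0, 1.0 - 1e-12
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid / (1.0 - mid * mid) ** 1.5 < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v_naive = np.array([v_from_p(p) for p in t])
print(v_correct[-1], v_naive[-1])   # both stay below c = 1, approaching it at different rates
```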
### So what’s being minimized?
That’s the question I was really wondering about. Luckily google came to the rescue with this great wikibook article that it found for me. It points out that the kinetic energy portion of the functional you use to make the relativistic dynamics work is really just proportional to the invariant space-time interval:
$ds=\sqrt{c^2dt^2-dx^2}$
This is an expression for the “distance” between two distinct events in space-time that is the same for all inertial observers. It’s really cool given all the weird time dilation and length contraction that can go on in the various inertial frames.
So basically the trajectories that actual things follow is designed to make the space-time “jumps” add up to the smallest number. That’s super cool
Your thoughts? Here are a few starters for you:
• I like how you talk about teaching the Lagrangian. What I would add is . . .
• I hate how you talk about teaching the Lagrangian. What I would rip, burn, and bury is . . .
• Why would you even think that the Lagrangian formalism, which clearly treats space and time differently, could so easily be co-opted into a relativistic treatment?
• Why is one of your equations a gif instead of WordPress’ built in $\LaTeX$?
• I can tell you used Mathematica’s TeXForm command. You are really lazy.
• You did a simple constant force. What would something connected to a spring do?
• Non-relativistic (red) and relativistic (blue) mass on a spring
• What do you mean when you say that “it’s conserved so let’s just call it momentum”?
• Why didn’t you put that last question mark inside the quotation marks?
• What planet were you on when you figured out the 1.225 meter throw?
|
# Does rolling friction increase speed of a wheel?
I believe all you have is a misunderstanding of some terminology.
Rolling friction refers to the collection of effects that cause a wheel to resist rolling forward, not all of them being actual friction. These effects are dependent on the specific nature of the system, and are generally not modeled in detail in classical mechanics. For instance, if the wheel is on an axle, friction in the bearings can cause resistance. If the wheel or the contact surface are deformable, that deformation zaps energy from the rolling of the wheel, causing it to slow.
The frictional force in your diagram is actually static friction, which can theoretically provide any necessary force to prevent slipping at the contact point.
Rolling wheels do not pivot about their centers. They pivot about the contact point with the floor, or in the case of deformation the centerline of the contact. Move the center of rotation down by $$R$$ and reevaluate your torques; I think you will find they operate opposite of rotation.
You are right in saying that the rolling resistance increases the angular speed. But I think your confusion is mainly because
$$\vec{v}$$ is not independent of $$\omega$$ for pure rolling, since $$v=R\cdot\omega$$. To me, it looks like the resistance $$\vec{F}_R$$ that slows down the system accelerates $$\omega$$. And because of the kinematic connection between $$\omega$$ and $$\vec{v}$$, the latter must also increase.
Whatever you have written is correct, but for clarification, we will consider two cases. One where a net external force is acting on the system, and one where it isn't.
With external force (Gravity in this case)
Consider a sphere on an inclined plane, with the upper half of the plane as smooth, and the lower half with sufficient friction for rolling.
Here, as usual the body will have an increase in its speed while sliding on the frictionless surface from point B to point A. But after point A, the speed will decrease, but angular velocity will increase. This seems to go exactly opposite to $$v=R\cdot\omega$$. Right? The reason for this is that pure rolling hasn't started yet - it is transitioning, so this formula is not yet applicable. Here, the resistance is doing negative work for translation, which is exactly equal to the positive work on rotation(you may use equations of mechanics to prove this).
In short, here the resistance is extracting some energy from translation and gives it to rotation, until they are in such a way that the condition for pure rolling is satisfied.
After point A(once pure rolling has started), whatever you are saying is valid, as obviously, the sphere will speed up while rolling down the plane.
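As a standard level-ground example of the same energy hand-off: take a uniform solid sphere launched at speed $$v_0$$ with no spin onto a floor with friction. Taking angular momentum about the contact line (friction acts there, so it exerts no torque about that line),

$$m v_0 R = m v_f R + \tfrac{2}{5} m R^2 \frac{v_f}{R} \;\Rightarrow\; v_f = \tfrac{5}{7} v_0, \qquad \omega_f = \frac{5 v_0}{7 R}$$

so the translation slows while the rotation speeds up, exactly until $$v = R\omega$$ is satisfied and pure rolling begins.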
Without an external force and considering deformations
We have our good ol' blue sphere rolling towards the right on the ground as shown here.
Here, a portion of the sphere is in contact with the ground, and the points on this portion will experience the force due to the ground. In the figure, the red arrows indicate the forces, with their length proportional to the magnitude. The forces at the front will be greater than the forces towards the back. This is also clear according to this which says:
The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion.
To analyze the forces, we can assume an equivalent force $$F$$, which will produce the same effect as all these forces combined, and will act as shown below:
Here, the torque can be given as $$F.d$$ which will act against the rotating motion(You can use simple geometry to understand why it will act against. If any issues with that, feel free to ask for clarification in comments). Further, this force can be divided into two components to analyse the resultant acceleration. The vertical component $$N$$ will not contribute towards linear acceleration as it will be balanced by the weight, whereas the horizontal component $$R$$ will act as shown, and contribute towards retardation, opposing the velocity.
In the end, it can be concluded that both angular velocity and linear velocity are decreasing due to the deformations.
We did not use the components to analyze the torque because both would produce torque in opposite directions, and there comparison might not be very obvious.
Apart from these, the following also contribute towards rolling resistance
• Material of the sphere (or any other rolling body) : Compare a steel ball and a soft rubber ball
• Type of ground (or whatever surface it is rolling on): Compare a skating rink and sand
• Mass of the sphere : More the mass, more is the resistance
• Diameter of sphere : More the diameter, less is the resistance
• Hysteresis effects
|
# A small pool filled only with water will require an addition
Senior Manager
Joined: 24 Aug 2009
Posts: 476
Schools: Harvard, Columbia, Stern, Booth, LSB,
A small pool filled only with water will require an addition [#permalink]
### Show Tags
29 Jul 2013, 00:01
4
Difficulty:
65% (hard)
Question Stats:
65% (01:30) correct 35% (01:03) wrong based on 327 sessions
A small pool filled only with water will require an additional 300 gallons of water in order to be filled to 80% of its capacity. If pumping in these additional 300 gallons of water will increase the amount of water in the pool by 30%, what is the total capacity of the pool in gallons?
A. 1000
B. 1250
C. 1300
D. 1600
E. 1625
_________________
If you like my Question/Explanation or the contribution, Kindly appreciate by pressing KUDOS.
Kudos always maximizes GMATCLUB worth
-Game Theory
If you have any question regarding my post, kindly pm me or else I won't be able to reply
Math Expert
Joined: 02 Sep 2009
Posts: 49271
Re: A small pool filled only with water will require an addition [#permalink]
### Show Tags
29 Jul 2013, 00:09
2
3
fameatop wrote:
A small pool filled only with water will require an additional 300 gallons of water in order to be filled to 80% of its capacity. If pumping in these additional 300 gallons of water will increase the amount of water in the pool by 30%, what is the total capacity of the pool in gallons?
A. 1000
B. 1250
C. 1300
D. 1600
E. 1625
Since pumping in additional 300 gallons of water will increase the amount of water in the pool by 30%, then initially the pool is filled with 1,000 gallons of water.
So, we have that 1,000 + 300 = 0.8*{total} --> {total} = 1,625.
Hope it's clear.
_________________
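A quick arithmetic check of the same reasoning, assuming plain Python:

```python
initial = 300 / 0.30            # 300 gallons is a 30% increase, so 1,000 gallons to start
total = (initial + 300) / 0.80  # 1,300 gallons is 80% of the pool's capacity
print(initial, total)           # 1000.0 1625.0
```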
##### General Discussion
Manager
Joined: 28 Feb 2012
Posts: 112
GPA: 3.9
WE: Marketing (Other)
Re: A small pool filled only with water will require an addition [#permalink]
### Show Tags
30 Jul 2013, 07:33
4
fameatop wrote:
A small pool filled only with water will require an additional 300 gallons of water in order to be filled to 80% of its capacity. If pumping in these additional 300 gallons of water will increase the amount of water in the pool by 30%, what is the total capacity of the pool in gallons?
A. 1000
B. 1250
C. 1300
D. 1600
E. 1625
Difficult one in terms of understanding the wording under exam conditions.
we know that 300 gallons will increase the amount of existing water by 30%; let's say there are x gallons of water and we add 300 => x+300=1.3x => x=1000
Now we know that 1000 gallons already exist in the tank and we add 300 => 1000+300=1300, which is 80% of the total capacity.
So the total capacity is (1300*100)/80=1625
_________________
If you found my post useful and/or interesting - you are welcome to give kudos!
Senior Manager
Joined: 03 Apr 2013
Posts: 283
Location: India
Concentration: Marketing, Finance
GMAT 1: 740 Q50 V41
GPA: 3
A small pool filled only with water will require an addition [#permalink]
### Show Tags
11 Jul 2016, 05:57
1
fameatop wrote:
A small pool filled only with water will require an additional 300 gallons of water in order to be filled to 80% of its capacity. If pumping in these additional 300 gallons of water will increase the amount of water in the pool by 30%, what is the total capacity of the pool in gallons?
A. 1000
B. 1250
C. 1300
D. 1600
E. 1625
we know that adding 300 gallons will increase the water currently in the pool by 30%..
Or...the water will become 130% of itself..and we also know that 30% of the water currently in the pool is 300 gallons..
By unitary method..
if 30% --- 300 gallons
then 130% --- $$300 * \frac{130}{100}$$
This is 80% of the total capacity..so the total capacity will be this value multiplied with
$$\frac{100}{80}$$
Thus the total capacity is
$$300 * \frac{130}{100}*\frac{100}{80}$$
Which gives us..? (E)
_________________
Spread some love..Like = +1 Kudos
Director
Joined: 20 Feb 2015
Posts: 733
Concentration: Strategy, General Management
Re: A small pool filled only with water will require an addition [#permalink]
### Show Tags
14 Jul 2016, 03:14
let the total capacity and initial water level be x and y respectively
now, as per the question
y+300=.8x -----1
y+300=1.3y ----2
.3y=300
y=1000
substitute in equation 1
1300=.8x
x=13000/8 = 1625
Current Student
Joined: 18 Oct 2014
Posts: 874
Location: United States
GMAT 1: 660 Q49 V31
GPA: 3.98
Re: A small pool filled only with water will require an addition [#permalink]
### Show Tags
14 Jul 2016, 04:21
fameatop wrote:
A small pool filled only with water will require an additional 300 gallons of water in order to be filled to 80% of its capacity. If pumping in these additional 300 gallons of water will increase the amount of water in the pool by 30%, what is the total capacity of the pool in gallons?
A. 1000
B. 1250
C. 1300
D. 1600
E. 1625
300 gallons of water increases capacity by 30% that means
30% is 300 gallons, so 100% would be = 300*100/30= 1000 gallons
Now 1000 +300 gallons is 80% capacity of tank.
so 100% capacity would be= 1300 *100/80= 1625
_________________
I welcome critical analysis of my post!! That will help me reach 700+
Non-Human User
Joined: 09 Sep 2013
Posts: 8105
Re: A small pool filled only with water will require an addition [#permalink]
### Show Tags
01 Nov 2017, 17:41
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
|
# Lower Bounds for Learning Distributions under Communication Constraints via Fisher Information
7 Feb 2019
We consider the problem of learning high-dimensional, nonparametric and structured (e.g. Gaussian) distributions in distributed networks, where each node in the network observes an independent sample from the underlying distribution and can use $k$ bits to communicate its sample to a central processor. We consider three different models for communication... (read more)
|
# Möbius function from random number sequence
Consider some arbitrary number sequence like the decimal expansion of $\pi$ = {3, 1, 4, 1, 5, 9, 2}. Prepend the sequence with the number $1$ so that you get {1, 3, 1, 4, 1, 5, 9, 2}.
Then plug it into the first column in a matrix that has the following recurrence definition:
\begin{align} T(n,1) &= (n-1)\text{th digit of }\pi, \\ T(n,2) &= T(n,1) - T(n-1,2), \\ \text{for } k>2, T(n,k) &= \sum\limits_{i=1}^{k-1} T(n-i,k-1)-\sum\limits_{i=1}^{k-1} T(n-i,k) \end{align}
That table looks like this:
$$\displaystyle \left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 3 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -2 & 3 & 0 & 0 & 0 & 0 & 0 \\ 4 & 6 & -2 & 3 & 0 & 0 & 0 & 0 \\ 1 & -5 & 3 & -2 & 3 & 0 & 0 & 0 \\ 5 & 10 & 0 & 3 & -2 & 3 & 0 & 0 \\ 9 & -1 & 2 & -3 & 3 & -2 & 3 & 0 \\ 2 & 3 & 7 & 7 & -3 & 3 & -2 & 3 \end{array} \right)$$
Then calculate the matrix inverse of the matrix above:
$$\displaystyle \left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & \frac{1}{3} & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & \frac{2}{9} & \frac{1}{3} & 0 & 0 & 0 & 0 & 0 \\ 0 & -\frac{14}{27} & \frac{2}{9} & \frac{1}{3} & 0 & 0 & 0 & 0 \\ -1 & -\frac{1}{81} & -\frac{5}{27} & \frac{2}{9} & \frac{1}{3} & 0 & 0 & 0 \\ 1 & -\frac{146}{243} & -\frac{28}{81} & -\frac{5}{27} & \frac{2}{9} & \frac{1}{3} & 0 & 0 \\ -1 & -\frac{688}{729} & -\frac{11}{243} & -\frac{1}{81} & -\frac{5}{27} & \frac{2}{9} & \frac{1}{3} & 0 \\ 0 & \frac{694}{2187} & -\frac{850}{729} & -\frac{92}{243} & -\frac{1}{81} & -\frac{5}{27} & \frac{2}{9} & \frac{1}{3} \end{array} \right)$$
Why then is there the Möbius function sequence in the first column?
I have checked this for random sequences in programs like this:
(*Mathematica*)
Clear[t, n, k, a, b];
nn = 8;
a = Flatten[{1, RealDigits[N[Pi, nn - 1]][[1]]}]
Length[a]
t[n_, 1] := t[n, 1] = a[[n]];
t[n_, k_] :=
t[n, k] =
If[And[n > 1, k > 1],
If[k == 2, t[n, k - 1] - t[n - 1, k],
Sum[t[n - i, k - 1], {i, 1, k - 1}] -
Sum[t[n - i, k], {i, 1, k - 1}]], 0];
A = Table[Table[t[n, k], {k, 1, nn}], {n, 1, nn}];
MatrixForm[A]
MatrixForm[Inverse[A]]
for up to 100 times 100 matrices and I always get the Möbius function in the first column.
I believe it has something to do with the fact that the divisibility matrix, equal to 1 if $k$ divides $n$ and 0 otherwise:
$$\displaystyle \left( \begin{array}{cccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \end{array} \right)$$
satisfies the recurrence that (edit 21.7.2014) Jeffrey Shallit formulated for me:
\begin{align} T(n,1) &= 1, \\ \text{for } k>1, T(n,k) &= \sum\limits_{i=1}^{k-1} T(n-i,k-1)-\sum\limits_{i=1}^{k-1} T(n-i,k) \end{align}
and also the recurrence:
\begin{align} T(n,1) &=1, \\ T(n,2) &= T(n,1)-T(n-1,2),\\ \text{for } k>2, T(n,k) &= \sum\limits_{i=1}^{k-1} T(n-i,k-1) -\sum\limits_{i=1}^{k-1} T(n-i,k) \end{align} which is the same as the first recurrence in the beginning of the question except that the first column here is equal to 1,1,1...
Edit 28.3.2014:
(*Mathematica Mats Granvik 28.3.2014*)
Clear[A, t, n, k, a, nn];
nn = 8;
a = Table[StringJoin[{"x", ToString[n]}], {n, 1, nn}]
Length[a]
t[n_, 1] := t[n, 1] = a[[n]];
t[n_, k_] :=
t[n, k] =
If[And[n > 1, k > 1],
If[k == 2, t[n, k - 1] - t[n - 1, k],
Sum[t[n - i, k - 1], {i, 1, k - 1}] -
Sum[t[n - i, k], {i, 1, k - 1}]], 0];
A = Table[Table[t[n, k], {k, 1, nn}], {n, 1, nn}];
MatrixForm[A]
MatrixForm[a[[1]]*Inverse[A]]
Outputs: 1,-1,-1,0,-1,1,-1,0,...
Edit 7.4.2014:
Input: {"x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8", "x9", "x10"}
Mathematica program:
(*Mathematica Mats Granvik 7.4.2014*)
Clear[A, t, n, k, a, nn];
nn = 11;
a = Table[StringJoin[{"x", ToString[n - 1]}], {n, 1, nn}]
Length[a]
t[n_, 1] := t[n, 1] = If[n <= 3, 1, a[[n]]];
t[n_, k_] :=
t[n, k] =
If[n >= k,
If[n <= 3, 1,
If[And[k > 1],
If[Or[k == 2, k == 3],
t[n, k - 1] - Sum[t[n - i, k], {i, 1, k - 1}],
If[k >= 4,
Sum[t[n - i, k - 1], {i, 1, k - 2}] -
Sum[t[n - i, k], {i, 1, k - 1}], 0], 0], 0, 0], 0], 0];
A = Table[Table[t[n, k], {k, 1, nn}], {n, 1, nn}];
MatrixForm[A]
Inverse[A][[2 ;; nn, 1]]
Output; {-1, 0, -1, -1, -2, -1, -2, -2, -2, -1}
Which is the Mertens function with the first term negated.
Edit 21.7.2014:
The matrix inverse of this triangle:
$$\left( \begin{array}{cccccc} x_1 & 0 & 0 & 0 & 0 & 0 \\ x_2 & x_2 & 0 & 0 & 0 & 0 \\ x_3 & x_3-x_2 & x_2 & 0 & 0 & 0 \\ x_4 & x_2-x_3+x_4 & x_3-x_2 & x_2 & 0 & 0 \\ x_5 & -x_2+x_3-x_4+x_5 & x_4-x_3 & x_3-x_2 & x_2 & 0 \\ x_6 & x_2-x_3+x_4-x_5+x_6 & x_2-x_4+x_5 & x_4-x_3 & x_3-x_2 & x_2 \end{array} \right)$$
gives the Möbius function divided by $x_1$.
Edit 24.7.2014:
(*Program start*)
(*Mathematica Mats Granvik 24.7.2014*)
Clear[A, t, n, k, a, nn];
nn = 32;
Print["Random numbers as input:"]
a = RandomReal[7, nn]
Length[a];
t[n_, 1] := t[n, 1] = a[[n]];
t[n_, k_] :=
t[n, k] =
If[And[n > 1, k > 1],
If[k == 2, t[n, k - 1] - t[n - 1, k],
Sum[t[n - i, k - 1], {i, 1, k - 1}] -
Sum[t[n - i, k], {i, 1, k - 1}]], 0];
A = Table[Table[t[n, k], {k, 1, nn}], {n, 1, nn}];
MatrixForm[A];
B = a[[1]]*Inverse[A];
Print["Möbius function as output:"]
MatrixForm[B];
Chop[B[[All, 1]]]
(*program end*)
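For readers without Mathematica, here is a rough Python translation of the experiment (my own sketch, not from the original post). It rebuilds the table with exact fractions from the prepended-pi-digit example above, so the expected output can be read off the matrices displayed earlier:

```python
# Build the table, invert its first column by forward substitution (the matrix
# is lower triangular), and compare with the Mobius function.
from fractions import Fraction

def mobius(n):
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def build_table(a):
    nn = len(a)
    T = [[Fraction(0)] * nn for _ in range(nn)]
    for m in range(nn):
        T[m][0] = Fraction(a[m])
    for m in range(1, nn):
        T[m][1] = T[m][0] - T[m - 1][1]                    # the k = 2 rule
        for j in range(2, m + 1):                          # the k > 2 rule
            T[m][j] = (sum(T[m - i][j - 1] for i in range(1, j + 1) if m - i >= 0)
                       - sum(T[m - i][j] for i in range(1, j + 1) if m - i >= 0))
    return T

a = [1, 3, 1, 4, 1, 5, 9, 2]          # 1 prepended to the digits of pi
T = build_table(a)

x = [Fraction(0)] * len(a)            # first column of the inverse of T
x[0] = 1 / T[0][0]
for m in range(1, len(a)):
    x[m] = -sum(T[m][j] * x[j] for j in range(m)) / T[m][m]

print([int(a[0] * v) for v in x])                   # [1, -1, -1, 0, -1, 1, -1, 0]
print([mobius(n) for n in range(1, len(a) + 1)])    # the Mobius values mu(1..8)
```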
-
Your observation is correct. – Hagen von Eitzen Dec 31 '12 at 13:59
Have you tried it with a non-"random" sequence? Such as all $1$? – Thomas Andrews Dec 31 '12 at 13:59
@ThomasAndrews Yes I have tried it with the all 1 sequence. That is the same as the last recurrence at the end of my writing. – Mats Granvik Dec 31 '12 at 14:02
Yeah, my point is that the "random" part of the question is misleading - it has nothing to do with whether the initial sequence is "random." – Thomas Andrews Dec 31 '12 at 14:21
My answer is wrong - you probably want to unselect it if you want to get a correct answer. I'm thinking about how I can fix it, but it is being stubborn. I suspect I'm on the right track, just stuck on a few points. – Thomas Andrews Dec 31 '12 at 17:18
(This argument isn't working out - the "shift" issue is the problem - it doesn't quite work.)
(Actually, I'm pretty sure this answer is wrong, but something like it is almost certainly the correct approach. The problem is that the value for $(0,1,0,0,...)$ is not as simple as the shift operator I gave.
Take the vector equal to the difference of your digits, so $x=(x_i)=(1,3-1,1-4,\dots)$.
Then your matrix is a linear map of $x=(x_i)$. That is, if $A(x)$ is the matrix gotten from $x=(x_1,...,x_m)$ by starting with $T(n,1)=\sum_{i=1}^n x_i$, and computing as above, then $A(x+x')=A(x)+A(x')$ and $A(\alpha x)=\alpha A(x)$ for any constant $\alpha$.
Now, when $x=(1,0,0,...)$ you get the matrix at the bottom of your question. Let's call that matrix $B = A(1,0,...0)$.
If $x=(0,1,0,...)$ then $A(x)$ is just that same matrix with the bottom row dropped, and a top row of $0$s added. That is $MB$, where $M$ is the matrix with $1$s just below the diagonal.
(This argument is still not quite working out, but it goes something like the following.) This all means that if $x=(x_i)$ then $A(x)=\left(\sum_{i=1}^{m} x_i M^{i-1} \right)B$.
The matrix $\sum_{i=1}^m x_iM^{i-1}$ is a lower diagonal matrix, with $x_1$ in all the diagonal elements. If $x_1\neq 0$, this is invertible, and its inverse is a lower-diagonal matrix whose diagonal entries are all $x_1^{-1}$.
On the other hand, $B^{-1}$ can easily be seen to have $\mu(k)$ in the left column. So $B^{-1}\left(\sum_{i=1}^m x_iM^{i-1}\right)^{-1}$ has $x_1^{-1}\mu(k)$ along the left column.
But that is precisely the inverse of your matrix, so when $x_1=1$, we are done.
-
|
### $L^q$-functional inequalities and weighted porous media equations
Jean Dolbeault, Ivan Gentil, Arnaud Guillin & Feng-Yu Wang
Using measure-capacity inequalities we study new functional inequalities, namely $L^q$-Poincaré inequalities and $L^q$-logarithmic Sobolev inequalities for any $q \in (0, 1]$. As a consequence, we establish the asymptotic behavior of the solutions to a so-called weighted porous media equation in terms of $L^2$-norms and entropies.
|
The Problem
This strikes me as a very natural problem which should have been asked (and solved?) already.
For each positive integer k, find a nice expression for the following generating function in the variable x: $$\sum_{\lambda/\mu} x^{|\lambda|}.$$
Here $\lambda$ ranges over all partitions and $\mu$ over those partitions contained in $\lambda$ for which the skew Young diagram $\lambda/\mu$ has $k$ nodes, i.e. each partition of $n$ is weighted by the number of partitions of $n-k$ it contains.
Examples: for $k=1$ the function is $\frac{x}{(1-x)}P(x)$, where $P(x)=\prod_{i=1}^\infty(1-x^i)^{-1}$ is the partition generating function. So in this case I'm just enumerating partitions by the number of removable nodes. The formula is equivalent to the well-known fact that every partition has one more addable than removable node.
I've computed the cases $k=2,3,4$ also ($k=4$ was painful - I broke it into 14 possible types of skew diagrams). For $k=2$ the generating function is $\frac{ x^2(2-x)}{(1-x)(1-x^2)}P(x).$
It seems plausible that there is a polynomial $F_k(x)$ of degree at most $k(k-1)/2$ (with leading coefficient $\pm 1$) so that the power series is $$\frac{ x^k F_k(x)}{(1-x)(1-x^2)\cdots(1-x^k)}P(x).$$
If $F_k(x)$ exists, it's easy to see that it must have lowest terms $p_k+p_{k+1}x+2(p_{k+2}-1)x^2+\cdots$, where $p_n$ is the number of partitions of $n$, after which the terms depend on congruences for $k$. This suggests that it's complicated. Perhaps there is no nice expression for $F_k(x)$. Even knowing whether $F_k(x)$ exists is of interest to me. Maybe there is a neater way of expressing the entire generating function?
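A brute-force check of the $k=1$ and $k=2$ statements is easy in Python (this snippet is mine, not part of the question; it simply enumerates partitions and containments directly):

```python
# Count pairs (lambda, mu) directly and compare the k = 1 values with x/(1-x) * P(x),
# and the k = 2 values with x^2 (2-x) / ((1-x)(1-x^2)) * P(x).
def partitions(n, maxpart=None):
    """Yield the partitions of n as non-increasing tuples."""
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def contained(mu, lam):
    """Young-diagram containment of mu in lambda."""
    return len(mu) <= len(lam) and all(m <= l for m, l in zip(mu, lam))

def coeff(n, k):
    """Number of pairs (lambda of n, mu of n-k) with mu contained in lambda."""
    if n < k:
        return 0
    mus = list(partitions(n - k))
    return sum(contained(mu, lam) for lam in partitions(n) for mu in mus)

p = [len(list(partitions(n))) for n in range(12)]   # partition numbers p(0..11)

print([coeff(n, 1) for n in range(1, 12)])          # brute force, k = 1
print([sum(p[:n]) for n in range(1, 12)])           # coefficients of x/(1-x) * P(x)

# The auxiliary factor 1/((1-x)(1-x^2)) has coefficients floor(m/2) + 1.
q = lambda m: m // 2 + 1 if m >= 0 else 0
print([coeff(n, 2) for n in range(2, 12)])          # brute force, k = 2
print([sum(p[j] * (2 * q(n - 2 - j) - q(n - 3 - j)) for j in range(n + 1)) for n in range(2, 12)])
```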
Motivation
The coefficient of $x^n$ in the generating function is the dimension of the centre of a certain subalgebra of the complex group algebra of the symmetric group of degree n. This is ${\mathbb C}S_n^{S_{n-k}}$, the centralizer of the subgroup $S_{n-k}$ in ${\mathbb C}S_n$. It is easy to see that this has as ${\mathbb C}$-basis the $S_{n-k}$-orbit sums in $S_n$. The centre is indexed by pairs $(\chi,\phi)$, where $\chi$ is an irreducible character of $S_n$, and $\phi$ is an irreducible character of $S_{n-k}$ occuring in the restriction of $\chi$. The formulation above is then an easy consequence of the parametrization of irreducible characters of $S_n$, and the classic branching rule.
Literature
Yoshiaki Ueno, On the Generating Functions of the Young Lattice, J. Algebra 116 (1988) 261--270.
This gives a generating polynomial for the partitions contained in a given partition \lambda in terms of a determinant involving Gaussian coefficients. It's a beautiful result, but it did not give me any insight into my problem.
-
This problem is a special case of Exercise 3.150(a) of Enumerative Combinatorics, vol. 1 (second ed.). The polynomial $A_{\lbrace k\rbrace}(x)$ of this exercise is the $F_k(x)$ of the present question. The solution to this exercise gives a recipe for computing $F_k(x)$ which can probably be used to compute quite a few values and to prove some properties such as its degree. In the solution to part (b) there is given the formula $F_3(x)=3+2x-x^2-x^3$.
Thanks for the reference Richard. Also thanks for making the second edition available on your homepage. I'll order it for our library in any event. The method you suggest in your solutions is essentially the one I was using. For some reason I failed to spot that $$\frac{1}{(1-x^2)^2}=\frac{(1+x^2)}{(1-x^2)(1-x^4)}$$. So one of my summands for $k=4$ stumped me! I'm sure the degree bound is correct. Any chance that $F_k(x)$ is computable? – John Murray Jan 21 '12 at 21:28
|
Certified CYK parsing of context-free languages. (English) Zbl 1371.68137
Summary: We report a work on certified parsing for context-free grammars. In our development we implement the Cocke-Younger-Kasami parsing algorithm and prove it correct using the Agda dependently typed programming language.
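The paper itself works in Agda, but the underlying CYK idea is easy to sketch in Python for a grammar already in Chomsky normal form (the toy grammar below is my own illustration, not taken from the certified development):

```python
# A tiny CYK recognizer for a grammar in Chomsky normal form.
def cyk_recognize(word, start, unary, binary):
    """unary: set of (A, terminal); binary: set of (A, B, C) meaning A -> B C."""
    n = len(word)
    if n == 0:
        return False                      # the empty word would need S -> eps
    # table[i][l-1] = nonterminals deriving the substring of length l at position i
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = {A for (A, t) in unary if t == ch}
    for span in range(2, n + 1):          # substring length
        for i in range(n - span + 1):     # start position
            for split in range(1, span):  # length of the left part
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for (A, B, C) in binary:
                    if B in left and C in right:
                        table[i][span - 1].add(A)
    return start in table[0][n - 1]

# Toy grammar for { a^n b^n : n >= 1 }: S -> A X | A B, X -> S B, A -> a, B -> b
unary = {('A', 'a'), ('B', 'b')}
binary = {('S', 'A', 'X'), ('S', 'A', 'B'), ('X', 'S', 'B')}
print(cyk_recognize("aabb", 'S', unary, binary))   # True
print(cyk_recognize("aab", 'S', unary, binary))    # False
```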
MSC:
68Q42 Grammars and rewriting systems; 68N15 Theory of programming languages; 68Q45 Formal languages and automata
Agda; TRX
|
Calculus Prerequisite for Math Stat I and II
Students in this mathematical statistics course are expected to be able to use many calculus techniques skillfully. Following is a list of most of the calculus topics used during this course.
1. Find the derivative of any function covered in calculus classes, including exponential, logarithmic, and trigonometric functions.
2. Use basic techniques of integration including substitution and integration by parts. Also be able to use integral tables.
3. Be able to use the Fundamental Theorem of Calculus to do problems such as
4. Find improper integrals (and be able to find when they do not converge.)
5. Use L'Hospital's rule.
6. Use Taylor approximations to functions and recognize sums as Taylor approximations for many basic functions such as .
7. Use standard techniques for finding the sum of an infinite series, including recognizing geometric series.
8. Find local and absolute extrema and points of inflection of functions.
9. Use mathematical software to graph functions and find local and absolute extrema of functions that are analytically intractable. (It is possible to learn to do this quickly, on free software. More information is available.)
10. Carry out all the calculus techniques on functions with parameters. (For example, find the local extrema and points of inflection of the probability density function of a normal distribution with parameters μ and σ; a short SymPy sketch illustrating this appears after this list.)
11. Compute double integrals.
12. Find local and absolute extrema of functions of two variables.
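For items 9 and 10, a short SymPy session (my own sketch; the normal density is taken from the example in item 10) looks like this:

```python
# Differentiate the normal probability density function with parameters mu and
# sigma, then solve for the critical point and the points of inflection.
import sympy as sp

x = sp.symbols('x', real=True)
mu = sp.symbols('mu', real=True)
sigma = sp.symbols('sigma', positive=True)

f = sp.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sp.sqrt(2 * sp.pi))

critical_points = sp.solve(sp.diff(f, x), x)        # [mu]
inflection_points = sp.solve(sp.diff(f, x, 2), x)   # [mu - sigma, mu + sigma]
print(critical_points, inflection_points)
```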
|
# Integral Limits: Infinity.
1. Dec 4, 2007
### PFStudent
1. The problem statement, all variables and given/known data
Consider the function,
$${{\vec{F}}(r)} = {\int_{-\infty}^{\infty}}{{\vec{f}}(r)}{\times}{d{\vec{r}}}$$
Is the following true?,
$${\int_{-\infty}^{\infty}}{{\vec{f}}(r)}{\times}{d{\vec{r}}} = {2}{\int_{0}^{\infty}}{{\vec{f}}(r)}{\times}{d{\vec{r}}}$$
2. Relevant equations
Knowledge of infinite limits.
3. The attempt at a solution
I believe it is true, since zero is considered the mid-point between an infinite sum of numbers in one direction and an infinite sum of numbers in the opposite direction.
So is that right?
Any help is appreciated.
Thanks,
-PFStudent
2. Dec 4, 2007
### rs1n
Consider the following piece-wise function:
$$f(x) = \begin{cases} x^2 & x \le 0,\\ \frac{1}{x^2} & x>0 \end{cases}$$
3. Dec 4, 2007
### PFStudent
Hey,
I noticed the function you mentioned approaches infinity for large negative values and approaches zero for large positive values. However, I am not sure how that relates to the question I proposed. What am I missing?
Thanks,
-PFStudent
4. Dec 4, 2007
### rs1n
It could be that I'm missing something (yes, I notice that your integrand is a cross product). However, your function needs to have some symmetry in order for your claim to be true. In the example I gave, 2 times the integral of just the "right" side is NOT equal to the overall integral (from negative infinity to positive infinity). You can see this geometrically (just consider the integral as the area beneath the curve). The "right" side has finite area whereas the "left" side has infinite area. This can be proven rigorously using limits (see improper integrals in any calculus book).
For example, if you graph $$e^{-x^2}$$, you'll notice it's symmetric about the y-axis (i.e. it's an even function). In this case, you can say
$$\int_{-\infty}^\infty e^{-x^2}\ dx = 2\int_0^\infty e^{-x^2}\ dx$$
Not all functions have such symmetry (e.g. the one I gave).
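For what it's worth, the symmetry point is easy to check numerically (my own snippet, using SciPy's quad; it is not part of the original thread):

```python
# The "factor of two" trick works for the even integrand exp(-x^2).
import numpy as np
from scipy.integrate import quad

full, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
half, _ = quad(lambda x: np.exp(-x**2), 0, np.inf)
print(full, 2 * half)   # both are approximately sqrt(pi) = 1.7724...
# For the piecewise counterexample above, the left half (x^2 on (-inf, 0])
# diverges, so no such identity can hold.
```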
5. Dec 5, 2007
### HallsofIvy
Staff Emeritus
Here, you show $\vec{r}$ as a vector. How does a vector go from $-\infty$ to $\infty$?
As rs1n said, you need symmetry. Yes, you have "an infinite sum of numbers in one direction and an infinite sum of numbers in the opposite direction" - but not necessarily the same sum! The two sums are not necessarily the same.
6. Nov 18, 2011
### athenee
how can I solve this integral ?
$$\int_{-\infty}^{+\infty} \frac{1}{16\pi^2 D^2 (t-t'')(t''-t')} \, \exp\!\left(-\frac{(x-x'')^2}{4D(t-t'')}-\frac{(x''-x')^2}{4D(t''-t')}\right) dx''$$
|
# Descriptive analysis in R
This post shows an easy descriptive statistical analysis of the Mid-Atlantic Wage Data, with some boxplots and a check for data normality.
The dataset can be found here: https://github.com/selva86/datasets/blob/master/Wage.csv
The fields in the data are the following:
• year: Year when the data was collected.
• maritl: marital status: 1. Never Married, 2. Married, 3. Widowed, 4. Divorced, and 5. Separated.
• age: worker’s age.
• race: 1. White, 2. Black, 3. Asian, and 4. Other.
• education: Education level: 1. < HS Grad, 2. HS Grad, 3. Some College, 4. College Grad, 5. Advanced Degree.
• region: Always Mid-Atlantic.
• jobclass: Job type 1. Industrial, 2. Information.
• health: Health status: 1. <= Good, 2. >= Very Good.
• health_ins: Health insurance 1. Yes, 2. No.
• logwage: logarithm of wage.
• wage: wage, in $1000s.
## 1 Data analysis
Reading the file:
data<-read.csv2("./Wage.csv", header = TRUE, sep = ",", stringsAsFactors = TRUE)
Summary of the variable types and levels:
str(data)
## 'data.frame': 3000 obs. of 11 variables:
## $ year : int 2006 2004 2003 2003 2005 2008 2009 2008 2006 2004 ...
## $age : int 18 24 45 43 50 54 44 30 41 52 ... ##$ maritl : Factor w/ 5 levels "1. Never Married",..: 1 1 2 2 4 2 2 1 1 2 ...
## $race : Factor w/ 4 levels "1. White","2. Black",..: 1 1 1 3 1 1 4 3 2 1 ... ##$ education : Factor w/ 5 levels "1. < HS Grad",..: 1 4 3 4 2 4 3 3 3 2 ...
## $region : Factor w/ 1 level "2. Middle Atlantic": 1 1 1 1 1 1 1 1 1 1 ... ##$ jobclass : Factor w/ 2 levels "1. Industrial",..: 1 2 1 2 2 2 1 2 2 2 ...
## $health : Factor w/ 2 levels "1. <=Good","2. >=Very Good": 1 2 1 2 1 2 2 1 2 2 ... ##$ health_ins: Factor w/ 2 levels "1. Yes","2. No": 2 2 1 1 1 1 1 1 1 1 ...
## $logwage : Factor w/ 508 levels "3","3.04139268515822",..: 126 105 354 426 126 342 452 287 315 347 ... ##$ wage : Factor w/ 508 levels "100.013486924706",..: 397 376 117 189 397 105 215 50 78 110 ...
Although logwage and wage have been imported as factors, they seem to be numerical, so we transform the variables and check the dataset again with str.
data$logwage<-as.numeric(as.character(data$logwage))
data$wage<-as.numeric(as.character(data$wage))
str(data)
## 'data.frame': 3000 obs. of 11 variables:
## $year : int 2006 2004 2003 2003 2005 2008 2009 2008 2006 2004 ... ##$ age : int 18 24 45 43 50 54 44 30 41 52 ...
## $maritl : Factor w/ 5 levels "1. Never Married",..: 1 1 2 2 4 2 2 1 1 2 ... ##$ race : Factor w/ 4 levels "1. White","2. Black",..: 1 1 1 3 1 1 4 3 2 1 ...
## $education : Factor w/ 5 levels "1. < HS Grad",..: 1 4 3 4 2 4 3 3 3 2 ... ##$ region : Factor w/ 1 level "2. Middle Atlantic": 1 1 1 1 1 1 1 1 1 1 ...
## $jobclass : Factor w/ 2 levels "1. Industrial",..: 1 2 1 2 2 2 1 2 2 2 ... ##$ health : Factor w/ 2 levels "1. <=Good","2. >=Very Good": 1 2 1 2 1 2 2 1 2 2 ...
## $health_ins: Factor w/ 2 levels "1. Yes","2. No": 2 2 1 1 1 1 1 1 1 1 ... ##$ logwage : num 4.32 4.26 4.88 5.04 4.32 ...
## $ wage : num 75 70.5 131 154.7 75 ...
## 2 Descriptive Analysis and Visualization
### 2.1 Descriptive Analysis
The first thing to do is to show a statistical summary of the data.
summary(data)
## year age maritl race
## Min. :2003 Min. :18.00 1. Never Married: 648 1. White:2480
## 1st Qu.:2004 1st Qu.:33.75 2. Married :2074 2. Black: 293
## Median :2006 Median :42.00 3. Widowed : 19 3. Asian: 190
## Mean :2006 Mean :42.41 4. Divorced : 204 4. Other: 37
## 3rd Qu.:2008 3rd Qu.:51.00 5. Separated : 55
## Max. :2009 Max. :80.00
## education region jobclass
## 1. < HS Grad :268 2. Middle Atlantic:3000 1. Industrial :1544
## 2. HS Grad :971 2. Information:1456
## 3. Some College :650
## 4. College Grad :685
## 5. Advanced Degree:426
##
## health health_ins logwage wage
## 1. <=Good : 858 1. Yes:2083 Min. :3.000 Min. : 20.09
## 2. >=Very Good:2142 2. No : 917 1st Qu.:4.447 1st Qu.: 85.38
## Median :4.653 Median :104.92
## Mean :4.654 Mean :111.70
## 3rd Qu.:4.857 3rd Qu.:128.68
## Max. :5.763 Max. :318.34
We start with the numerical variables. The mean and the median of the variable year are the same, so it has a symmetric distribution; the same happens with logwage, where the two values are not identical but very close. Age and wage have more skewed distributions, since there is more difference between their mean and median.
We can also see the levels of the factor variables and the number of samples per level. As we can see, the variables health and health_ins are less balanced than jobclass, and region has only one value.
### 2.2 Visualization
Let's start by showing some boxplots to check the distribution of the variables and the outliers.
#### 2.2.1 Race vs age
As we can see in the first boxplot, the distribution of the ages per race is similar; we can find only two outliers.
plot(x=data$race, y=data$age, xlab = "race", ylab = "age")
#### 2.2.2 Jobclass vs age
Also a similar distribution of the age per jobclass, with some outliers in both cases. The mean age of the people with an industrial jobclass seems slightly lower than that of the people with an information jobclass.
plot(data$jobclass, data$age, xlab = "jobclass", ylab = "age")
#### 2.2.3 Health status vs age
The ages of the people with a very good health status seem lower than those of the people with a good or worse health status.
plot(data$health, data$age, xlab = "health", ylab = "age")
#### 2.2.4 Health insurance vs age
Also, the mean age of the people without health insurance seems lower than the mean age of the people with health insurance.
plot(data$health_ins, data$age, xlab = "health_ins", ylab = "age")
### 2.3 Normality test
Let's check visually whether the wage variable has a normal distribution. First, we create a density plot; as we can see, the distribution does not seem normal.
library(ggplot2)
ggplot(data, aes(x=wage)) + geom_density()
Let's perform another test with a qqplot, which compares the data points with a normal distribution. As we can see, the data points move away from the normal distribution line, so we can say that the wage variable does not have a normal distribution.
library(car)
qqPlot(data$wage)
## [1] 207 1230
|
# Left inverses for matrix
Consider $A=\pmatrix{1&2\\1&-2\\0&1}$.
I'm trying to see how I can find 2 different left inverses for this $3\times2$ matrix. If it has a left inverse, then it is injective.
Also, why does it not have a right inverse?
-
You can find a left inverse of the form $\pmatrix{a&0&b\\c&0&d}$ simply by ignoring the middle row of $A$ and inverting the rest. Repeat that to find a left inverse of the form $\pmatrix{0&p&q\\0&r&s}$.
As for a right inverse, assume that $AB=I_{3\times 3}$. Then, in particular $AB\mathbf e_1=\mathbf e_1$, $AB\mathbf e_2=\mathbf e_2$, and $AB\mathbf e_3=\mathbf e_3$. But $\{B\mathbf e_1, B\mathbf e_2, B\mathbf e_3\}$ are three vectors in $\mathbb R^2$ and therefore they must be linearly dependent. Multiplying with $A$ cannot then stop them being linearly dependent. Therefore, by contradiction, $AB\ne I_{3\times 3}$.
-
How did you come up with the conclusion of finding a left inverse by "ignoring the middle row of A". Can you demonstrate this further, or provide an example? Thanks – Buddy Holly Nov 5 '11 at 20:13
It comes naturally with a bit of experience with how matrix multiplication works. – Henning Makholm Nov 5 '11 at 20:26
@BuddyHolly A matrix is left invertible if and only if the number of columns= the number of linearly independent rows... You can try to prove this as a separate exercise :) Henning simply observed that by ignoring those rows, you get a square matrix with l.i. rows, thus invertible.... – N. S. Nov 6 '11 at 1:39
You need to solve:
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 \\ 1 & -2 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
All matrices
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix}$$
satisfying the above equation will be left inverses. You can use the same argument to check whether any right inverses exist. However, this will not be the case, since your matrix is not square (if both a left and a right inverse exist they agree, and the matrix must be square).
-
To solve this, we just row-reduce (RREF) it, right? That will give us a system of equations for a11, a12, a13, etc., from which I can recover the two answers Henning gave above? – Buddy Holly Nov 6 '11 at 4:42
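For completeness, here is a small NumPy check of Henning's construction (my own sketch, not part of the answers above):

```python
# Invert the 2x2 submatrix obtained by deleting one row of A, and pad the
# result with a zero column in that row's position.
import numpy as np

A = np.array([[1, 2], [1, -2], [0, 1]])

def left_inverse_from_rows(A, rows):
    """Left inverse supported on the given two rows of A."""
    inv = np.linalg.inv(A[rows, :])
    L = np.zeros((2, 3))
    L[:, rows] = inv
    return L

L1 = left_inverse_from_rows(A, [0, 2])   # ignore the middle row
L2 = left_inverse_from_rows(A, [1, 2])   # ignore the first row
print(L1 @ A)   # 2x2 identity
print(L2 @ A)   # 2x2 identity
```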
|
#### Volume 25, issue 5 (2021)
The space of almost calibrated $(1,1)$–forms on a compact Kähler manifold
### Jianchun Chu, Tristan C Collins and Man-Chun Lee
Geometry & Topology 25 (2021) 2573–2619
##### Abstract
The space $\mathscr{H}$ of “almost calibrated” $(1,1)$–forms on a compact Kähler manifold plays an important role in the study of the deformed Hermitian Yang–Mills equation of mirror symmetry, as emphasized by recent work of Collins and Yau (2018), and is related by mirror symmetry to the space of positive Lagrangians studied by Solomon (2013, 2014). This paper initiates the study of the geometry of $\mathscr{H}$. We show that $\mathscr{H}$ is an infinite-dimensional Riemannian manifold with nonpositive sectional curvature. In the hypercritical phase case we show that $\mathscr{H}$ has a well-defined metric structure, and that its completion is a $CAT(0)$ geodesic metric space, and hence has an intrinsically defined ideal boundary. Finally, we show that in the hypercritical phase case $\mathscr{H}$ admits $C^{1,1}$ geodesics, improving a result of Collins and Yau (2018). Using results of Darvas and Lempert (2012) we show that this result is sharp.
##### Keywords
mirror symmetry, deformed Hermitian Yang-Mills, special Lagrangian
##### Mathematical Subject Classification 2010
Primary: 32Q15
Secondary: 53C22, 53D05, 53D37
##### Publication
Received: 24 February 2020
Accepted: 5 July 2020
Published: 3 September 2021
Proposed: Gang Tian
Seconded: Tobias H Colding, Bruce Kleiner
##### Authors
Jianchun Chu, Department of Mathematics, Northwestern University, Evanston, IL, United States
Tristan C Collins, Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, United States
Man-Chun Lee, Department of Mathematics, Northwestern University, Evanston, IL, United States
|
# Subspaces
• October 13th 2008, 11:17 AM
I Congruent
Let $V = R^3$, and consider the following subsets of V.
$U_1 = \{ t \begin{pmatrix}
1\\
0\\
0\end{pmatrix} : t \in R \}$
$U_2 = \{ t \begin{pmatrix}
0\\
1\\
0\end{pmatrix} : t \in R \}$
$U_3 = \{t \begin{pmatrix}
1\\
1\\
1\end{pmatrix} : t \in R \}$
$U_4 = \{ \begin{pmatrix}
r\\
s\\
0\end{pmatrix} : r,s \in R \}$
$U_5 = \{ \begin{pmatrix}
r\\
s\\
1\end{pmatrix} : r,s \in R \}$
$U_6 = \{ \begin{pmatrix}
r\\
s\\
t\end{pmatrix} : r,s,t \in R \text{ and } r+s+t=1 \}$
$U_7 = U_1 \cap U_2$
$U_8 = U_1 \cup U_2$
a) For each of the 8 sets, say whether it is a subspace, and briefly explain your answer.
b) For each of the 8 sets, classify it as one of
i. A line passing through the origin.
ii. A line not passing through the origin.
iii. A plane passing through the origin.
iv. A plane not passing through the origin.
v. none of the above
c) What do you conclude about the subspaces of $R^3$?
• October 13th 2008, 11:20 AM
I Congruent
I am not too confident with my answers and I'm completely stuck on part b).
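One way to build intuition for part (a) is to spot-check the defining properties numerically (a rough sketch I added; it only tests closure on sample vectors and of course does not replace a proof):

```python
# A subspace must contain 0 and be closed under addition and scalar
# multiplication.  Here U8 = U1 union U2 fails closure under addition.
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

u, v = 2 * e1, 3 * e2                          # u lies in U1, v lies in U2
s = u + v                                      # (2, 3, 0), a sum of elements of U8
on_line_U1 = np.allclose(np.cross(s, e1), 0)   # is s a multiple of e1?
on_line_U2 = np.allclose(np.cross(s, e2), 0)   # is s a multiple of e2?
print(on_line_U1 or on_line_U2)                # False: s is not in U8, so U8 is not closed
```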
|
There is already a fantastic set of building blocks available for supporting an open source risk modelling universe, including but not limited to: The Python language, tools and. If I want to calculate CVaR using Monte Carlo prices from the 3 investments, here is what I'm thinking: 1. We will first get input values from user using input() and convert it to float using float(). Calculating expected value of unknown random variable. In Python for Finance, Part I, we focused on using Python and Pandas to. Chapter 2 Value at Risk and other risk measures. - Calculate VaR deterministically - Calculate VaR using Monte Carlo method In this video, explore Value at Risk, calculate parametric VaR with simple formula and via Monte Carlo simulation. The correct estimation of VaR is essential for any financial insti-tution, in order to arrive at the accurate capital. Lets now code TF-IDF in Python from scratch. 1-day VaR) with a probability of. Value at Risk. Hence it is always a larger number than the corresponding VaR. >>> interestRate. GARCH conditional volatility estimates. Implementing Risk Forecasts 6. In Python Calculate the BMI of a person using the formula BMI = ( Weight in Pounds / ( ( Height in inches ) x ( Height in inches ) ) ) x 703 and assign the value to the variable bmi. The risk measure VaR is a merely a cutoff point and does not describe the tail behavior beyond the VaR threshold. This tool is intended for use in ModelBuilder and not in Python scripting. Calculating Value At Risk in Python by Variance Co variance and Historical Simulation Sandeep Kanao. Expected Shortfall has a number of aliases: Conditional Value at Risk (CVaR) Mean. For example, every afternoon, J. Select a statistical distribution to approximate the factors that affect your data set. # ##### # # - ABOUT THE PROGRAM - # Program name : tkinter addition calculator # Program description : takes two digit as input and calculates # the sum of it. Building Logistic Regression Model. Considering the market risk importance, its evaluation it is necessary to each bank applying the current measurement methods. First, we need to calculate the sum of squares between (SSbetween), sum of squares within (SSwithin), and sum of squares total (SSTotal). Use this sample simulation to see how IBM Spectrum Symphony can accelerate time-to-results for such workload by breaking down workload. There are three commonly used methodologies for calculating VaR (Bohdalová, 2007). We can compute the variance of the single stock using python as: Hence, the variance of return of the ABC is 6. This could be handy in allocating capital to algorithms proportional some multiple of the VaR value in order. Use two or four spaces to define each logical level. While there are several advantages which have led to big popularity of VAR, anybody using it should also understand the limitations of Value At Risk as a risk management tool. 5 Simple interest value: 31500. Homework Statement Calculate 5-day 1% Value at Risk of a portfolio using Monte Carlo simulation. Since Tkinter is cross-platform so it works on both windows and Linux. VaR allows investors to calculate the most probable amount of money they would lose within the defined time horizon. We need to provide a lag value, from which the decay parameter $\alpha$ is automatically calculated. In foreign exchange (forex) trading, pip value can be a confusing topic. 
In addition to a property's market value, one of the first things you'll want to do as a real estate investor who's considering buying a purchase is determine is its operating income and costs. This would be split to give two alpha values of 2. Using Pandas, calculating the exponential moving average is easy. Value at Risk (VaR) and Conditional Value at Risk (CVaR, also known as Expected Shortfall) are also calculated for the portfolio. SAS/IML® is used with Base SAS and Oracle® to produce a system to calculate value at risk with the flexibility to reflect changes in the database in the calculation and reporting. 4 - Import the Dependencies At The Top of The Notebook. Learning objectives. Value at Risk for Agiblocks. I have listen to a lot of "Chat with Traders" lately and noticed, that a many underlined the importance of good risk management. Please check your connection and try running the trinket again. Import the necessary libraries. Let's understand how to use a range() function of. Then, you will examine the calculation of the value of options and Value at Risk. Jorion defines VaR as the product of the Initial wealth and the lowest possible simple return given a confidence level c. I'm completely stuck at one of the introductory examples for the Value-at-Risk concept in the book I'm using The VaR is defined in a following way-->. We were given the stock prices from the last 15 years (4000 values each) of 4 companies, and have had to calculate Value at Risk of the portfolio. In fact, it is misleading to consider Value at Risk, or VaR as it is widely known, to be an alternative to risk adjusted value and probabilistic approaches. 2 we show how to compute it. The ideal position size can be calculated using the formula: Pips at risk x pip value x lots traded = amount at risk, where the position size is the number of lots traded. Compared to our previous experience with R, it was more work getting all the output values with Python. Exact value requires an infinite series, but this is pretty accurate - and is more accurate for angles near 0 than elsewhere, than compared to the product or cortran algorithms outlined below. Creating a GUI using tkinter is an. V alue at risk (VaR) is a measure of market risk used in the finance, banking and insurance industries. We used the log returns as the risk factors, assuming these were normally distr. Calculation of Value at Risk for a portfolio not only requires one to calculate the risk and return of each asset but also the correlations between them. View on trinket. There are 3 elements in definition of VaR: amount of loss in value. The Value at Risk, or VaR risk measure was actually in use by actuaries long before it was reinvented for investment banking. Marginal and Component Value-at-Risk: A Python Example Value-at-risk (VaR), despite its drawbacks, is a solid basis to understand the risk characteristics of the portfolio. 1987, 1997, 2008 almost led to the collapse of the existing financial system, which is why leading experts began to develop methods, with which you can control the uncertainty that prevails in. VaR is defined as the predicted worst-case loss with a specific confidence level (for example, 95%) over a period of time (for example, 1 day). The abs() function of Python's standard library returns the absolute value of the given number. The calculation method used to calculate value at risk Contribution (VaRC) can be briefly described as follows: The approach is based on the assumption of normal distribution of price factors. 
Ask Question Asked 3 years, 10 months ago. We can use pandas to construct a model that replicates the Excel spreadsheet calculation. It is the loss that can be expected in the worst n% of cases over a given number of days. We examine five basic models for calculating value at risk, and how to assess the effectiveness of value at risk models through backtesting. Course material. I’ll use them to highlight a few features of estimation error, and then I will illustrate an easier and more accurate method to calculate estimation risk. CVaR, also known as Expected Shortfall and Expected Tail Loss (ETL),. » Calculate risk in absolute terms or relative to your benchmark, another portfolio, fund or index » Only Bloomberg provides the ability to click through to the underlying fundamental data for full risk data transparency PORT — VaR tab Analyze the tail risk of your portfolio using the latest risk modeling techniques VALUE-AT-RISK. method of calculating value at risk popular. pdf python, optimization of conditional value-at-risk, quant at risk, conditional value at risk formula, python expected shortfall, cvar normal distribution, python monte carlo value at. Python with tkinter outputs the fastest and easiest way to create the GUI applications. VAR CALCULATION. Limitations of Value at Risk 1. , the differences between the sim-ulated future portfolio values and the present portfolio value, ∆Vi. Designed to meet the enormous rise in demand for individuals with knowledge of Python in finance, students are taught the practical coding skills now required in many roles within banking and finance. Value-at-Risk The introduction of Value-at-Risk (VaR) as an accepted methodology for quantifying market risk is part of the evolution of risk management. Value-at-Risk was first used by major financial firms in the late 1980’s to measure the risk of their trading portfolios. There are multiple methods one can use in order to calculate Value at Risk. In order to estimate this risk, our tool analyzes the distribution of the model residuals (compared to reality). Homework Statement Calculate 5-day 1% Value at Risk of a portfolio using Monte Carlo simulation. Estimating the risk of loss to an algorithmic trading strategy, or portfolio of strategies, is of extreme importance for long-term capital growth. For example, given a calculated $1$-day VaR at the $99\%$ confidence level, then the portfolio is expected to lose a larger amount over a $1$-day period no more than $1$ day out of $100$. We will use the market stock data of IBM as an exemplary case study and investigate the difference in a standard and non-standard VaR calculation based on the parametric models. ,It returns a range object. Marginal and Component Value-at-Risk: A Python Example Value-at-risk (VaR), despite its drawbacks, is a solid basis to understand the risk characteristics of the portfolio. We test them under both normal and stressed market conditions using historical daily return data for capital-weighted stock indices from major markets around the world. Value at Risk in Python –Shaping Tech in Risk Management The aim of this article is to give a quick taste of how it is possible to build practical codes in Python for financial application using the case of Value at Risk (VaR) calculation. Market risk generally arises from movements in the underlying risk factors—interest rates, exchange rates, equity prices, or commodity. 
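The exponentially weighted moving average mentioned above can be sketched with pandas as follows (the toy price series and the span of 20 are my own choices, not taken from the quoted material; pandas derives the decay parameter alpha from the span):

```python
import numpy as np
import pandas as pd

np.random.seed(0)
prices = pd.Series(100 * np.exp(np.cumsum(np.random.normal(0, 0.01, 250))))
returns = prices.pct_change().dropna()

ewma_mean = returns.ewm(span=20).mean()   # exponentially weighted mean return
ewma_vol = returns.ewm(span=20).std()     # EWMA volatility estimate
print(ewma_vol.tail())
```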
You’ll learn how to use Python to calculate and mitigate risk exposure using the Value at Risk and Conditional Value at Risk measures, estimate risk with techniques like Monte Carlo simulation, and use cutting-edge technologies such as neural networks to conduct real time portfolio rebalancing. I have listen to a lot of "Chat with Traders" lately and noticed, that a many underlined the importance of good risk management. If the number is a complex number, abs() returns. This calls for indicators showing the risk exposure of farms and the effect of risk reducing measures. Section 6 presents empirical analyses to examine whether past financial crisis have resulted in the tail risk of VaR and expected shortfall. On the other hand it is obvious that the setup used here contains the classical macauley setup when the common calculation rate is zero,. Assumes normal-distribution of logarithmic returns Parametric Method ----> Assumes normal-distribution of logarithmic returns. (I do not want to make an assumption about the probability distribution-especially not asssume a Gaussian distribution. A modified approach to VCV VaR. These methods basically differ by: - distributional assumptions for the risk factors (e. For illustration, a risk manager thinks the average loss on an investment is $10 million for the worst 1 per cent of potential outcomes for a portfolio. The risk measure VaR is a merely a cutoff point and does not describe the tail behavior beyond the VaR threshold. GARCH conditional volatility estimates. Absolute value of a number is the value without considering its sign. How to calculate absolute value in Python? Python Server Side Programming Programming. For example, given a calculated$1$-day VaR at the$99\%$confidence level, then the portfolio is expected to lose a larger amount over a$1$-day period no more than$1$day out of$100$. Using the derived exceedance distribution, the approach delivers an analytical formula for the ES (see McNeil, Frey and Embrechts, 2005, p. The function computeTF computes the TF score for each word in the corpus, by document. Python enforces indentation as part of the syntax. In Python, the Pandas library makes this aggregation very easy to do, but if we don't pay attention we could still make mistakes. for the VaR I basically want to find. The standard deviation is the root mean square distance of individual set values from the set average. Access properties of feature geometry. Use this odds ratio calculator to easily calculate the ratio of odds, confidence intervals and p-values for the odds ratio (OR) between an exposed and control group. The variance of the return on stock ABC can be calculated using the below equation. The information generated by this module can prove critical in your risk management activities and help you make decisions concerning your risk exposure. We started risk management on the CFA Level 3 curriculum with a disucssion of the different types of risk that we might look to hedge, whether those be financial or non-financial. Value-at-risk (VaR) is the risk measure that estimates the maximum potential loss of risk exposure given confidence level and time period. Then, you will examine the calculation of the value of options and Value at Risk. This Python program allows the user to enter any numerical value. Within risk management, Value at Risk became the gold standard in the mid-to-late 1990s. 
The 5% Value at Risk of a hypothetical profit-and-loss probability density function Value at risk ( VaR ) is a measure of the risk of loss for investments. Get the portfolio P&L as −. Value-at-Risk and factor-based models in Python, R and Excel/VBA A financial portfolio is almost always modeled as the sum of correlated random variables. If I want to calculate CVaR using Monte Carlo prices from the 3 investments, here is what I'm thinking: 1. In this article, we will learn how to use Python's range() function with the help of different examples. An Introduction to Value at Risk1 This chapter provides an introduction to value at risk. DPVr(x) 5def PV(r z (1 1 x)) 2 PV(r) is the change in the value of the portfolio, if the asset price moves 100x%. However, we. There are many approaches to calculate VaR (historical simulation, variance-covariance, simulation). Value-at-risk (VaR) is one of the most common risk measures used in finance. Python offers a lot of options to develop GUI applications, but Tkinter is the most usable module for developing GUI (Graphical User Interface). ; Calculate the parametric VaR(99) using the np. The model we use is the sympy module. The hybrid approach combines the two most popular approach to VaR estimation: RiskMetrics and Historical Simulation. Use the link below to share a full-text version of this article with your friends and colleagues. The critical value will then use a portion of this alpha on each side of the distribution. A recent proposal using quantile regression is the class of conditional autoregressive value at risk (CAViaR) models introduced by Engle and Manganelli (2004). They wouldn’t expect us to work backwards from a given value at risk to calculate the standard deviation and link in to business valuations would they?. Logistic regression model is one of the most commonly used statistical technique for solving binary classification problem. Calculating VaR is a purely mathematical function. Absolute value of a number is the value without considering its sign. VAR is widely used and has both advantages and disadvantages. Using the derived exceedance distribution, the approach delivers an analytical formula for the ES (see McNeil, Frey and Embrechts, 2005, p. That would suggest that the CTE90 calculation for 1,000 scenarios may be sufficiently reliable. The purpose of this new series of articles will be to compute the Value at Risk and keep all results, to be able to analyze them. Trinket: run code anywhere. Learning objectives. Web version: https://apps. Access the new random value operator. Calculate the m different values of the portfolio at time t+1 using the values of the simulated n-tuples of the risk factors. The reason I ask is that when you mentioned using std deviation squared over multiple year, and reference to variance, I link that back to risk and calculating beta factors. percentile() function on sim_returns. Python offers multiple options for developing GUI (Graphical User Interface). If I want to calculate CVaR using Monte Carlo prices from the 3 investments, here is what I'm thinking: 1. We can compare VaR using another confidence levels (3%, VaR 97 or 1%, VaR 99) to help us but we are going to use the Expected Shortfall with the same confidence level (5%). The Value at Risk figure is widely used, so it is an accepted standard in buying, selling, or recommending assets. Jorion defines VaR as the product of the Initial wealth and the lowest possible simple return given a confidence level c. 
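As a concrete illustration of the historical-simulation flavour of VaR described above, the cutoff and the tail average (CVaR, or Expected Shortfall) can be read off an empirical P&L sample with NumPy (simulated toy data, not figures from any of the sources quoted here):

```python
import numpy as np

np.random.seed(42)
pnl = np.random.normal(0, 10_000, 10_000)   # one P&L observation per scenario, in dollars

var_95 = np.percentile(pnl, 5)              # 5% worst-case cutoff (a negative number)
cvar_95 = pnl[pnl <= var_95].mean()         # expected loss given the cutoff is breached
print(f"VaR(95%): {var_95:,.0f}   CVaR(95%): {cvar_95:,.0f}")
```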
a market risk committee of the Bank of International Settlement in Base1 has worked with international banks from different countries on standardizing the bank internal methods, so that the Value-at-Risk-results become comparable and therefore usable for the calculation of equity requirements. The potential loss is calculated from the volatility of risk factors. To calculate Credit Risk using Python we need to import data sets. The purpose of this new series of articles will be to compute the Value at Risk and keep all results, to be able to analyze them. The limitations of traditional mean-VaR are all related to the use of a symetrical distribution function. The ideal position size can be calculated using the formula: Pips at risk x pip value x lots traded = amount at risk, where the position size is the number of lots traded. Value-at-risk is a statistical measure of the riskiness of financial entities or portfolios of assets. The Introductory Guide to Value at Risk, covering Variance Covariance, Historical Simulation, and Monte Carlo methods of calculating Risk Exposures. How to calculate absolute value in Python? Python Server Side Programming Programming. Estimating value-at-risk using Monte Carlo. Estimating the risk of loss to an algorithmic trading strategy, or portfolio of strategies, is of extreme importance for long-term capital growth. Lets now code TF-IDF in Python from scratch. Value at risk is calculated using Monte Carlo simulation. As we have already noted in the introduction, risk measurement based on proper risk measures is one of the fundamental pillars of the risk management. A value-at-risk metric is our interpretation of the output of the value-at-risk measure. For example, a one-day 99% value-at-risk of$10 million means that 99% of the time the potential loss over a one-day period is expected to be less than or equal to $10 million. GARCH conditional volatility estimates. Various methods are possible to compute Value-at-Risk. Align the beginning and end of statement blocks, and be consistent. In this article, we will learn how to use Python's range() function with the help of different examples. time period over which risk is assessed. A formula for calculating the variance of an entire population of size N is: = ¯ − ¯ = ∑ = − (∑ =) /. There are three primary ways to calculate value at risk. It will also be an excellent opportunity to learn how to do it in Python, quickly and effectively. Estimating the risk of loss to an algorithmic trading strategy, or portfolio of strategies, is of extreme importance for long-term capital growth. Today we discussed a very quick example using python functions to calculate growth rates using CAGR. Assume the value of the weight in pounds has already been assigned to the variable w and the value of the height in inches has been assigned to the variable h. Expected Shortfall (ES) is the negative of the expected value of the tail beyond the VaR (gold area in Figure 3). To be able to compare with the short-time SMA we will use a span value of$20$. This method does not generate the variance covariance matrix and. Within risk management, Value at Risk became the gold standard in the mid-to-late 1990s. Value at Risk for Agiblocks. In fact, it is misleading to consider Value at Risk, or VaR as it is widely known, to be an alternative to risk adjusted value and probabilistic approaches. Calculating risk measures as Value at Risk (VaR) and Expected Shortfall (ES) has become popular for institutions and agents in financial markets. 
I was wondering what the value of C is in the listed equation. There are three primary ways to calculate value at risk. The correct way to calculate relative risk using 2 by 2 table. Value at risk is really concerned with measuring the given probability of loss within a specific investment portfolio over a defined period of time. Many techniques for risk management have been developed for use in institutional settings. Applications are run using Python and the NumPy and SciPy libraries. Credit Suisse First Boston (CSFB) launched in 1997 the model CreditRisk+ which aims at calculating the loss distribution of a credit portfolio on the basis of a methodology from actuarial mathematics. This is a forerunner for the use of yield curves in the risk calculations. The limitations of traditional mean-VaR are all related to the use of a symetrical distribution function. The independent t-test is used to compare the means of a condition between 2 groups. Net present value of any asset or investment is the present worth of that asset or investment based on analysis of future returns using appropriate discounting. For the playing card example, use the table of probabilities. The ideal position size can be calculated using the formula: Pips at risk x pip value x lots traded = amount at risk, where the position size is the number of lots traded. One common metric used by risk analysis is the "Value at Risk" or "VaR" of a portfolio--a measure of the amount of money likely to be lost on it during a particular period of time. 0 is not that far off the calculated value 7. Here, we will look at a way to calculate Sensitivity and Specificity of the model in python. Expected Shortfall (ES) is the negative of the expected value of the tail beyond the VaR (gold area in Figure 3). something like btc-e. Please check your connection and try running the trinket again. Python in Finance is a unique, easy-to-follow, introductory course which requires no prior programming knowledge or experience. The ratio of the largest and smallest CTE values is less than 101 percent. Warning : wordpress copies " differently. This then leads into the modeling of portfolios and calculation of optimal portfolios based upon risk. One and two-sided confidence intervals are reported, as well as Z-scores. Transform the independent standard normal variables into a set of returns corresponding to each risk factor using the matrix C. The calculation method used to calculate value at risk Contribution (VaRC) can be briefly described as follows: The approach is based on the assumption of normal distribution of price factors. VAR can be. It will be equal to the price in day T minus 1, times the daily return observed in day T. #Importing necessary libraries import sklearn as sk import pandas as pd import numpy as np import scipy as sp. Series Navigation ‹ Value at Risk (VaR) Three Methodologies for Calculating VaR ›. It measures the volatility of a portfolio of assets. When naming variables, note that Python is case sensitive, so value is not the same as Value. Python is a useful scripting language and is the preferred one for ArcGIS. The problem is coding each individual term is time-consuming and repetitive. For the playing card example, use the table of probabilities. Obtain the price of each risk factor one day from now using the formula. That's randomly select 21 days from historical dataset, calculate the return over this randomly drawed 21 days. 
In addition to a property's market value, one of the first things you'll want to do as a real estate investor who's considering buying a purchase is determine is its operating income and costs. Value At Risk, known as VAR, is a common tool for measuring and managing risk in the financial industry. A value-at-risk metric, such as one-day 90% USD VaR, is specified with three items: a time horizon; a probability; a currency. In the following examples, input and output are distinguished by the presence or absence of prompts (>>> and …): to repeat the example, you must type everything after the prompt, when the prompt appears; lines that do not begin with a prompt are output from the interpreter. 9070294784580498 >>> 1. If a risk measure is intended to support a metric that is a value-at-risk metric, then the measure is a value-at-risk measure. Dynamic Risk Budgeting in Investment Strategies: The Impact of Using Expected Shortfall Instead of Value at Risk Wout Aarts Abstract In this thesis we formalize an investment strategy that uses dynamic risk budgeting for insurance companies. At its most basic, a risk value is a simple multiplication of an estimate for probability of the risk and the cost of its impact. Financial Modeling for Algorithmic Trading using Python 3. # ##### # # - ABOUT THE PROGRAM - # Program name : tkinter addition calculator # Program description : takes two digit as input and calculates # the sum of it. VALUE-AT-RISK at GMAC While there are various ways of calculating Value-at-Risk, we use a two factor, interest rate and spread, correlation model. SymPy allows you to work with random variable expressions symbolically, including taking their expectation. Monte Carlo simulation is a popular method and is used in this example. Expected shortfall (ES) is a risk measure—a concept used in the field of financial risk measurement to evaluate the market risk or credit risk of a portfolio. VAR expresses risk in terms of a single currency value. How to Find the Derivative of a Function in Python. The discount factor is useful while calculating the present value of future cashflows. Implementing Risk Forecasts 6. 718281827) using infinite series. com Main Features: - Add the stocks and currency pairs of your choice - 2-year historical data from Google Finance - User-defined portfolio consisting stocks you have added - View price chart, return chart and volatility chart using Exponentially Weighted Moving Average (EWMA) - Monitor your portfolio market values, profit/loss, portfolio return. It provides an estimate of the potential loss for a portfolio of assets based on the historical performance. # one-way 5% quantile, critical value is 1. Simulation Methods for VaR for Options and Bonds 8. md Calculating-Value-At-Risk-in-Python-by-Variance-Covariance-and-Historical-Simulation-Sandeep-Kanao-. Value-at-Risk measures the amount of potential loss that could happen in a portfolio of investments over a given time period with a certain confidence interval. For example, a one-day 99% value-at-risk of$10 million means that 99% of the time the potential loss over a one-day period is expected to be less than or equal to $10 million. The application of VaR has been extended from its initial use in securities houses to commercial banks and corporates, and from market risk to credit risk, following its introduction in October. , a plotting library) or have to be started as a separate system process (e. 
Value at Risk (VaR) is the minimum amount of loss any investment may incur over given period of time with certain probability. In this chapter, we will address in details the issue of such risk measures. But how to measure this threshold? Most of the interviewed traders are swing traders, that means they can't simply take the position size as "maximal to loose. Value at Risk (VaR) Value at risk ( VaR ) is the maximum potential loss expected on a portfolio over a given time period, using statistical methods to calculate a confidence level. Python is a useful scripting language and is the preferred one for ArcGIS. Keywords: Burned Area Emergency Response (BAER), Values-at-Risk, economic assessment, implied value Rocky Mountain Research Station Natural Resources Research Center 2150 Centre Avenue. probabilities using Monte Carlo simulation. 4 - Import the Dependencies At The Top of The Notebook. , a Python development environment). R/Python: R/Julia: MATLAB/Python: MATLAB/Julia: Python/Julia: 1. Value At Risk (VaR) is one of the most important market risk measures. Credit Suisse First Boston (CSFB) launched in 1997 the model CreditRisk+ which aims at calculating the loss distribution of a credit portfolio on the basis of a methodology from actuarial mathematics. This post is an extension of the previous post. Jorion defines VaR as the product of the Initial wealth and the lowest possible simple return given a confidence level c. Write a Python program to Calculate Simple Interest with example. simulation we • Value portfolio today • Sample once from the multivariate distributions of the ∆xi • Use the ∆xi to determine market variables at end of one day • Revalue the portfolio at the end of day. Create a new Python notebook, making sure to use the Python [conda env:cryptocurrency-analysis] kernel. Critical values are calculated using a mathematical function where the probability is provided as an argument. Python with tkinter outputs the fastest and easiest way to create the GUI applications. Calculate the market variance of your portfolio by squaring the market risk of your portfolio. The function computeTF computes the TF score for each word in the corpus, by document. Conditional Value at Risk (CVaR): The average size of the loss that can be expected when it exceeds the VaR level. VAR CALCULATION. It lets us ask go from "how far is a value from the mean" to "how likely is a value this far from the mean to be from the same group of observations?" Thus, the probability derived from the Z-score and Z-table will answer our wine based questions. Use this odds ratio calculator to easily calculate the ratio of odds, confidence intervals and p-values for the odds ratio (OR) between an exposed and control group. This then leads into the modeling of portfolios and calculation of optimal portfolios based upon risk. Ask Question Asked 3 years, 10 months ago. Calculate the Value at Risk (VaR) for a sample investment portfolio by running a Monte Carlo simulation in IBM Spectrum Symphony. Assumes normal-distribution of logarithmic returns Parametric Method ----> Assumes normal-distribution of logarithmic returns. VaR = 49,706. 4161618430166989 Alternatively, you can do bootstraps. Calculate the simulated profits and losses, i. We'll also teach you the difference between VAR and CVAR. The Introductory Guide to Value at Risk, covering Variance Covariance, Historical Simulation, and Monte Carlo methods of calculating Risk Exposures. 
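As a rough illustration of the variance-covariance (parametric) method named above, here is a minimal sketch assuming normally distributed returns; the volatility, mean and portfolio value are invented for illustration.

```python
from scipy.stats import norm

def parametric_var(portfolio_value, mu, sigma, confidence=0.95, horizon_days=1):
    """One-sided parametric VaR under a normal-returns assumption.

    mu and sigma are the mean and standard deviation of daily returns;
    the square-root-of-time rule scales sigma to the chosen horizon.
    """
    z = norm.ppf(1 - confidence)            # e.g. about -1.645 at 95%
    scaled_mu = mu * horizon_days
    scaled_sigma = sigma * horizon_days ** 0.5
    return -(scaled_mu + z * scaled_sigma) * portfolio_value

print(parametric_var(1_000_000, mu=0.0005, sigma=0.01, confidence=0.95))
print(parametric_var(1_000_000, mu=0.0005, sigma=0.01, confidence=0.99, horizon_days=10))
```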
Credit Metrics estimates risk of portfolio based on the changes in obligators credit quality. Calculate the market variance of your portfolio by squaring the market risk of your portfolio. Calculation of risks using the Value at Risk method In recent decades, the global economy has regularly fallen into the maelstrom of financial crises. Keywords: Burned Area Emergency Response (BAER), Values-at-Risk, economic assessment, implied value Rocky Mountain Research Station Natural Resources Research Center 2150 Centre Avenue. Jorion defines VaR as the product of the Initial wealth and the lowest possible simple return given a confidence level c. This post will explain how to use dictionaries in Python. And finally, two functions (simple_optimise_return and optimise_risk_return) to optimise the portfolio for high returns and the risk/return ratio, respectively. Select a statistical distribution to approximate the factors that affect your data set. pdf python, optimization of conditional value-at-risk, quant at risk, conditional value at risk formula, python expected shortfall, cvar normal distribution, python monte carlo value at. I am working on a risk management assignment but stuck what to do. But in order to understand the application of copula function in Credit. As far as I know, Value at Risk is always Value at Risk. For example, if the 95% one-month VAR is$1 million, there is 95% confidence that over the next month the portfolio will not lose more than $1 million. Multiply each value times its respective probability. in measuring the capital charge for market risk but use the VaR methodology for internal risk measurement purposes. ) See Translation of: Python. interest rates, exchange rates and stock prices). When using the variance. 0 is not that far off the calculated value 7. For example, if the EUR/USD moves from 1. The set with a smaller standard deviation has individual returns that are closer to the average return. VAR expresses risk in terms of a single currency value. For cosine use (2*i) in place of (2*i + 1). It is defined as the maximum dollar amount expected to be lost over a given time horizon, at a pre-defined confidence level. To be able to compare with the short-time SMA we will use a span value of$20$. Returns a value based on a specified Python expression. We will be using copula function in Credit Metric to calculate VaR. Use of simulations, resampling, or Pareto distributions all help in making a more accurate prediction, but they are still flawed for assets with significantly non. The built-in function range() generates the integer numbers between the given start integer to the stop integer, i. With Python expressions and the Code Block parameter, you can. Cheung & Powell (2012), using a step-by-step teaching study, showed how a nonparametric historical VaR. Python offers a lot of options to develop GUI applications, but Tkinter is the most usable module for developing GUI (Graphical User Interface). To be able to compare with the short-time SMA we will use a span value of$20$. Due to the method it is not a great method for risk management - but can get you in the ball park. We will use the market stock data of IBM as an exemplary case study and investigate the difference in a standard and non-standard VaR calculation based on the parametric models. There are three methods to calculate VaR: Monte-Carlo Method—> Assumes normal-distribution of logarithmic returns. 
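A minimal Monte Carlo sketch of the third method listed above; geometric Brownian motion is assumed for the price path, and the drift, volatility, seed and position size are placeholders.

```python
import numpy as np

def monte_carlo_var(s0, mu, sigma, days, n_sims, confidence=0.99, seed=42):
    """Simulate terminal prices under GBM and read VaR off the loss distribution."""
    rng = np.random.default_rng(seed)
    dt = 1 / 252
    z = rng.standard_normal((n_sims, days))
    # Compound daily GBM steps out to the horizon.
    steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(steps, axis=1))
    losses = s0 - paths[:, -1]
    return np.percentile(losses, confidence * 100)

# 5-day 99% VaR of a single position worth 100.
print(monte_carlo_var(s0=100.0, mu=0.05, sigma=0.25, days=5, n_sims=100_000))
```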
In the following examples, input and output are distinguished by the presence or absence of prompts (>>> and …): to repeat the example, you must type everything after the prompt, when the prompt appears; lines that do not begin with a prompt are output from the interpreter. Naïve algorithm. Let's assume you have a$10,000 account and you risk 1% of your account on each trade. Volume indicates how many stocks were traded. There are three primary ways to. 5% on either side of the. Today we discussed a very quick example using python functions to calculate growth rates using CAGR. Calculation of Value at Risk for a portfolio not only requires one to calculate the risk and return of each asset but also the correlations between them. A contradiction when calculating the expected value of a discrete random variable. Sort the returns. The hybrid approach combines the two most popular approach to VaR estimation: RiskMetrics and Historical Simulation. In order to use this module, you must first install it. # one-way 5% quantile, critical value is 1. Implementing With Python. CVA is calculated as the difference between the risk free value and the true risk-adjusted value. To use a value-at-risk measure, we must implement it. Some of you may remember that I posted about the SCOR Framework for Supply Chain Risk Management earlier this year, and today I will take a closer look at it again, because I recently found a post on scdigest. Hence absolute of 10 is 10, -10 is also 10. PYTHON TOOLS FOR BACKTESTING • NumPy/SciPy - Provide vectorised operations, optimisation and linear algebra routines all needed for certain trading strategies. The use of the Value at Risk method to measure interest rate risk, though, calls for application of specific behavior, differing f rom that when quantifying other types of risk by means of the. When we calculate the VaR with 5% of confidence level (VaR 95), we see that both assets have the same result. Calculate the market variance of your portfolio by squaring the market risk of your portfolio. Expected Shortfall (ES) is the negative of the expected value of the tail beyond the VaR (gold area in Figure 3). In this article, we will learn how to use Python's range() function with the help of different examples. Next, Python finds the square of that number using an Arithmetic Operator. Exact value requires an infinite series, but this is pretty accurate - and is more accurate for angles near 0 than elsewhere, than compared to the product or cortran algorithms outlined below. The function computeIDF computes the IDF score of every word in the corpus. The Value-at-Risk Concept Let PV(r) denote the present value of a given portfolio at price r of the underlying assets. In foreign exchange (forex) trading, pip value can be a confusing topic. View on trinket. 4 - Import the Dependencies At The Top of The Notebook. 3 Need for Value-at-Risk The concept and use of Value-at-Risk is recent. GARCH conditional volatility estimates. Access properties of feature geometry. 377 p-value = 0. We must set up a loop that begins in day 1 and ends at day 1,000. CVA is calculated as the difference between the risk free value and the true risk-adjusted value. This tool is intended for use in ModelBuilder and not in Python scripting. How to Find the Derivative of a Function in Python. Various methods are possible to compute Value-at-Risk. We will see that TVaR reflects the shape of the tail beyond VaR threshold. How to use the calculator. 
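The CAGR remark above can be made concrete with a one-line helper; the beginning value, ending value and the 5-year span are examples only.

```python
def cagr(begin_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two values."""
    return (end_value / begin_value) ** (1 / years) - 1

print(f"{cagr(10_000, 18_000, 5):.2%}")  # roughly 12.47% per year
```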
This then leads into the modeling of portfolios and calculation of optimal portfolios based upon risk. Value at Risk for Agiblocks. Next, Python finds the square of that number using an Arithmetic Operator. In this article, we show how to find the derivative of a function in Python. Hence it is always a larger number than the corresponding VaR. The roots of information value, I think, are in information theory proposed by Claude Shannon. This is a great feature that a lot of data-streams ask their customers to pay a pretty penny for each month. Use {} curly brackets to construct the dictionary, and [] square brackets to index it. Some of you may remember that I posted about the SCOR Framework for Supply Chain Risk Management earlier this year, and today I will take a closer look at it again, because I recently found a post on scdigest. Value-at-Risk The introduction of Value-at-Risk (VaR) as an accepted methodology for quantifying market risk is part of the evolution of risk management. Write a Python Program to Calculate the square of a Number using Arithmetic Operators and Functions with an example. The standard deviation is the root mean square distance of individual set values from the set average. ,It returns a range object. This is a forerunner for the use of yield curves in the risk calculations. To study the relationship between these 2 indices, we first calculated the rolling 20-days correlation of the VIX and VVIX returns from January 2007 to March 2020. We will use the BMI formula, which is weight/(height**2). Align the beginning and end of statement blocks, and be consistent. We started risk management on the CFA Level 3 curriculum with a disucssion of the different types of risk that we might look to hedge, whether those be financial or non-financial. 3 The value-at-risk of levelp (usually p 5 5% or p 5 1%) is defined as the infimum value, such that P~DPVr~x. Expected Shortfall (ES) is the negative of the expected value of the tail beyond the VaR (gold area in Figure 3). ANOVA is an omnibus test, meaning it tests the data as a whole. VAR calculation is maybe not the best business use case in order to illustrate that need but I will reuse it as it has already been defined. Shareable Link. Then, you will examine the calculation of the value of options and Value at Risk. The information generated by this module can prove critical in your risk management activities and help you make decisions concerning your risk exposure. a benchmark of choice (constructed with wxPython). This is a great feature that a lot of data-streams ask their customers to pay a pretty penny for each month. py (pronounced pie dot pie), evil laugh. In this paper, we compare two risk measures, Value at Risk (VaR) and Expected Shortfall (ES) in their ability to capture risk associated with tail thickness. Risk ratio here is the relative increase in chance of the outcome being 1 rather than 0 if the predictor is 1 rather than 0. Expected Shortfall has a number of aliases: Conditional Value at Risk (CVaR) Mean. 2) For Hamming Distance the article says 'If the predicted value (x) and the real value (y) are same, the distance D will be equal to 0. (VaR is capitalized differently to distinguish it from VAR, which is used to denote variance. The proposed model is based on a combination of the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model and the Extreme Learning Machine (ELM), and can be used to calculate Value-at-Risk (VaR). Program to calculate BMI in Python. 
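The relationship between VaR and Expected Shortfall (CVaR) described above can be sketched on a vector of returns; the simulated returns below are placeholders.

```python
import numpy as np

def var_and_es(returns, confidence=0.95):
    """Historical VaR and Expected Shortfall, both reported as positive losses."""
    cutoff = np.percentile(returns, (1 - confidence) * 100)
    var = -cutoff
    es = -returns[returns <= cutoff].mean()   # average of the tail beyond VaR
    return var, es

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=10_000)
var, es = var_and_es(returns, 0.95)
print(f"95% VaR: {var:.4f}, 95% ES: {es:.4f}  (ES is always at least as large)")
```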
A recent proposal using quantile regression is the class of conditional autoregressive value at risk (CAViaR) models introduced by Engle and Manganelli (2004). ipynb README. What is Value At Risk ? VAR is a method of calculating and controlling exposure to Market Risk. ARMAX-GARCH Toolbox (Estimation, Forecasting, Simulation and Value-at-Risk Applications). A value-at-risk metric is our interpretation of the output of the value-at-risk measure. Multiply each value times its respective probability. More precisely, it is a statement of the following form: With probability q the potential loss will not exceed the Value at Risk figure [→ one sided confidence interval]. Learning objectives. Here, in part 1 of this short series on the topic, we. For example, if you are calculating the risk variance of a proposed investment scenario, choose a distribution that. For example, every afternoon, J. Value at Risk (VaR) is the minimum amount of loss any investment may incur over given period of time with certain probability. In some cases, a method equivalent to the variance covariance approach is used to calculate VAR. Hence it is always a larger number than the corresponding VaR. In this context, I will present the measurement method Value at Risk (VaR) and calculating methods of VaR. VaR is always specified with a given confidence level α – typically α=95% or 99%. " CISOs can use this risk potential. 3 The value-at-risk of levelp (usually p 5 5% or p 5 1%) is defined as the infimum value, such that P~DPVr~x. The discount factor is useful while calculating the present value of future cashflows. This is a typical topic which is greatly misunderstood by students who attend typical BSc/MSc Finance degrees (or any derived degree which has (mathematical) finance related topics) as well as their professors who provide the lecture material. In this article, we will learn how to use Python's range() function with the help of different examples. Learn what value at risk is, what it indicates about a portfolio, and how to calculate the value at risk (VaR) of a portfolio using Microsoft Excel. Anybody can do Value at Risk: A Teaching Study using Parametric Computation and Monte Carlo Simulation Abstract The three main Value at Risk (VaR) methodologies are historical, parametric and Monte Carlo Simulation. " CISOs can use this risk potential. • Pandas - Provides the DataFrame, highly useful for “data wrangling” of time series data. After creating a new integer field in the table to store an integer (let's call it Comparison), the basic idea was to:. Python is a useful scripting language and is the preferred one for ArcGIS. This then leads into the modeling of portfolios and calculation of optimal portfolios based upon risk. worst value of the 1,000 scenarios. So here i am using Tkinter module to create a simple python calculator. Value at Risk in Python – Shaping Tech in Risk Management Published by BSIC on 12 March 2017 12 March 2017 The aim of this article is to give a quick taste of how it is possible to build practical codes in Python for financial application using the case of Value at Risk (VaR) calculation. DPV r (x) 5 def PV(r z (1 1 x)) 2 PV(r) is the change in the value of the portfolio, if the asset. This way the Mark to market can be accessed by the spread and portfolio risk can accessed by using risk calculations based on the common rate. Calculate the daily returns. Value at Risk Value at Risk is being widely used as measure of market risk of an asset or of a portfolio. 
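For the GARCH-based conditional volatility mentioned above, a rough sketch is possible with the third-party arch package (assumed to be installed); the synthetic returns, the GARCH(1,1) choice and the 99% z-value are illustrative assumptions rather than the toolbox referenced in the text.

```python
import numpy as np
from arch import arch_model   # third-party package, assumed installed

rng = np.random.default_rng(3)
returns = rng.normal(0, 1, 1000)           # placeholder daily returns, in percent

res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
cond_vol = res.conditional_volatility       # one estimate per observation

# One-day 99% parametric VaR from the latest conditional volatility estimate.
var_99 = 2.326 * cond_vol[-1]
print(res.params)
print(f"1-day 99% VaR (percent of portfolio value): {var_99:.2f}")
```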
In this post, we are going to walk you through an example of calculating the weighted average cost of capital (WACC) using Excel. Value at Risk, often referred to as VaR, is a way to estimate the risk of a single day negative price movement. 5 Simple interest value: 31500. Credit Metrics is a tool for assessing portfolio risk and is used widely to find Value at Risk (VaR) of a portfolio. Multiply each value times its respective probability. The overall process is covered and aspects of the calculation are highlighted. This calculation of probability of being past a certain Z-score is useful to us. A pip is a unit of measurement for currency movement and is the fourth decimal place in most currency pairs. Your program should return the corresponding estimation of π by using the formula from method #1: π = Circumference / Diameter. At its most basic, a risk value is a simple multiplication of an estimate for probability of the risk and the cost of its impact. The Introductory Guide to Value at Risk, covering Variance Covariance, Historical Simulation, and Monte Carlo methods of calculating Risk Exposures. VAR calculation is maybe not the best business use case in order to illustrate that need but I will reuse it as it has already been defined. When naming variables, note that Python is case sensitive, so value is not the same as Value. Open is the price of the stock at the beginning of the trading day (it need not be the closing price of the previous trading day), high is the highest price of the stock on that trading day, low the lowest price of the stock on that trading day, and close the price of the stock at closing time. Homework Statement Calculate 5-day 1% Value at Risk of a portfolio using Monte Carlo simulation. We need to provide a lag value, from which the decay parameter $\alpha$ is automatically calculated. I want to use the historical data. Learning objectives. To calculate a share's VAR complete the yellow input cells, click on the hyperlinks to download 3 years. Import the necessary libraries. Considering the market risk importance, its evaluation it is necessary to each bank applying the current measurement methods. It quantifies the value of risk to give a maximum possible loss for a company or a stock or a portfolio. I have S&P 500 returns and have calucated the 5% Value at Risk. Let us denote these values by Vt+1,1,Vt+1,2,,Vt+1,m. The analysis of variance (ANOVA) can be thought of as an extension to the t-test. This course will teach you the essential elements of Python to build practically useful applications and conduct data analysis for finance. In order to estimate this risk, our tool analyzes the distribution of the model residuals (compared to reality). Using Python to calculate TF-IDF. Your challenge consists of writing a Python script that prompts the end-user to enter both the diameter and the circumference of a round object. #Importing necessary libraries import sklearn as sk import pandas as pd import numpy as np import scipy as sp. Value at Risk (VaR) is a measurement of the incurred risk of an investment expressed as the most likely maximum loss of a portfolio or an asset give a confidence interval (CI) and time horizon. method of calculating value at risk popular. To calculate a share's VAR complete the yellow input cells, click on the hyperlinks to download 3 years. Select a statistical distribution to approximate the factors that affect your data set. Value at risk is calculated using Monte Carlo simulation. Value At Risk interpretation. 
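The position-sizing formula quoted earlier (pips at risk x pip value x lots traded = amount at risk) can be rearranged into a small helper; the account size, risk fraction, stop distance and per-lot pip value are illustrative only.

```python
def position_size_lots(account_value, risk_fraction, stop_loss_pips, pip_value_per_lot):
    """Number of lots such that hitting the stop loses exactly the risked amount."""
    amount_at_risk = account_value * risk_fraction
    return amount_at_risk / (stop_loss_pips * pip_value_per_lot)

# Risk 1% of a 10,000 account with a 50-pip stop; assume 10 per pip per standard lot.
print(position_size_lots(10_000, 0.01, 50, 10.0))  # -> 0.2 lots
```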
If I want to calculate CVaR using Monte Carlo prices from the 3 investments, here is what I'm thinking: 1. The function computeIDF computes the IDF score of every word in the corpus. Let's understand how to use a range() function of. sqrt(21) * 1. A modified approach to VCV VaR. Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is: = (∑ = − (∑ =)) ⋅ −. For details and proof of the method, please read [13,21]. It is defined as the maximum dollar amount expected to be lost over a given time horizon, at a pre-defined confidence level. It measures the volatility of a portfolio of assets. The independent t-test is used to compare the means of a condition between 2 groups. If you recall the basics of the notebook where we provided an introduction on market risk measures and VAR, Calculating Sharpe Ratio with Python. Marginal and Component Value-at-Risk: A Python Example Value-at-risk (VaR), despite its drawbacks, is a solid basis to understand the risk characteristics of the portfolio. Use {} curly brackets to construct the dictionary, and [] square brackets to index it. You’ll learn how to use Python to calculate and mitigate risk exposure using the Value at Risk and Conditional Value at Risk measures, estimate risk with techniques like Monte Carlo simulation, and use cutting-edge technologies such as neural networks to conduct real time portfolio rebalancing. # ##### # # - ABOUT THE PROGRAM - # Program name : tkinter addition calculator # Program description : takes two digit as input and calculates # the sum of it. The most significant advantage of using the median() method is that the data-list does not need to be sorted before being sent as a parameter to the median() function. Python Calculator Tutorial - Getting Started With Tkinter. We started risk management on the CFA Level 3 curriculum with a disucssion of the different types of risk that we might look to hedge, whether those be financial or non-financial. The following tool visualize what the computer is doing step-by-step as it executes the said program: There was a problem connecting to the server. There are many approaches to calculate VaR (historical simulation, variance-covariance, simulation). Calculation of risks using the Value at Risk method In recent decades, the global economy has regularly fallen into the maelstrom of financial crises. probabilities using Monte Carlo simulation. Calculate Value at Risk (VaR) for a specific confidence interval by multiplying the standard deviation by the appropriate normal distribution factor. Value-at-risk is a statistical measure of the riskiness of financial entities or portfolios of assets. com a simplified , expected shortfall normal distribution formula, norm. Instead, use a simple Decision Tree to combine phase-specific risk and cash flow to create a technically correct eNPV. Variance from value to value was 20-50% at some points, that's a very high variance. ipynb README. Value at risk (also VAR or VaR) is the statistical measure of risk. Financial Markets, Prices and Risk 2. Value at Risk for Agiblocks. Note: The retrieval of data from Yahoo is optional and the portfolio optimization process does not in any way depend on Yahoo data. The "expected shortfall at q% level" is the expected return on the portfolio in the worst % of cases. This was developed in 1993 in response to the collapse of Barings The greater the volatility, the greater the risk. 
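Because marginal and component VaR are mentioned above, here is a compact variance-covariance sketch for a two-asset portfolio; the weights and covariance numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

weights = np.array([0.6, 0.4])
cov = np.array([[0.0100, 0.0018],
                [0.0018, 0.0036]])         # assumed covariance of returns
portfolio_value = 1_000_000
z = -norm.ppf(0.05)                         # about 1.645 for 95% confidence

sigma_p = np.sqrt(weights @ cov @ weights)
portfolio_var = z * sigma_p * portfolio_value

# Marginal VaR: sensitivity of VaR to each weight; component VaR sums to the total.
marginal = z * (cov @ weights) / sigma_p * portfolio_value
component = weights * marginal
print(portfolio_var, component, component.sum())
```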
thinkorswim RTD/DDE data into Python Many may not know it, but thinkorswim provides users the ability to access real time data (RTD) in excel. When using the variance. DPV r (x) 5 def PV(r z (1 1 x)) 2 PV(r) is the change in the value of the portfolio, if the asset. After train the model we will test the model using random noise until e. On the other hand it is obvious that the setup used here contains the classical macauley setup when the common calculation rate is zero,. Value at risk (VAR or sometimes VaR) has been called the "new science of risk management ," but you don't need to be a scientist to use VAR. If I want to calculate CVaR using Monte Carlo prices from the 3 investments, here is what I'm thinking: 1. The weighted average cost of capital (WACC) is the rate that a company is expected to pay on average to all its security holders to finance its assets. The fastest methods rely on simplifying assumptions about changes in underlying risk factors and about how a portfolioÕs value responds to these changes in the risk factors. The reason for my belief is the similarity information value has with a widely used concept of entropy in. Output: As you can see there is a substantial difference in the value-at-risk calculated from historical simulation and variance-covariance approach. One and two-sided confidence intervals are reported, as well as Z-scores. Most brokers provide fractional pip pricing, so you'll also see a fifth decimal place such as in 1. Value-at-Risk Credit Value Adjustments Time Series Analysis Bayesian Statistics Reporting Python Quant Platform — 2 Infrastructure and Applications Python Full-Fledged Python Stack Deployment Powerful, Dedicated Server Infrastructure Applications Derivatives, Portfolio, Trading, Risk, Analysis 32 cores 96 GB RAM 6 TB disk NumPy, SciPy,. Exact value requires an infinite series, but this is pretty accurate - and is more accurate for angles near 0 than elsewhere, than compared to the product or cortran algorithms outlined below. Python offers multiple options for developing GUI (Graphical User Interface). It is expected to improve de-fensibility of VAR valuation and post-fire emergency treatment decisions. It estimates the VaR of a portfolio by applying exponentially declining weights to past returns and then finding the appropriate percentile of this time-weighted empirical distribution. But in order to understand the application of copula function in Credit. The information generated by this module can prove critical in your risk management activities and help you make decisions concerning your risk exposure. pdf python, optimization of conditional value-at-risk, quant at risk, conditional value at risk formula, python expected shortfall, cvar normal distribution, python monte carlo value at. Value At Risk interpretation. The optimizer can be used with historical price data from any source, such as Bloomberg providing that data can be placed in columns (one column per symbol) in any spreadsheet. ,It returns a range object. The $10$-day Var is used to set market-risk capital requirements and the $1$-day VaR is used in back-testing to check the fidelity of the calculation. How to Find the Derivative of a Function in Python. Some of you may remember that I posted about the SCOR Framework for Supply Chain Risk Management earlier this year, and today I will take a closer look at it again, because I recently found a post on scdigest. This calls for indicators showing the risk exposure of farms and the effect of risk reducing measures. 
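The exponentially weighted idea above can be sketched with pandas' ewm; the span of 20 follows the text, while the synthetic price series is a placeholder.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))
returns = prices.pct_change().dropna()

# Exponentially weighted volatility (decay implied by span=20), annualised.
ewma_vol = returns.ewm(span=20).std() * np.sqrt(252)
print(ewma_vol.tail())
```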
Calculating Value At Risk In Excel & Python This post will take you through the step-by-step process to understand and compute VaR in Excel and Python using Historical Method and Variance-Covariance approach. Value At Risk (VaR) is one of the most important market risk measures. First, we need to calculate the sum of squares between (SSbetween), sum of squares within (SSwithin), and sum of squares total (SSTotal). where $\phi$ is the normal probability density function. Value at Risk for Agiblocks. Calculating Value At Risk in Python by Variance Co variance and Historical Simulation Sandeep Kanao. Then print the result using conditional statements. Here, we will look at a way to calculate Sensitivity and Specificity of the model in python. Select a statistical distribution to approximate the factors that affect your data set. Basicly they don't hold any position above a defined maximal value (like some percantage of their booksize). Credit Suisse First Boston (CSFB) launched in 1997 the model CreditRisk+ which aims at calculating the loss distribution of a credit portfolio on the basis of a methodology from actuarial mathematics. This course is a component of the Data Analysis and Programming for Finance Professional Certificate. In Python, the Pandas library makes this aggregation very easy to do, but if we don't pay attention we could still make mistakes. For example, every afternoon, J. To find the partial value due to each outcome, multiply the value of the outcome times its probability. The function computeTF computes the TF score for each word in the corpus, by document. 3 Banks are free to use models such as variance-covariance matrices (parametric approach), historical. (e is also known as Euler's number and Napier's constant. (I do not want to make an assumption about the probability distribution-especially not asssume a Gaussian distribution. Tail-value-at-risk (TVaR) is risk measure that is in many ways superior than VaR. The variance of the return on stock ABC can be calculated using the below equation. Steps to make it work: Install R (and Rstudio). But in order to understand the application of copula function in Credit. The "expected shortfall at q% level" is the expected return on the portfolio in the worst % of cases. Naïve algorithm. Value-at-Risk is now a widely used quantitative tool to measure market risk. Value at risk (also VAR or VaR) is the statistical measure of risk. Use a for loop to calculate a Taylor Series ¶ If we want to get closer to the value of. The ideal position size can be calculated using the formula: Pips at risk x pip value x lots traded = amount at risk, where the position size is the number of lots traded. Python offers a lot of options to develop GUI applications, but Tkinter is the most usable module for developing GUI (Graphical User Interface). As we have already noted in the introduction, risk measurement based on proper risk measures is one of the fundamental pillars of the risk management. ; Open the script, make sure your working directory is the folder with all the files and install the required packages at. Considering the market risk importance, its evaluation it is necessary to each bank applying the current measurement methods. # one-way 5% quantile, critical value is 1. We need to provide a lag value, from which the decay parameter $\alpha$ is automatically calculated. VAR can be. Create a new Python notebook, making sure to use the Python [conda env:cryptocurrency-analysis] kernel. Course material. 
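The computeTF and computeIDF functions referred to above are not reproduced in the text; the following is one plausible minimal implementation of that pattern, with hypothetical names and example documents.

```python
import math

def compute_tf(word_counts, document_words):
    """Term frequency: count of each word divided by the document length."""
    n = len(document_words)
    return {word: count / n for word, count in word_counts.items()}

def compute_idf(documents):
    """Inverse document frequency over a list of bag-of-words dictionaries."""
    n = len(documents)
    idf = {}
    for doc in documents:
        for word in doc:
            idf[word] = idf.get(word, 0) + 1
    return {word: math.log(n / df) for word, df in idf.items()}

docs = [{"risk": 2, "value": 1}, {"value": 3, "python": 1}]
idf = compute_idf(docs)
tf = compute_tf(docs[0], ["risk", "risk", "value"])
print({w: tf[w] * idf[w] for w in tf})
```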
Value at Risk tries to provide an answer, at least within a reasonable bound. At a high level, VaR indicates the probability of the losses which will be more than a pre-specified threshold dependent on. Calculate the m different values of the portfolio at time t+1 using the values of the simulated n-tuples of the risk factors. management area has been the emergence of Value-at-Risk (VaR). Dynamic Risk Budgeting in Investment Strategies: The Impact of Using Expected Shortfall Instead of Value at Risk Wout Aarts Abstract In this thesis we formalize an investment strategy that uses dynamic risk budgeting for insurance companies. The most significant advantage of using the median() method is that the data-list does not need to be sorted before being sent as a parameter to the median() function. This could be handy in allocating capital to algorithms proportional some multiple of the VaR value in order. The WACC is commonly referred to as the firm’s cost of capital. Python enforces indentation as part of the syntax. V alue at risk (VaR) is a measure of market risk used in the finance, banking and insurance industries. The predicted output will be the normal distribution which is WGAN-GP returns. Here is my shot at doing Historical Simulation to find the Value at Risk of your portfolio. Anybody can do Value at Risk: A Teaching Study using Parametric Computation and Monte Carlo Simulation Abstract The three main Value at Risk (VaR) methodologies are historical, parametric and Monte Carlo Simulation. In the following examples, input and output are distinguished by the presence or absence of prompts (>>> and …): to repeat the example, you must type everything after the prompt, when the prompt appears; lines that do not begin with a prompt are output from the interpreter. PYTHON TOOLS FOR BACKTESTING • NumPy/SciPy - Provide vectorised operations, optimisation and linear algebra routines all needed for certain trading strategies. The set with a smaller standard deviation has individual returns that are closer to the average return. Please check your connection and try running the trinket again. Finally, we can generate values for our price list. You can use this T-Value Calculator to calculate the Student's t-value based on the significance level and the degrees of freedom in the standard deviation. retrieve financial time-series from free online sources (Yahoo), format the data by filling missing observations and aligning them, calculate some simple indicators such as rolling moving averages and; visualise the final time-series. There are three primary ways to. What is Mean in Python? Mean is simply another name for average. Despite the current challenges in applying the model, companies that have been exposed to cyber value-at-risk express enthusiasm for it. Calculating Value at Risk (VAR) VAR calculates the expected maximum loss of a portfolio as a result of a adverse change in the risk factors ( e. Conditional Value Risk Calculator – Background. Import the necessary libraries. CVA is calculated as the difference between the risk free value and the true risk-adjusted value. VaR and expected shortfall. Exact value requires an infinite series, but this is pretty accurate - and is more accurate for angles near 0 than elsewhere, than compared to the product or cortran algorithms outlined below. There are three primary ways to. The limitations of traditional mean-VaR are all related to the use of a symetrical distribution function. Since that time, the use of Value-at-Risk has exploded. 
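For the time-series housekeeping steps listed above (fill missing observations, align, compute rolling moving averages), a minimal pandas sketch, assuming the data have already been saved to a CSV with a date index and a Close column:

```python
import pandas as pd

prices = pd.read_csv("ibm.csv", index_col=0, parse_dates=True)["Close"]

# Reindex to business days, forward-fill gaps, then compute rolling averages.
prices = prices.asfreq("B").ffill()
sma_short = prices.rolling(window=20).mean()
sma_long = prices.rolling(window=100).mean()

summary = pd.DataFrame({"close": prices, "sma20": sma_short, "sma100": sma_long})
print(summary.tail())
```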
Value-at-risk (VaR) is the risk measure that estimates the maximum potential loss of a risk exposure for a given confidence level and time period; the measure was actually in use by actuaries long before it was reinvented for investment banking. As a model, VaR predicts the loss that an investment portfolio may experience over a period of time, and the delta-normal method is one of the standard ways of computing it. The independent t-test is used to compare the means of a condition between two groups.
|
# WIT2012 Workshop on Intelligent Trackers
3-5 May 2012
INFN Pisa
Europe/Paris timezone
# The ultra low mass cooling system of the Belle II DEPFET detector
Presented by Dr. Carlos MARINAS PARDO on 3 May 2012 from 20:00 to 21:00
Session: Posters
## Content
The new e$^{+}$e$^{-}$ colliders impose unprecedented demands on the performance of the vertex detectors. To achieve the required resolution in the vertex reconstruction, besides highly segmented pixel detectors, the material budget has to be kept at very low levels to reduce the multiple Coulomb scattering. These requirements are even more challenging in the case of the new Japanese Super Flavour Factory (SuperKEKB), where the very low momentum of the particles in the final state requires a vertex detector with less than 0.2%~X$_{0}$ per layer, together with 50x50~$\mu$m$^{2}$ pixels, to achieve the targeted resolution of 8.5~$\mu$m. As a consequence, there is an obvious impact on the cooling system, which has to be carefully designed, with no active cooling pipes allowed inside the acceptance region. Due to the low power dissipation of the DEPFET sensor and the special geometry of the detectors (with the front-end electronics placed at both ends of the ladder), the system can be chilled using 2-phase CO$_{2}$ cooling through the massive support structures outside of the acceptance, while the sensitive area relies on forced convection with cold dry air. In the talk, not only full thermal simulations will be presented but also measurements done with a real mock-up, showing that proper cooling of the vertex detector can be achieved using this approach.
## Place
Location: INFN Pisa
Address: Largo Bruno Pontecorvo 3 56127 Pisa Italy
Room:
|
# Reactive force on two objects with central forces
I have a confusion with reactive forces, in how they act, and I hope to highlight my confusion with some (very) poorly drawn graphics by me.
From my understanding (likely false), a Newton's Third Law reactive force is the force vector equal in magnitude and opposite in direction to an applied force. In the case of an object undergoing circular motion, the pivot feels an outward force from the orbiting body. If I fire a gun, the force of the shot pushes me back (hence, kick). Cannons have wheels for a reason. My understanding is shown in a graphic I've butchered: a person punching someone's face, with the respective forces on each as a result. Now, I'm not actually sure if $F_{face}$ is the same as the normal force, but the forces should be balanced, as once contact is made the fist stops moving and experiences shock (in the same way punching a skull can break your hand). The normal force, or just $F_{face}$, is a reactive force from the face upon the fist as a result of the punch itself. That's my view, at least.
This understanding of reactive forces hurts me now, when I consider the following scenario: two objects feeling central forces as a result of each other (let's let the central force be gravity).
In my second bad graphic, I have $A$ and $B$ both exerting forces on each other. $A$ feels force $F_{ab}$ (Force on $A$ as a result of $B$) while $B$ feels $F_{ba}$ (Force on $B$ as a result of $A$). Let $F_{ab} = -F_{ba}$.
The forces with "R" in front are reactive forces - and I've constructed them out of misguided necessity that I cannot refute rigorously. If $A$ pulls on $B$, then by Newton's Third Law, $B$ must pull on $A$, just as a face strikes a fist as a fist strikes a face. This assertion explains force $RF_{ba}$ on $A$, and the same faulty logic applies to $B$ and $RF_{ab}$. Can someone highlight the key misunderstandings that are causing me to have this odd train of thought?
• The forces $RF_{ba}$ and $RF_{ab}$ simply don't exist, you've made them up. $F_{ab}$ is equal and opposite $F_{ba}$ which satisfies Newton's third law. Sep 21 '17 at 23:39
• -1. No research effort. This site has many similar questions about Newton's 3rd Law. Have you read any of them? Sep 22 '17 at 23:31
The force $F_{ba}$ is the reactive force corresponding to $F_{ab}$ and vice versa. The forces you labeled with $R$ do not exist. It is perfectly normal for the forces on a single object to not balance. For example, the forces on $A$ and $B$ could be due to gravity. Both objects feel an unbalanced attractive force and begin to fall towards each other.
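Concretely, writing the gravitational case out as an illustration (standard Newtonian gravity, not tied to the diagrams above): $$\vec F_{ab} = \frac{G m_a m_b}{r^2}\,\hat r_{a\to b}, \qquad \vec F_{ba} = \frac{G m_a m_b}{r^2}\,\hat r_{b\to a} = -\vec F_{ab},$$ where $\hat r_{a\to b}$ is the unit vector pointing from $A$ toward $B$. The force on $A$ and the force on $B$ are already equal and opposite, so they are each other's third-law partners and no additional $RF$ forces are needed.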
• Interesting. So, from what I gather from what you said in the first paragraph, even if the forces are due to gravity, where each object will exert a force on one another, each individual force is the reactive force of the other.. So, $F_{ba}$ is the reactive force of $F_{ab}$ even though $F_{ba}$ is the force $A$ exerts on $B$ due to gravity, and also the reactive force of $F_{ab}$, which is the reactive force of $F_{ba}$. I feel like there's some kind of never ending loop of consequence there. Sep 21 '17 at 23:59
• My view states that for each force $F$ there is a reactive force. There are two unique forces at play here, and as such need a reactive force to pair with them.. Sep 22 '17 at 0:00
• @sangstar I think what leads you astray is thinking that a force causes the reactive force, which would lead you to think that the reactive force appears after the original force. This is not so. Forces always occur in pairs. $F_{ab}$ and $F_{ba}$ arise simultaneously, no matter what the nature of the forces happen to be. Neither force precedes the other. Sep 22 '17 at 1:18
• But in the case of two objects feeling a gravitational force, which do you consider the reactive force if they each feel their own force? I would understand that if one object ONLY, like $A$, exerts a force on $B$ like in my graphic then $A$ would feel a reactive force, but they both have a force on each other independent of each other... if that makes sense. I understand from what you said though that this explains the case of someone punching a face. The fist feels the reactive normal force, in that case. Sep 22 '17 at 11:38
|
The only numbers that show up in both of these lists are $$36 = 6^2 = \frac{8 \cdot 9}{2}, \qquad 1 = 1^2 = \frac{1 \cdot 2}{2}.$$
Are there any beyond this?
Well, $2x^2=y^2+y$. How do I go on from this? What do I need to use?
What are the numbers where the values you substitute in are equal, and their outcomes are also equal?
Let's say:
What if $x=y$?
$$x^2=\frac{x(x+1)}{2}$$
$2x^2=x(x+1)$
$2x^2=x^2+x$
$x^2-x=0$
$x(x-1)=0$
$x$ or $(x-1)$ has to be $0$.
Therefore, $x = 0.5\pm 0.5$
Second question:
But is $0$ technically a square or triangle number?
asked May 5 '17 at 15:52 by VortexYT
From your equation you get $$8x^2=4y^2+4y$$ or $$8x^2+1=(2y+1)^2.$$ You need to solve $$z^2-8x^2=1$$ in positive integers ($z$ will automatically be odd, so $y=\frac{1}{2}(z-1)$ will be an integer). This is a form of Pell's equation. Its solutions are $(x_n,z_n)$ where $$z_n+2\sqrt 2\, x_n=(3+2\sqrt 2)^n.$$ So $x_1=1$, $z_1=3$, giving $1$ as square-triangular. Then $x_2=6$, $z_2=17$, giving $36$ as square-triangular. Then $x_3=35$, $z_3=99$, giving $1225$ as square-triangular, etc.
answered May 5 '17 at 16:00 by Angina Seng (edited May 6 '17 at 16:35 by Will Jagy)
Following Shark, here is how to solve the Pell equation $z^2 - 8 x^2 = 1$ by hand, although one can quickly guess the first solution as $9-8=1$:
Method described by Prof. Lubin at Continued fraction of $\sqrt{67} - 4$
$$\sqrt 8 = 2 + \frac{\sqrt 8 - 2}{1}$$ $$\frac{1}{\sqrt 8 - 2} = \frac{\sqrt 8 + 2}{4} = 1 + \frac{\sqrt 8 - 2}{4}$$ $$\frac{4}{\sqrt 8 - 2} = \frac{\sqrt 8 + 2}{1} = 4 + \frac{\sqrt 8 - 2}{1}$$
Simple continued fraction tableau: $$\begin{array}{cccccccccc} & & 2 & & 1 & & 4 & \\ \\ \frac{0}{1} & \frac{1}{0} & & \frac{2}{1} & & \frac{3}{1} \\ \\ & 1 & & -4 & & 1 \end{array}$$
$$\begin{array}{cccc} \frac{1}{0} & 1^2 - 8 \cdot 0^2 = 1 & \mbox{digit} & 2 \\ \frac{2}{1} & 2^2 - 8 \cdot 1^2 = -4 & \mbox{digit} & 1 \\ \frac{3}{1} & 3^2 - 8 \cdot 1^2 = 1 & \mbox{digit} & 4 \\ \end{array}$$
Anyway, given a solution $(z,x)$ in positive integers to $z^2 - 8 x^2 = 1,$ we get the next one in an infinite sequence by $$(z,x) \mapsto (3z + 8x, z + 3x),$$ so $$(1,0),$$ $$(3,1),$$ $$(17,6),$$ $$(99,35),$$ $$(577, 204),$$ $$(3363, 1189),$$ and so on. By Cayley-Hamilton, the coordinates $z_n, x_n$ obey $$z_{n+2} = 6 z_{n+1} - z_n,$$ $$x_{n+2} = 6 x_{n+1} - x_n.$$
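A small script to generate the corresponding square-triangular numbers from these recurrences (Python is used here purely for illustration; the first few outputs reproduce 1, 36, 1225, ...):

```python
# Generate square-triangular numbers from z^2 - 8x^2 = 1 using
# z_{n+2} = 6 z_{n+1} - z_n and x_{n+2} = 6 x_{n+1} - x_n.
z, x = 1, 0            # trivial solution
z_next, x_next = 3, 1
for _ in range(6):
    z, x, z_next, x_next = z_next, x_next, 6 * z_next - z, 6 * x_next - x
    y = (z - 1) // 2                     # triangular index
    assert x * x == y * (y + 1) // 2     # x^2 is the y-th triangular number
    print(x * x)
```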
|
# Probability that 2 nodes are connected in a random, large network
Given a large random network with a degree sequence $\{d_i\} = \{d_1, d_2, d_3,\dots,d_N\}$ with $N \gg 1$ ($d_i$ is the degree of node $i$). The number of links of this network therefore is $L = \frac{1}{2}\sum_{i = 1}^N d_i$. Show: The probability $p_{ij}$ that 2 nodes $i$, $j$ are connected is $$p_{ij} = \frac{d_i d_j}{2L}.$$
I think with the right argumentation, this example should be rather easy, but even though I have tried to come up with one for quite some time, I am still left with nothing. Can someone help me please?
• Is this true? If, say, $d_i=N-1$ then $p_{ij}=1$ for all choices of $j\neq i$, but that doesn't seem to be consistent with your formula. Am I misunderstanding? Or was your formula intended to be an approximation of some sort? – lulu Nov 22 '17 at 11:59
• Yeah should be, it's an example given to us and yes I think it's intended to be an approximation. – iqopi Nov 22 '17 at 12:06
• Well, do you agree that my example is a counterexample to the result? Granted, in a large random graph it is improbable that a node has valence $N-1$ but...well, then your quantifiers are confusing. Do you fix a degree sequence and then look at a random graph with that sequence? And, if so, what sort of approximation is intended? – lulu Nov 22 '17 at 12:09
• I think the result of your counterexample is $(N-1)/N \approx 1$ for $N>>1$, so i think this is in the range of the approximation. I don't really know what kind of approximation it should be, since it isn't given in the example. I think anything remotely correct would be enought for this example. – iqopi Nov 22 '17 at 12:12
• Well...that calculation doesn't look right. $2L$ should be about $\frac {N^2}2$, surely. In any case, I think I don't understand the question. Perhaps I am missing something. I suggest you edit your post to, at least, indicate that you are looking for an approximate answer and, if possible, to say what sort of approximation you seek. – lulu Nov 22 '17 at 12:18
The version of the statement that's exactly true is the following:
In a random multigraph with degree sequence $(d_1, d_2, \dots, d_n)$, the expected number of edges between vertices $i$ and $j$ is $$\frac{d_i d_j}{2L-1}.$$
Here, we pick a random multigraph according to the configuration model. That is, initially there are $n$ isolated vertices and the $i^{\text{th}}$ vertex has $d_i$ half-edges out of it. We pick a uniformly random perfect matching between the half-edges, and connect matched half-edges together into an edge.
Then there are $d_id_j$ different ways to choose a half-edge out of vertex $i$ and a half-edge out of vertex $j$. The probability that they are joined together in the matching is $\frac1{2L-1}$: there are $2L$ half-edges total, so a half-edge has $2L-1$ others to be joined to, which are chosen uniformly.
Presumably, we want the approximation to hold with high probability as $n \to \infty$, and in the random graph rather than the random multigraph. (You have to be careful about what "$n \to \infty$" means here and what that does to the degree sequence, but we can generally work that out.)
Then, we need to show two things:
1. The expected number of edges between $i$ and $j$ is asymptotically equal to the probability that there is an edge (in the configuration model).
2. Having an edge between $i$ and $j$ does not significantly impact the probability that the multigraph is simple.
The first statement should hold provided that $\frac{d_i d_j}{2L} \to 0$ as $n \to \infty$. But if we further have $d_i d_j \ll L^{1/2}$, then it has an easy proof; the probability that there are two or more edges is asymptotically at most $\frac{(d_i d_j)^2}{(2L)^2}$, there are always at most $d_i d_j$ edges (actually, at most $\max\{d_i, d_j\}$), so the multi-edge case contributes at most $\frac{(d_i d_j)^3}{(2L)^2} \ll \frac{d_i d_j}{2L}$ to the expectation.
(If $d_id_j$ is larger than that, we can probably do a Poisson approximation.)
The second statement should always hold if $\Delta = \max\{d_1, d_2, \dots, d_n\}$ is sufficiently small: say, if $\Delta \le n^{1/6}$. In that case, the probability that the multigraph is simple is asymptotically $e^{-\gamma(\gamma+1)}$, where $$\gamma = \frac{\sum_i d_i (d_i - 1)}{2 \sum_i d_i},$$ and this does not significantly change if we condition on one edge being present (essentially, reducing both $d_i$ and $d_j$ by $1$).
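As a numerical sanity check on the $\frac{d_i d_j}{2L-1}$ statement, here is a small simulation sketch of the configuration model; the degree sequence, the node pair and the trial count are arbitrary choices for illustration.

```python
import random

def expected_multiplicity(degrees, i, j, trials=100_000):
    """Average number of i-j edges in a random configuration-model multigraph."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    total = 0
    for _ in range(trials):
        random.shuffle(stubs)
        # Pair consecutive stubs into edges and count those joining i and j.
        pairs = zip(stubs[::2], stubs[1::2])
        total += sum(1 for a, b in pairs if {a, b} == {i, j})
    return total / trials

degrees = [3, 2, 2, 1, 4, 2]                  # sum = 14, so 2L - 1 = 13
print(expected_multiplicity(degrees, 0, 4))   # simulation estimate
print(degrees[0] * degrees[4] / 13)           # theory: d_i d_j / (2L - 1)
```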
|
# Always Be Yourself Unless You Can Be A Unicorn Gifts
Always be yourself unless you can.
Always be a yourself, unless you can be a unicorn
Always be yourself, unless you can be a unicorn
Always be yourself, unless you can be a unicorn
New
jaguar
Always Be Yourself Unless You Can Be A Unicorn
Always Be Yourself - Unless You Can Be A Unicorn..
Always be a yourself, unless you can be a unicorn
Always be yourself unless you can be a unicorn
New
always be yourself.
Always be a yourself, unless you can be a unicorn
Always be a yourself, unless you can be a unicorn
Always Be Yourself - Unless You Can Be A Unicorn..
Always Be Yourself - Unless You Can Be A Unicorn
New
Unicorn Always Be Yourself
Always Be Yourself Unless You Can Be A Unicorn
Always be yourself, unless you can be a unicorn
Always be yourself unless you can be a unicorn
Always be yourself unless you can be a unicorn
New
Always be yourself, unless you can be a unicorn
Always be yourself, unless you can be a unicorn
Always Be Yourself Unless You Can Be A Unicorn Th
Always be yourself unless you can be a unicorn
New
Panda polygon
Always be yourself, unless you can be a unicorn
alway be yourself unless you can be punk
Always Be Yourself Unless You Can Be A Koala
Always Be Yourself Unless You Can Be A Sloth
New
Be always yourself, unless you can .. - Gift
Always Be Yourself Unless You Can Be A Shark
Be always yourself, unless you can be me ...
Always Be Yourself Unless You Can Be An Armadillo
Always be yourself. Unless you can be a Viking
New
Foxy - Fox - Be always yourself - Self-conscious
Always Be Yourself Unless You Can Be a Giraffe
Always Be Yourself Unless You Can Be A Flamingo
Always Be Yourself Unless You Can Be A Red Panda
Always be yourself unless you can be a dolphin
New
Always Be Yourelf Unless You Can Be A Monkey
Always Be Yourself Unless You Can Be A Flamingo
Always be yourself, unless you can be a viking
Always be yourself unless you can be a mermaid
Always Be Yourself Unless You Can Be A Pirate
New
First Love Yourself Gift Idea Love
Always Be Yourself Unless You Can Be A Dragon Gift
Always be yourself unless you can be a narwhal
Always be yourself unless you can be a panda
Always Be Yourself Unless You Can Be A Narwhal
New
Always Be Yourself Unless You Can Be A Penguin
Always be yourself, unless you can be a viking
New
Always Be Yourself Unless You Can Be A Penguin
Always Be Yourself Unless You Can Be A Shark
Always Be Yourself Unless You Can Be A Squirrel
New
lamabealama
New
Be always yourself, unless you can .. - Gift
Be always yourself, unless you can be me ...
Always Be Yourself Unless You Can Be A Parrot
Always Be Yourself Unless You Can Be A Leprechaun
New
Mermaid "Always be yourself"
Always be yourself unless you can be a Dragon
Always be yourself unless you can be a Frog
Always Be Yourself Unless You Can Be An Owl Funny
Always be yourself. Unless you can be a viking
Always Be Yourself Unless You Can Be An Otter
Always be yourself. Unless you’re a unicorn.
Unless you can be in Unicorn
Be yourself, unless you can be a fox
Always be youself unless you can be a unicorn!
Always be youself unless you can be a unicorn!
Always be youself unless you can be a unicorn!
Always be youself unless you can be a unicorn!
|
# centroid.owin
##### Centroid of a window
Computes the centroid (centre of mass) of a window
Keywords
spatial, math
##### Usage
centroid.owin(w, as.ppp = FALSE)
##### Arguments
w: A window
as.ppp: Logical flag indicating whether to return the centroid as a point pattern (ppp object)
##### Details
The centroid of the window w is computed. The centroid ("centre of mass") is the point whose $x$ and $y$ coordinates are the mean values of the $x$ and $y$ coordinates of all points in the window.
The argument w should be a window (an object of class "owin", see owin.object for details) or can be given in any format acceptable to as.owin().
The calculation uses an exact analytic formula for the case of polygonal windows.
Note that the centroid of a window is not necessarily inside the window, unless the window is convex. If as.ppp=TRUE and the centroid of w lies outside w, then the window of the returned point pattern will be a rectangle containing the original window (using as.rectangle).
##### Value
Either a list with components x, y, or a point pattern (of class ppp) consisting of a single point, giving the coordinates of the centroid of the window w.
##### Aliases
• centroid.owin
##### Examples
w <- owin(c(0,1),c(0,1))
centroid.owin(w)
# returns 0.5, 0.5
data(demopat)
w <- Window(demopat)
# an irregular window
cent <- centroid.owin(w, as.ppp = TRUE)
## Not run:
plot(cent)
# plot the window and its centroid
## End(Not run)
|
Is it just me, or is the entire IB Documents page down? Is it because of the current exam season, or because the IB is coming after it with copyright claims?
New comments cannot be posted and votes cannot be cast.
I remember that at some point every year the IB Documents Team gets taken down by the IB. Does anyone remember roughly when that usually happens? I'm planning to save some of the past papers before the site disappears, but I'm pretty busy at the moment.
By voluntarily and temporarily disabling the website, the team makes it harder for the IB to file for a permanent takedown. Candidate results can now be accessed on https://candidates.ibo.org, and for past papers you can use the Telegram link on the ibdocuments.com page, or the mirror below:
magnet:?xt=urn:btih:e40be53f13561821e4f25b57edeb3d3c27f0970e&dn=IBDocuments_Nov_2019&tr=udp%3a%2f%2ftracker3.itzmx.com%3a6961%2fannounce
INTERNATIONAL BACCALAUREATE DOCUMENTS FOLDER (New Links/Mirror) Resources [Removed by reddit in response to a copyright notice.]
IB Documents Server Is Back For Good. I'm the new server manager.
Note that the subreddit is not run by the International Baccalaureate. It encourages questions, constructive feedback, and the sharing of knowledge and resources among IB students, alumni, and teachers.
|
# Shutdown-Script for Don't Starve Together Dedicated Server
I have written my first Lua script/function to shut down the dedicated server for Don't Starve Together.
I wanted to shut down the server in 2 minutes and inform everyone on the server that the server is shutting down.
The function os.execute is nil in the dedicated server console, so I can't use os.execute("sleep " .. seconds).
Can I improve something on this script? Is it best practice?
function sleep(seconds)
local t = os.time()
local diff = 0
while diff < seconds do
diff = os.difftime(os.time(), t)
end
end
local steps = {120,60,30,10,5,4,3,2,1}
local n = table.getn(steps)
for i=1,n-1,1 do
TheNet:SystemMessage("The server shuts down in " .. steps[i] .. " seconds")
sleep(steps[i] - steps[i+1])
end
TheNet:SystemMessage("The server shuts down in " .. steps[n] .. " seconds")
sleep(steps[n])
TheNet:SystemMessage("The server is shutting down now")
c_shutdown(true)
Link to the server command list for Don't Starve.
• TheNet:SystemMessage prints the message into the chat
• c_shutdown(true) shutdown the server and persist the world
• I think it is correct. Some nit suggestions: 1) in the for loop the default step is +1, so it's not necessary to type it ('for i=1,n-1 do' works too); 2) instead of 'table.getn(steps)' you could simply write '#steps'. Dec 17 '17 at 14:20
1. table.getn has been deprecated for more than 2 years now.
2. Why iterate only until n-1?
3. I'd prefer using the string.format over plain concatenation. This is just a personal preference, as the strings are easier to read that way.
4. Although you do not need to pass true to the c_shutdown, do mention in a comment what it does. Maintaining comments is a nice way to recognise engine specific hard-bound parameters; without having to refer the docs again.
5. Iterate over the table using ipairs
Rewritten, the code would be:
local function sleep(seconds)
local diff, t = 0, os.time()
while diff < seconds do
diff = os.difftime(os.time(), t)
end
end
local steps = {
120, 60, 30,
10, 5, 4, 3,
2, 1
}
local notice = "The server shuts down in %d seconds"
for _, secs in ipairs(steps) do
TheNet:SystemMessage(notice:format(secs))
sleep(secs)
end
TheNet:SystemMessage("The server is shutting down now")
c_shutdown(true) -- true parameter implies that the game will be saved
• 1. ok 2. only until n-1, because: steps[i+1] 3. cool thing 4. ok 5. this ipairs strategy is not so clear for me. the _ stands for the key and this isn't used. Isn't there anything like foreach? Dec 21 '17 at 7:50
• @Shinigami ipairs is the same as foreach. I am not using _ because we are not interested in the index of each value. If you want, you can replace secs with steps[_]. foreach loops, AFAIK, also iterate over key => value pairs, and both values are returned in the generators. Dec 21 '17 at 9:47
function sleep(seconds)
local t = os.time()
local diff = 0
while diff < seconds do
diff = os.difftime(os.time(), t)
end
end
Correct me, but as far as I can see, that's a busy sleep, which might also be called "let's see how fast the CPU can go" sleep.
You need a "native" sleep function, something that is being woken up by the OS as to not make the CPU spin, or any other sort of callback after a certain amount of time.
local t = os.time()
Do not shorten variable names just because you can, it makes the code harder to read and harder to maintain.
local steps = {120,60,30,10,5,4,3,2,1}
You could also sleep for half the amount of time, like this:
local timeUntilShutdownInSeconds = 120;
while timeUntilShutdownInSeconds >= 1 do
-- Do Something
timeUntilShutdownInSeconds = timeUntilShutdownInSeconds / 2
sleep(timeUntilShutdownInSeconds)
end
|
I'm no Keynes, but Germany seems to have illogical economic goals
1. Nov 18, 2005
wasteofo2
http://news.bbc.co.uk/2/hi/europe/4449662.stm
I'm a teenager. There are obviously many things I don't know. Perhaps there are some principles involved with this that I'm not aware of.
However, this seems rather opposed to almost everything I've heard about economics.
Obviously, the two main ways to stimulate the economy are to increase govt. spending, or to cut taxes so as to increase private spending. Usually, as in the Great Depression, deficit spending has been preferred, since an already sluggish economy wouldn't fare well with higher taxes to fund the public spending. Spending cuts are usually proposed so that taxes can be lowered to encourage increased private spending.
So to me, the fact that the Germans want to hike taxes and cut spending would seem to indicate that both private and public expenditures will drop, and I don't see how this could be construed as a means of revitalizing an economy.
I suppose that if the German government gets its deficit under control, interest rates can be allowed to fall, and firms might be more inclined to invest in Germany, since its government has itself under control. Is that the angle they're working? Because if so, it would seem that this would really be a process that would take many years for any noticeable change to take place.
Could someone more versed in economics explain if there's something I'm missing here, because it seems to me that the German government is just out of touch with basic economic principles.
2. Nov 18, 2005
kleinwolf
Right, to make economy, there is an apparently historical method that works well : just let everybody do what they want (how do you want to make the other way anyhow ? , there are crimes everyday I heard...) and raise military power on top...this is well working....
3. Nov 18, 2005
Art
This is the approach Clinton took in the US and it worked well. However I believe the key to his success was that less borrowing by gov't led to lower interest rates which meant a net income gain to the vast majority of the population resulting in higher spending and thus 8,000,000 more jobs.
Germany is more complicated as they do not have control over their interest rates as these are set by the european central bank. I don't know a lot about the details of Germany's economy but even without interest rate declines reducing the deficit allows for reducing the allocation for debt servicing which frees up money for investment. If tightened too much it can lead to a downward spiral and deeper recession so it is a delicate balancing act made more so by not having the cushion of increased income from lower interest rates to cushion the impact.
Supply-side economics, whereby gov'ts reduce tax and try to spend their way out of recession, has a dismal track record, although that is what the US has done first under Bush snr and now under Bush jnr. Britain's attempts at following this model some years back resulted in them having to ask the IMF to bail them out.
Last edited: Nov 18, 2005
4. Nov 18, 2005
My own thoughts are that if you finance your current gov't expenditure with long-term debt you are heading for a major fall, e.g. the costs of the war in Iraq ($00 billions so far) are being financed entirely by borrowings. Borrowing for investment is fine (provided the investments are sound) but using your credit card to feed the electricity meter is a dangerous path to take.
Last edited: Nov 18, 2005
11. Nov 18, 2005
Art
Yes, I just had a quick look at your balance of trade statement for 2005 and it looks pretty good. Luckily for you the gov't is not following the supply-side economists' advice. Their economic policy sounds very Clintonesque. http://www.dfait-maeci.gc.ca/eet/pdf/SOT-2005-English.pdf
Last edited: Nov 18, 2005
12. Nov 18, 2005
loseyourname
Staff Emeritus
As far as I know, it isn't. Supply-side economics simply advocates that the revenue streams of suppliers of goods and services be increased (by not taking quite as much off the top), thereby allowing them to create more jobs and better products and services, which in theory should end up benefitting everybody. If we look at the government as being a supplier of goods and services, then cutting their own revenue stream does not constitute a supply-side approach. What they should be doing is handing the ball off, allowing private firms to provide goods and services that the government was previously providing. The only reason public spending increased under Reagan and the Bushes is because all of them also carried out wars which required massive military buildups. The only jobs being created by the government are for soldiers, arms dealers, and contractors that work to rebuild the countries that we occupy. You shouldn't mistake the fiscal stupidity of politicians for real economic theory.
13. Nov 18, 2005
Art
That's why I added 'in practice' to my mail above. Gov'ts which follow half an economic plan and half a political plan are unlikely to succeed well on either count. However, besides the numerous jobs created directly by gov't expenditure on defence, there are also a huge number of secondary jobs funded by the gov't, for instance in supplying health care under the gov't health insurance plans, and of course in the businesses of those people supplying gov't agencies; Halliburton springs to mind. As you say, ideally (in theory) this expenditure and hence supply should be farmed out to private companies, but there are two problems with that. Firstly, the 'good' problem is that private companies are motivated by profit only, and so in areas such as health the most vulnerable in society get trampled on. Secondly, the 'not so good' problem is that politicians like to be able to target large amounts of expenditure, as it can be very handy when you are looking to win votes in a key marginal.
14. Nov 18, 2005
Smurf
*shrug* if you say so. They're not listening to traditionalists either. Like I said, their budgets are designed by lawyers. We'll see what happens with the New Budget (assuming it gets through).
15. Nov 18, 2005
Art
I'm not endorsing their policies as I don't know anything about them other than what I gleaned from the article I read, but balanced budgets and positive trade balances sound good. Do you not agree?
16. Nov 18, 2005
Smurf
We have a balanced budget? :surprised When did that happen?
17. Nov 19, 2005
wasteofo2
That's false. Non-defense spending fluctuated under Reagan; some years it dropped, some years it rose. At the end of his term, nondefense discretionary spending had risen by $32 billion.
However, with Bush, non-defense spending has been constantly rising. If you'd care to remember, during his first term, Bush didn't veto a single bill, so anything congress wanted to spend money on, he was alright with. During 2000, Clinton's last year as President, the nondefense discretionary spending was $319 Billion. In 2003, Bush and the new Republican majority had hiked it up to$420 Billion.
This chart is entitled Outlays For Discretionary Spending Programs: 1962-2009, check it out. At the bottom there's a sumation of all the nondefense discretionary spending that took place in any one year. http://www.whitehouse.gov/omb/budget/fy2005/sheets/hist08z7.xls
Last edited: Nov 19, 2005
18. Nov 19, 2005
Art
Have a look at the link I posted. I can't speak to it's veracity but the article says they work on balanced budgets.
19. Nov 19, 2005
Smurf
Yeah, they work towards a balanced budget alright. :rofl: :rofl: They balance it by dumping the surplus off the side of the road in Quebec. :rofl: :rofl:
20. Nov 19, 2005
loseyourname
Staff Emeritus
Okay, so I really haven't paid much attention to what money is being spent on by the government. Sorry for making a false statement. I guess I could have built an even stronger case that neither of these guys is actually a supply-sider had I done some actual research.
|
Publication
Title
Quantification of crystalline and amorphous content in porous $TiO_{2}$ samples from electron energy loss spectroscopy
Author
Abstract
We present an efficient method for the quantification of crystalline versus amorphous phase content in mesoporous materials, making use of electron energy loss spectroscopy. The method is based on fitting a superposition of core-loss edges using the maximum likelihood method with measured reference spectra. We apply the method to mesoporous TiO2 samples. We show that the absolute amount of the crystalline phase can be determined with an accuracy below 5%. The method also takes the amorphous phase into account, whereas standard X-ray diffraction is only quantitative for crystalline phases and not for the amorphous phase. (c) 2006 Elsevier B.V. All rights reserved.
Language
English
Source (journal)
Ultramicroscopy. - Amsterdam
Publication
Amsterdam : 2006
ISSN
0304-3991
Volume/pages
106:7(2006), p. 630-635
ISI
000238479300011
Full text (Publisher's DOI)
Full text (publisher's version - intranet only)
UAntwerpen
External links
Web of Science
Record
Identification: Created 08.10.2008 · Last edited 20.07.2017
|
## Wednesday, February 10, 2016
### curved or straight?
Please list as many experiments as possible that would allow you to distinguish between curvature and dark energy with an equation of state of $w=-1/3$.
|
HOMER html output compared with homerMotifs.all.motifs are different
10 months ago
testtube ▴ 20
When executing HOMER for motif discovery (with options -fasta -noreopp -noknown), two of the main files it outputs are homerResults.html and homerMotifs.all.motifs, which describe the results. I was expecting the results described in these two files to be identical, but the file homerMotifs.all.motifs often has more motifs than the html file. I cannot figure out whether this is because a threshold is applied and the html is a "filtered" file. Some motifs that are present in the all.motifs file are absent from the html file despite having a better p-value and % of target ratio than others that are kept and shown. What explains this difference?
HOMER motif • 316 views
|
# Coset state of $3$-node graph isomorphism problem
The hidden subgroup representation of a $3$-node graph isomorphism problem is defined over the symmetric group, $G = S_6$. So, any hidden subgroup algorithm that wishes to solve the problem should start with constructing the state $|f\rangle$.
$$|f\rangle = \frac{1}{\sqrt{|G|}} \sum_{g \in G} |g\rangle |f\left(g\right)\rangle$$
I am trying to figure out an efficient way to construct the state $\frac{1}{\sqrt{|G|}} \sum_{g \in G} |g\rangle$. For a $3$-node graph isomorphism problem, $|G| = 720$. This means that if we map each permutation of $G$ to a unique basis state, we need $720$ basis states, so $|g\rangle$ has to be at least a $10$-qubit register.
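For concreteness, here is a small sketch of the kind of indexing I have in mind: simply rank the permutations of $S_6$ lexicographically and encode the rank as a 10-qubit basis label. The function name below is my own, purely for illustration, not from any particular library:

```python
from itertools import permutations

# Rank the 720 permutations of S_6 lexicographically and give each one
# a 10-bit computational-basis label (illustrative only).
def permutation_basis_labels(n=6, n_qubits=10):
    labels = {}
    for rank, perm in enumerate(permutations(range(n))):
        labels[perm] = format(rank, f"0{n_qubits}b")
    return labels

labels = permutation_basis_labels()
print(len(labels))                 # 720 basis states used out of 2**10 = 1024
print(labels[(0, 1, 2, 3, 4, 5)])  # identity permutation -> '0000000000'
```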
My questions are:
1. Is there a more efficient way to construct the state with lesser number of qubits?
2. If we need at least 10 qubits, which $720$ of the $2^{10}$ states should I choose?
|
# Tag Info
5
Firstly, I presume this is not something you are going to use for protecting data in any kind of real life scenario, but are only asking out of curiosity. Secondly, just to get the terminology straight and avoid confusion, what gives an OTP cryptographic scheme information theoretic security is that it meets both of the following two criteria: The key ...
3
This is highly insecure. For instance, if you see the word guyk in the ciphertext, what could the corresponding plaintext word be? With your scheme (where each letter is enciphered by adding a number between 0..9 to it modulo 26), there are only 139 English words that could have led to it. (Those 139 possibilities are things like arse, blue, bore, both, ...
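As a rough illustration of how small that search space is, one could brute-force the per-letter shifts against a word list. This is a hedged sketch: the tiny word list is made up and the helper name is mine, not from the answer above.

```python
from itertools import product

def candidate_plaintexts(cipher_word, dictionary, max_shift=9):
    """Enumerate plaintexts that could encrypt to cipher_word when each letter
    is shifted forward by some key digit in 0..max_shift (mod 26)."""
    hits = set()
    for shifts in product(range(max_shift + 1), repeat=len(cipher_word)):
        plain = "".join(chr((ord(c) - ord('a') - k) % 26 + ord('a'))
                        for c, k in zip(cipher_word, shifts))
        if plain in dictionary:
            hits.add(plain)
    return hits

# toy dictionary just for illustration
words = {"arse", "blue", "bore", "both"}
print(candidate_plaintexts("guyk", words))
```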
1
The adversary $\mathcal A$ is an entity (think of a computer program) designed to participate in the experiment $\operatorname{PrivK}^{\text{eav}}_{A,\Pi}$. So the adversary produces two messages, then is given the encryption of one of them, and has to guess which one it was. Of course, you can give the adversary other "ciphertexts" too, but this wouldn't ...
1
One problem is that keyboard keys may or may not be uniform. Look at the F and J keys - they may have a little dot of plastic to identify them with your fingertips. That little dot may make them heavier or lighter than other keys, affecting how they shake up in a hat. Some keyboards, like Das Keyboard, were built with different spring actions for different ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
nLab barycenter
Barycenter of a simplex
Definition
If $\sigma = \{ v_0, \ldots, v_q\} \in K_q$, the set of $q$-simplices of a simplicial complex, $K$, then its barycentre, $b(\sigma)$, is the point
$b(\sigma) = \sum_{0\leq i \leq q}\frac{1}{q + 1} v_i \in |K|.$
For the use of barycenters in the barycentric subdivision, see classical triangulation or
Last revised on December 1, 2010 at 21:00:49. See the history of this page for a list of all contributions to it.
|
Environ Eng Res > Volume 24(4); 2019 > Article
Cai, Tong, Zhang, Zheng, He, Lin, Chen, and Xiao: Characteristics of long-range transported PM2.5 at a coastal city using the single particle aerosol mass spectrometry
### Abstract
Air pollution has attracted ever-increasing attention because of its substantial influence on air quality and human health. To better understand the characteristics of long-range transported pollution, the single particle chemical composition and size were investigated by the single particle aerosol mass spectrometry in Fuzhou, China from 17th to 22nd January, 2016. The results showed that the haze was mainly caused by the transport of cold air mass under higher wind speed (10 m·s−1) from the Yangtze River Delta region to Fuzhou. The number concentration elevated from 1,000 to 4,500 #·h−1, and the composition of mobile source and secondary aerosol increased from 24.3% to 30.9% and from 16.0% to 22.5%, respectively. Then, the haze was eliminated by the clean air mass from the sea as indicated by a sharp decrease of particle number concentration from 4,500 to 1,000 #·h−1. The composition of secondary aerosol and mobile sources decreased from 29.3% to 23.5% and from 30.9% to 23.1%, respectively. The particles with the size ranging from 0.5 to 1.5 μm were mainly in the accumulation mode. The stationary source, mobile source, and secondary aerosol contributed to over 70% of the potential sources. These results will help to understand the physical and chemical characteristics of long-range transported pollutants.
### 1. Introduction
Atmospheric particulate matter (PM), particularly the fine particulate matter (PM2.5), has become a serious problem around the world, because of its numerous adverse effects on individuals and universal climate [1]. The World Health Organization (WHO, http://www.who.int) reported in 2016 that PM led to 3,700,000 deaths per year, most of which died of respiratory diseases. PM2.5 is a mixture of minor particles and liquid droplets. It consists of metals, organic chemicals, acids (such as nitrates and sulfates), and dust particles. The smaller particles are more harmful to human health [2]. The particles in the accumulation mode (0.1–2.0 μm) have the maximum extinction coefficient and the longest residence time, so they can be transported over long distances [3]. Pollution types can mainly be divided into two categories: transport type and cumulative type. The transport pollution type is characterized by direct external transport [4, 5].
As noted in previous research, off-line monitoring is a widely-used method for PM mass analyses [6]. However, this method has been found to have deficiencies in many aspects, such as low resolutions, the sampling pollution and a poor efficiency [7]. The aerosol mass spectrometers (AMS), the aerosol time-of-flight mass spectrometry (ATOFMS) and the single particle aerosol mass spectrometer (SPAMS) with high temporal resolution have already been used to monitor aerosols on-line to acquire the physicochemical characteristics of PM [8]. In recent years, the SPAMS has been broadly used in aerosol research in Asia based on the laser ablation and ionization methods [7, 9]. It can record size and mass spectral information on single particle. Therefore, the SPAMS can be used to reveal the PM formation process from Aitken mode [10], and to predict quantitatively the PM chemical mixture state [8]. Besides, it could benefit the determination of the PM potential sources [11, 12]. The mixture state can be influenced by both emission sources and meteorological conditions, such as cloud condensation nuclei, hygroscopicity, and optical scatter and absorption [13]. The source apportionment is a major achievement of SPAMS [14]. Many studies using the SPAMS indicated high PM2.5 mass concentration in the Yangtze [14] and Pearl River Delta regions [7], the Sichuan basin [15], the Beijing-Tianjin-Hebei region [16] and Central China [9].
In recent years, the SPAMS has been used to characterize the haze pollution in China based on the particle number concentrations. However, the research on the mixture state, size distribution and evolution processes of long-range transported aerosol particles are still insufficient. The main purpose of this study is to further investigate the rapid variations of chemical compositions, size and sources of fine particle during the formation and dissipation processes of haze, which was caused by the long-range transported pollutants. Therefore, it could help to improve the understanding on haze evolution.
### 2.1. Sampling Site
Fuzhou is the capital as well as the political and economic center of Fujian Province, China. The annual average mass concentration of PM2.5 in Fuzhou in 2015 was 29.2 μg·m−3 (https://www.aqistudy.cn/historydata/daydata.php). It was below the Grade II National Ambient Air Quality Standard (35 μg·m−3) for the annual mean concentration of PM2.5. However, the daily average mass concentration of PM2.5 was high, up to 94.1 μg·m−3 on January 19th in 2016 during a haze episode. The monitoring site is situated on the rooftop of an office block in Fujian Environmental Monitoring Station (119.29 ºE, 26.11 ºN), which is situated in a typical governmental and residential area with large residential and traffic sources (Fig. S1). More detailed information of this site has been mentioned in a previous study [17].
### 2.2. SPAMS and Meteorological Data
A commercial SPAMS instrument (Hexin Analytical Instrument Co., Ltd., China), previously described by Li et al. [18], was used in this study. Particles with aerodynamic sizes of 0.2–2.0 μm are drawn from the ambient air into the vacuum system through a critical orifice (~100 μm) and then through an aerodynamic lens, which gradually focuses them onto the lens axis. Their sizes were determined by two photomultiplier tubes (PMTs) based on the particle speed and transit time. Sized particles (noted as ‘size’) are ionized by a pulsed Nd:YAG laser (266 nm, 1.0 mJ). A bipolar TOF-MS was used to detect and quantify the positive and negative ions (noted as ‘mass’). Note that the SPAMS has a low ionization efficiency for particles below 0.2 μm or above 2.5 μm.
A total of 1,314,931 particles (‘size’) and 169,687 of the particles (‘mass’) were collected by SPAMS during the haze episode in Fuzhou from 17th to 22nd January, 2016. PM2.5, sulfur dioxide (SO2) and nitrogen dioxide (NO2), were continuously monitored during 17th and 22nd January, 2016. Daily average PM2.5 mass concentration was calculated based on hourly average data. Meteorological data including air temperature (T), relative humidity (RH), visibility (Vis), and wind speed (WS)/wind direction (WD) was obtained from the website of the weather company (https://www.wunderground.com, an IBM Business, formerly WSI).
### 2.3. Analysis of Single Particle Data
A Matlab-based data analysis toolkit for SPAMS (COCO_P) was used to search and dispose of mass spectral features of particles. The particles were further categorized via neural network algorithm (ART-2a) based on the similarity of mass spectra with vigilance factor, learning rate, maximum iteration and range of mass spectra of 0.8, 0.05, 20 s, and 250, respectively [19]. The particles were manually clustered into six groups (95% of whole particles number), i.e. EC (element carbon) based on Cn± (n = 1,3,4,5 …), organic carbon (OC) based on high signal levels of 27, 43 (m/z) along with peaks near 50 (m/z) and peaks near 60 (m/z), K-rich based on high signal levels of 39 (m/z), heavy metals based on signal peak of Pb (206, 207, 208), Zn (64, 66, 68), Cu (63, 65), etc., dust based on signal peak of Ca+ (40) and SiO3 (−76), and secondary aerosol based on high signal levels of nitrate (−46, −62) and sulfate (−80, −97). The remaining unclassified particles were considered as others. All the particles should only be classified into one potential source.
### 2.4. Potential Source Contribution Function
The meteorological data used in the model on source determination were obtained from the NOAA FNL archives. A tracking time of 48 h was adopted in this study with hourly trajectories from 0:00 to 23:00 between 16th and 22nd January, 2016. The starting height of 500 m above the ground level (AGL) was used to lessen the effects of ground surface friction and to characterize the wind features in the lower boundary layer [20]. Trajectory clustering was performed with the geographic information system (GIS) based on the software of TrajStat [21]. Potential source contribution function (PSCF) methods were applied to study the potential source regions and the individual contributions to PM2.5 in Fuzhou in January, 2016. PSCF could reflect the proportion of pollution trajectory in a grid, and it is impossible to distinguish the contribution of the grid with the same PSCF value to the mass concentration of PM2.5 at the sampling site. The 48 h air mass back-trajectories arriving at the Fujian Environmental Monitoring Station were simulated using the NOAA HYSPLIT4 model based on the global data assimilation system (GDAS) data at 0.5°× 0.5° resolution. Back trajectories were generated hourly during the study period, and started at 500 m above ground level. PSCFij is defined as follows:
##### (1)
$PSCF_{ij} = m_{ij} / n_{ij}$
where nij is the total number of trajectory endpoints in the ijth grid cell and mij is the total number of trajectory endpoints in the same cell with the pollutant concentrations at the sampling site being higher than a criterion value. A weighting function was applied to reduce the PSCF values when the total number of the endpoints in a particular cell was less than about three times the average value of the end points per cell. Wij was defined as follows:
##### (2)
$W_{ij} = \begin{cases} 1.00, & 80 < n_{ij} \\ \cdots \end{cases}$
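Purely as an illustration (this is not the code used in this study, and the grid resolution, thresholds, and variable names below are assumptions), the PSCF of Eq. (1) with a simple weighting in the spirit of Eq. (2) could be computed from gridded trajectory endpoints roughly as follows:

```python
import numpy as np

# Illustrative PSCF sketch (not the code used in this study).
# end_lon/end_lat are trajectory-endpoint coordinates; `polluted` marks endpoints of
# trajectories whose receptor PM2.5 exceeded the chosen criterion value.
def pscf_grid(end_lon, end_lat, polluted, lon_edges, lat_edges):
    n_ij, _, _ = np.histogram2d(end_lon, end_lat, bins=[lon_edges, lat_edges])
    m_ij, _, _ = np.histogram2d(end_lon[polluted], end_lat[polluted],
                                bins=[lon_edges, lat_edges])
    pscf = np.where(n_ij > 0, m_ij / np.maximum(n_ij, 1), 0.0)  # Eq. (1): m_ij / n_ij
    # Eq. (2): down-weight sparsely sampled cells; these thresholds are placeholders,
    # not the values used in the paper.
    w = np.ones_like(n_ij)
    mean_n = n_ij[n_ij > 0].mean() if np.any(n_ij > 0) else 0.0
    w[n_ij < 3 * mean_n] = 0.7
    w[n_ij < 10] = 0.4
    return pscf * w

# Synthetic example on a 0.5 degree grid covering the study region:
rng = np.random.default_rng(0)
lon = rng.uniform(110, 125, 5000)
lat = rng.uniform(20, 35, 5000)
polluted = rng.random(5000) < 0.3
grid = pscf_grid(lon, lat, polluted, np.arange(110, 125.5, 0.5), np.arange(20, 35.5, 0.5))
```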
As shown in Fig. 1, the major potential source region (PSCF value > 0.4) contributing to the particles at the study site was mainly in the northwest, especially in the Yangtze River Delta (including Shanghai, Jiangsu, and Zhejiang), which was consistent with the results derived from the back-trajectory analyses (Fig. S2).
The Nested Air Quality Prediction Modeling System (NAQPMS) is a multi-scale air quality modeling system developed by the Institute of Atmospheric Physics, Chinese Academy of Sciences (IAP, CAS), and it aims to reproduce the transport and evolution of air pollutants. The system includes modules for real-time emission, dry and wet deposition, aerosol, gas-phase chemistry, and heterogeneous atmospheric chemical reactions [22]. The model uses nested domains covering East China with a 15 km × 15 km horizontal resolution and 180 grid cells in the latitudinal and longitudinal directions, respectively. The period from 16th to 22nd January was simulated with a 5-min time step in this study. The PM2.5 species included sulfate, nitrate, ammonium salt, black carbon (BC), and dust. The initial and boundary conditions were taken from the results of a global model (MOZARTv2.4), which was jointly developed by the National Center for Atmospheric Research (NCAR), the Max Planck Institute for Meteorology, Germany (MPI), the Geophysical Fluid Dynamics Laboratory (GFDL), and the National Oceanic and Atmospheric Administration (NOAA). Fig. 2 illustrates that the air masses mainly originated from the north. It also shows the daily average concentration of PM2.5 in East China and its transport fluxes. The surface PM2.5 concentration remained high in Shanghai in the Yangtze River Delta region under northerly winds, with levels over 75 μg·m−3. The pollutants in Shanghai were transported southward to Zhejiang province (Fig. 2 and Fig. S4). During this period (18th–20th January, 2016), the prevailing northerly winds in Shanghai and Zhejiang gradually brought PM2.5 southward to Fujian province along the coastline under a weak pressure system over the East China Sea (Fig. S3). Therefore, the pollution process could be identified as the transport type, and it could be inferred that the particulate matter was carried to Fuzhou from the north of the study site. This model could clearly show the locations of the pollution sources.
### 3.1. Overview of the Air Quality and Meteorological Parameters
The SPAMS results and meteorological parameters were presented in Fig. 3(a) showed that the T kept decreasing and the RH kept increasing during the formation (T1) and dissipating (T2) periods of aerosols, which created good conditions for a sharp increase of PM2.5 concentration [23]. Fig. 3(b) clearly showed that the WS increased quickly during T1 and then remained at a high level, while it rapidly decreased during later T2. Besides, the wind originated mainly from northwest of the sampling site during T1 and T2. Higher PM2.5 concentration was observed under relatively high WS Fig. 3(b) and 3(c), which differed from the results of Lu and Fang [24]. They found that the PM2.5 mass concentration was negatively correlated with WS during cumulative period. Long-range transported air pollutants might play a main part in the increase of PM2.5 concentration during the study period in Fuzhou, which was consistent with the previous research of Grange et al [25]. They discovered that the highest concentrations of PM2.5 were related with strong easterly winds during long-range transport, while elevated PM2.5 concentrations could also be observed under low WS. In other words, low WS could lead to the buildup of high local pollutant concentrations while strong ventilation with high WS could prevent the local build-up near the sources, but contribute to long-range transport of regional aerosol, especially under directionally persistent wind conditions [26].
The Vis and PM2.5 concentration presented inverse correlation (Fig. 3(c)), which was consistent with the previous research [27]. Low Vis (< 4 km) was observed when PM2.5 mass concentration was above 75 μg·m−3, indicating the occurrence of haze [28]. The highest PM2.5 mass concentration was up to 124 μg·m−3 during the episode. The number of ‘sized’ and ‘mass’ and the hourly hit rate of SPAMS were depicted in Fig. 3(d). The hit particles well tracked the PM2.5 mass concentration (Fig. 3(e)), which was in accordance with the results reported by Reche et al [29]. It suggested that the particle number concentration could correlate well with PM2.5 level and could be used as an indicator of pollution levels.
### 3.2. Major Particle Types from Mass Spectral Analysis
Mass spectra of the six groups, with descriptions of each type, are shown in Fig. 4 and Table S1. Relatively intense signals at m/z 39 (K+) and m/z 37 (C3H+), an organic fragment that is a typical biomass burning (BB) marker, were observed in the positive ion spectra in this study, similar to the results reported by Tao et al. [30] The collected BB particles were relatively aged, as indicated by high ion signals at m/z −46 (NO2), −62 (NO3) and −97 (HSO4), which suggested that they reached the sampling site in a deeply aged state. The large fraction of biomass particles might be due to pollutant transport from areas north of the study region, which was consistent with the results of the back-trajectory (Fig. S2) and model analyses (Fig. 2). As shown in Fig. S5, the correlation coefficient between BB and OC was more than 0.7, suggesting a good correlation between BB and OC.
Dust particles showed relatively high nitrate −62 (NO3) and sulfate −97 (HSO4) signals, which suggested that they were aged particles. This was consistent in the result that more than 50% of dust particles contained aged particles shown in the Fig. S5. The positive ion signal at m/z 56 should be CaO+ instead of Fe+ due to high signal intensity at both m/z 40 (Ca+) and m/z 56.
The mass spectra of mobile sources are indicated by Cn+ (n = 1, 2…7) and CnHm+ (n = 1–5, m = 1–3) in the positive mass spectrum and strong secondary ions in the negative mass spectrum, especially the m/z −46 (NO2) and m/z −62 (NO3). This suggested that the aged EC particles were probably in a mixture state with nitrate. Furthermore, the organic nitrogen m/z 26 (CN) and m/z 39 (K+) were also observed.
Strong peaks of inorganic ions were observed at m/z 39 (K+) in the positive spectrum and at m/z −97 (HSO4), −80 (SO3), −64 (SO2) and −32 (S) in the negative spectrum, which might belong to aged particles as suggested by Lang et al. [31] Strong mass peaks of Cn± (n = 1, 2…7) were observed in the positive and negative mass spectrum as the stationary sources. As shown in Fig. S5, more than 80% of the stationary sources contained elemental carbon (EC), suggesting a good relation between stationary sources and EC.
Intense ion signals of m/z 206/207/208 (Pb+), 64/66/68 (Zn+), 63/65 (Cu+), and 24 (Mg+) were observed in industrial particles, with high peaks at extremely aged m/z −46 (NO2), −62 (NO3) and −97 (HSO4) in the negative mass spectra.
The secondary particles were characterized with intense 39 (K+) and other ions with low peaks in the positive spectra and −97 (HSO4), −46 (NO2), −62 (NO3) in the negative spectra. The particles of sulfate and nitrate are commonly considered as secondary type in spite of the particles were emitted from the primary sources, such as sea salt and industry [7].
### 3.3. Chemical Characteristics of Particles
As shown in Fig. 5, PM2.5 mass concentration rapidly increased from 35 to 124 μg·m−3 during T1 (formation period) but decreased quickly during T2 (dissipating period). During T1, obvious increases were observed in the fractions of mobile source (from 24.3% to 30.9%), and secondary aerosol (from 16.0% to 22.5%), and the number concentration also rapidly increased from 1,000 to 4,500 #·h−1. It could be seen from Fig. S3 that the particles with increased number concentration mainly came from north of the monitoring site. Based on these results, we concluded that the contributions to aerosols from both mobile source and secondary aerosol increased during T1. Combined with the backward trajectory analysis (Fig. S2), air quality modelling (Fig. 2) and wind direction and speed information (Fig. 3(b)), the increase in the fraction of mobile source was regarded to be caused by local vehicle emission or regional transport from northern China.
Fig. 5 and Fig. 3(b) showed that PM2.5 number concentration rapidly increased from 1,000 to 4,500 #·h−1 during T1 with relatively strong wind (maximum WS at 10 m·s−1), which was contrary to the general understanding that elevated particle concentration is usually associated with low WS [32]. This positive relationship between pollutant concentration and WS, which suggested the negative effect of pollutant transport on local air quality during the period of T1, had also been confirmed by the modeling result (Fig. 2) and seen from the Fig. S3. Therefore, the process during T1 was inferred to be a long-range transported process rather than an accumulation process.
During T2, the fraction of stationary sources decreased from 29.3% to 23.5% and the fraction of mobile sources decreased from 30.9% to 23.1%. In the meantime, the number concentration also rapidly decreased from 4,500 to 1,000 #·h−1. Therefore, the reduced contributions from both stationary and mobile sources were the main reason for the haze dissipation. Fig. 3(b) and 3(c) showed that sea breezes mostly blew from northeast with clean air, which could dilute the pollutants by carrying them to south. This indicated that meteorological factors were the main contributors to haze dissipation.
As shown in Fig. 6, the number concentrations of SO42− and NO3, and PM2.5 concentration decreased quickly since January 20. These might be due to the dilution effect of the relatively clean air originating from the sea (Fig. S4). However, during T3 period, when a slight change in wind direction was recorded, an obvious increasing trend was observed for SO2 (with a maximum value of 12 μg·m−3). Unlike other pollutants, SO2 was mainly emitted from stationary sources. The short distance between the sampling site and a local industrial park (Fig. S1) with small coal-fired power plants, which were likely strong sources for SO2, might account for the abnormal rise of SO2 during this period. More effort is still needed to confirm this in the future.
In summary, PM2.5 number concentration rapidly increased from 1,000 to 4,500 #·h−1 during T1 (formation period) with the increasing secondary aerosol, which might result from OH· photo-oxidation of particles and their homogeneous or heterogeneous reactions during long-distance transport. During T2, the number concentration rapidly decreased from 4,500 to 1,000 #·h−1 due to the dilution effect of clean sea breezes blowing mainly from northeast to south.
NO2 is often used as the marker for mobile/traffic emissions [25]. The mobile sources were plotted against the NO2 concentrations in Fig. S6, which showed a relatively good linear correlation with R of 0.73 for the whole study period. As the local emissions and long-range transport may have significantly different characteristics, these correlations may vary accordingly. Based on previous discussion, the air quality at the study site was regarded to be strongly influenced by the long-range transported air pollutants during the periods of T1 and T2, while it was mainly influenced by local emissions during other time periods. Therefore, the correlations were greatly improved, with R values increasing to 0.95 and 0.85 for periods when the sampling site was affected by the long-range transport and local emissions, respectively. These suggested that the source apportionment for the mobile source emissions was generally reliable.
### 3.4. Size and Number Distributions of the Particulate Matter
Fig. 7(a) and 7(b) illustrated that the number concentrations of mobile source, stationary source and secondary aerosol increased quickly during haze formation. During the dissipation period, the number concentrations of mobile source, stationary source, and secondary aerosol gradually decreased. During this haze episode, the total particle numerical proportion of the stationary source, mobile source and secondary aerosol was almost 80%, indicating that they were the main sources of pollution. This was in accordance with the study by Zhou et al [33]. As shown in Fig. 7(c), the particle of the pollution process was mainly distributed from 0.5 to 1.5 μm. Fig. 8 presented the size distribution of particle number concentration for different components during the study period. The six components, which accounted for more than 80% of the total particle number, were mainly distributed from 0.5 to 1.5 μm. The previous research [34] reported that the particles of the nuclear mode (0.005–0.1 μm) could be condensed and converted into those of the accumulative mode (0.1–2.0 μm) during long-range transport. The smaller the particles are, the slower they will deposit due to their longer retention time and transport distance in the atmosphere. Different compositions in different particle size have different proportions, which are closely related to the occurrence, disappearance, migration and transformation of atmospheric particulates. Also the pollution process during the study period had been confirmed to be the long-range transported type based on previous modeling analyses. It was shown that the proportions of secondary aerosols and mobile sources decreased with the increase of the particle size in the range of 0.6 to 1.8 μm (Fig. 7(d)). Based on the analyses of the size-fractioned aerosol samples collected by eight-stage Anderson samplers for four seasons, Zhang et al discovered that the size distributions of SO42− and NH4+ fraction were almost unimodal when the T was high during all seasons, in contrast, those of Mg2+ and Ca2+ appeared bimodal in the lower T [35]. The proportion of stationary emission sources gradually increased in the range of 0.6 to 1.8 μm, which was consistent with the previous research [36]. As shown in Fig. 7(d), the proportion of the three pollution sources was about 80% in this range, indicating that the three pollution sources were the main contributors to the haze formation. The dust accounted for high proportion of particles in the size of less than 0.5 μm, while the secondary and stationary aerosols accounted for relatively low proportions. Besides, the proportion of stationary sources was higher than that of dust in the size of 1.5–2.0 μm. These suggested that the fine particles might have been oxidized and changed in size during the long-range transport. This result could provide a basis for making air quality control policies and help to improve the air quality in a specific region.
### 4. Conclusions
This study investigated the single particle chemical composition and particle size of ambient aerosol using the SPAMS in Fuzhou, China from January 17th to 22nd, 2016. The haze formation was mainly caused by long-range transported air pollutants from the Yangtze River Delta region to Fujian Province along the coastline. During the formation period of the haze, the fraction of mobile source (secondary aerosol) increased from 24.3% to 30.9% (from 16.0% to 22.5%), and the number concentration increased from 1,000 to 4,500 #·h−1. On the other hand, the haze was eliminated by the clean air mass from the sea, which led to a decrease of particle number concentration from 4,500 to 1,000 #·h−1. The fraction of secondary aerosol decreased from 29.3% to 23.5% and the fraction of mobile sources decreased from 30.9% to 23.1%. The six major components (K-rich, EC, OC, sulfate, ammonium, and nitrate) of the particles accounted for more than 80% of the total particle number. The particle size was mainly in the range of 0.5–1.5 μm, corresponding to the accumulation mode. Average positive and negative mass spectra of the six potential sources (dust, BB, mobile sources, stationary sources, industry, and secondary aerosol) were identified to gain a deeper insight into the chemical composition and mixing state of each pollution source. The major potential sources were stationary source (23.51–30.39%), mobile source (23.06–30.91%) and secondary aerosol (16–23.49%), with the total proportion exceeding 70%. More than 80% of stationary sources, mobile sources, industry, and secondary aerosol contained K-rich, nitrate, and sulfate.
### Acknowledgments
Thanks to the National Key Research and Development Program (NO.2016YFE0112200) and the Chinese Academy of Sciences Interdisciplinary Innovation Team for funding the research.
### References
1. Sun K, Chen XL. Spatio-temporal distribution of localized aerosol loading in China: A satellite view. Atmos Environ. 2017;163:35–43.
2. Canagaratna MR, Jayne JT, Jimenez JL, et al. Chemical and microphysical characterization of ambient aerosols with the aerodyne aerosol mass spectrometer. Mass Spectrom Rev. 2007;26:185–222.
3. Kollanus V, Prank M, Gens A, et al. Mortality due to vegetation fire-originated PM2.5 exposure in Europe-assessment for the years 2005 and 2008. Environ Health Perspect. 2017;125:30–37.
4. Park RJ, Jacob DJ, Field BD, Yantosca RM, Chin M. Natural and transboundary pollution influences on sulfate-nitrate-ammonium aerosols in the United States: Implications for policy. J Geophys Res Atmos. 2003;109:15204
5. Tie X, Huang , Ru J, Cao J, et al. Severe pollution in China amplified by atmospheric moisture. Sci Rep. 2017;7:15760
6. Liu T, Zhuang G, Huang K, et al. A typical formation mechanism of heavy haze-fog induced by coal combustion in an inland city in north-western China. Aerosol Air Qual Res. 2017;17:98–107.
7. Bi X, Zhang G, Li L, et al. Mixing state of biomass burning particles by single particle aerosol mass spectrometer in the urban area of PRD, China. Atmos Environ. 2011;45:3447–3453.
8. Zhai J, Lu X, Li L, et al. Size-resolved chemical composition, effective density, and optical properties of biomass burning particles. Atmos Chem Phys. 2017;17:7481–7493.
9. Zhang H, Cheng C, Tao M, et al. Analysis of single particle aerosols in the North China Plain during haze periods. Res Environ Sci. 2017;30:1–9.
10. Novakov T, Penner JE. Large contribution of organic aerosols to cloud-condensation-nuclei concentrations. Nature. 1993;365:823–826.
11. Ng NL, Herndon SC, Trimborn A, et al. An aerosol chemical speciation monitor (ACSM) for routine monitoring of the composition and mass concentrations of ambient aerosol. Aerosol Sci Technol. 2011;45:780–794.
##### Fig. 1
Air mass (based on the 72 h backward trajectories) calculated by the HYSPLIT4 model and TrajStat in Fuzhou (119.29 ºE, 26.11 ºN) at the height of 500 m AGL.
##### Fig. 2
NAQPMS simulation results showing the daily average PM2.5 concentrations and surface wind fields over East China on 18 January 2016.
##### Fig. 3
Time series of (a) T and RH; (b) WS and WD; (c) atmospheric Vis and PM2.5 mass concentration; (d) number of sized particles, hit count, and the hourly average hit rate of SPAMS; (e) mass concentration and the hourly average hit count from 16th to 22nd January, 2016.
##### Fig. 4
Average positive and negative mass spectra of the six groups: dust, BB, mobile sources, stationary sources, industry and secondary aerosol.
##### Fig. 5
Hourly resolved numbers of fine particles by SPAMS, and PM2.5 mass concentration fraction of chemical compositions during different periods in Fuzhou.
##### Fig. 6
Temporal variations of mass concentration of SO2, NO2 and PM2.5, and number concentration of nitrate and sulfate.
##### Fig. 7
Particle number and fraction of different sources with a, b) 1-h time resolution, and c, d) 0.25 μm resolution in particle size during the experimental period.
##### Fig. 8
Size distribution of particle number concentration for different components during the study period.
|
# hmm
How many cubic (i.e., third-degree) polynomials f(x) are there such that f(x) has nonnegative integer coefficients and f(1)=9?
Aug 18, 2022
edited by Doremy Aug 18, 2022
#1
Here's my attempt:
Let the function be in the form \(ax^3 + bx^2 + cx + d = y\). Also note that \(f(1) = 9\) means that \((1,9)\) is a point on this graph. Subbing this in gives us \(a + b + c + d = 9\)
With stars and bars, we use 3 bars, so it is \({9 + 3 \choose 3} = 220\). But, this accounts for \(a = 0\), which we don't want; it has to be a cubic polynomial.
If we set \(a = 0\), we distribute 9 stars among only 3 boxes (so 2 bars), which gives \({9 + 2 \choose 2} = 55\) ways with \(a = 0\).
So, there are \(220 - 55 = \color{brown}\boxed{165}\) polynomials.
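For anyone who wants to sanity-check the count, here is a small brute-force enumeration (a GNU Octave sketch added for illustration; it is not part of the original answer):
% count cubics a*x^3 + b*x^2 + c*x + d with nonnegative integer
% coefficients, a >= 1, and a + b + c + d = 9
count = 0;
for a = 1:9
  for b = 0:9-a
    for c = 0:9-a-b
      % d = 9 - a - b - c is then determined and nonnegative
      count = count + 1;
    endfor
  endfor
endfor
disp(count)   % prints 165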
Aug 18, 2022
#2
Thank you. That was a very clear explanation.
Doremy Aug 18, 2022
#3
You're welcome!!
BuilderBoi Aug 18, 2022
|
Wilcoxon signed-rank test is a non-parametric rank test to compare two paired samples, whether values in one are bigger than in the other. Can also be used to compare one sample to a fixed value. [The test NOT to be confused with the sign-test].
|
# zbMATH — the first resource for mathematics
Asymptotic behaviour of critical controlled branching processes with random control functions. (English) Zbl 1079.60073
Let $$Z_{n+1} = \sum_{j=1}^{\varphi_n (Z_n)} X_{n, j}$$ be a controlled branching process with i.i.d. random control functions $$\varphi_n$$. In the critical case, $$E(Z_{n+1}/Z_n\mid Z_n = k) \rightarrow_k 1$$, the authors establish results on extinction, nonextinction and on the limiting distribution under suitable normalization. The main tools are methods for growth processes $$Z_{n+1} = Z_n + g (Z_n) \xi_n$$ as in [G. Keller, G. Kersting and the reviewer, Ann. Prob. 15, 305–343 (1987; Zbl 0616.60079)].
Reviewer: Uwe Rösler (Kiel)
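To make the recursion concrete, here is a small GNU Octave sketch (added for illustration only; the offspring law and the control function below are arbitrary example choices, not the ones studied in the paper) that simulates one trajectory of a near-critical controlled branching process:
% one sample path of Z(n+1) = sum_{j=1}^{phi_n(Z_n)} X_{n,j}
% offspring: X = 0 or 2 with probability 1/2 each (mean 1, critical)
% control:   phi_n(k) = k with probability 0.9, k-1 otherwise (example choice)
Z = 10;                              % initial population size
for n = 1:100
  phi = Z - (rand() < 0.1)*(Z > 0);  % random control function applied to Z_n
  if phi <= 0
    Z = 0;                           % extinction
    break
  endif
  Z = sum( 2*(rand(1, phi) < 0.5) ); % sum of phi i.i.d. offspring counts
endfor
disp(Z)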
##### MSC:
60J80 Branching processes (Galton-Watson, birth-and-death, etc.)
##### Keywords:
extinction; weak limit; growth process
Full Text:
##### References:
[1] Bruss, F. T. (1980). A counterpart of the Borel–Cantelli lemma. J. Appl. Prob. 17, 1094–1101. Zbl 0443.60002
[2] Chow, Y. S. and Teicher, H. (1997). Probability Theory. Independence, Interchangeability, Martingales, 3rd edn. Springer, New York. Zbl 0891.60002
[3] Dion, J.-P. and Essebbar, B. (1995). On the statistics of controlled branching processes. In Branching Processes (Lecture Notes Statist. 99), ed. C. C. Heyde, Springer, New York, pp. 14–21. Zbl 0821.62046
[4] González, M., Molina, M. and del Puerto, I. (2002). On the controlled Galton–Watson process with random control function. J. Appl. Prob. 39, 804–815. Zbl 1032.60077
[5] González, M., Molina, M. and del Puerto, I. (2003). On the geometric growth in controlled branching processes with random control function. J. Appl. Prob. 40, 995–1006. Zbl 1054.60087
[6] González, M., Molina, M. and del Puerto, I. (2004). Limiting distribution for subcritical controlled branching processes with random control function. Statist. Prob. Lett. 67, 277–284. Zbl 1063.60121
[7] González, M., Molina, M. and del Puerto, I. (2005). On $$L^2$$-convergence of controlled branching processes with random control function. Bernoulli 11, 37–46. Zbl 1062.60088
[8] Höpfner, R. (1985). On some classes of population-size-dependent Galton–Watson processes. J. Appl. Prob. 22, 25–36. Zbl 0573.60079
[9] Höpfner, R. (1986). Some results on population-size-dependent Galton–Watson processes. J. Appl. Prob. 23, 297–306. Zbl 0598.60089
[10] Jagers, P. (1975). Branching Processes with Biological Applications. John Wiley, London. Zbl 0356.60039
[11] Keller, G., Kersting, G. and Rösler, U. (1987). On the asymptotic behaviour of discrete time stochastic growth processes. Ann. Prob. 15, 305–343. Zbl 0616.60079
[12] Kersting, G. (1986). On recurrence and transience of growth models. J. Appl. Prob. 23, 614–625. Zbl 0611.60084
[13] Kersting, G. (1992). Asymptotic $$\Gamma$$-distribution for stochastic difference equations. Stoch. Process. Appl. 40, 15–28. Zbl 0747.60024
[14] Klebaner, F. (1989). Stochastic difference equations and generalized gamma distributions. Ann. Prob. 17, 178–188. Zbl 0674.60077
[15] Nakagawa, T. (1994). The $$L^\alpha$$ ($$1<\alpha\leq2$$) convergence of a controlled branching process in a random environment. Bull. Gen. Ed. Dokkyo Univ. School Medicine 17, 17–24.
[16] Yanev, G. P. and Yanev, N. M. (1995). Critical branching process with random migration. In Branching Processes (Lecture Notes Statist. 99), ed. C. C. Heyde, Springer, New York, pp. 36–46. Zbl 0833.60086
[17] Yanev, G. P. and Yanev, N. M. (2004). A critical branching process with stationary-limiting distribution. Stoch. Anal. Appl. 22, 721–738. Zbl 1085.60062
[18] Yanev, G. P., Mitov, K. V. and Yanev, N. M. (2003). Critical branching regenerative process with random migration. J. Appl. Statist. Sci. 12, 41–54. Zbl 1052.60071
[19] Yanev, N. M. (1976). Conditions for degeneracy of $$\phi$$-branching processes with random $$\phi$$. Theory Prob. Appl. 20, 421–428. Zbl 0363.60072
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
# 4.2.16. EXTF
This module calculates the contribution of an external force acting on the system. It applies the modification directly to the gradient, and it must be called after the execution of ALASKA in an optimization or molecular dynamics calculation. At present, only the LINEAR keyword is available, which applies a constant linear force between two atoms [76].
## 4.2.16.1. Input
### 4.2.16.1.1. General keywords
LINEAR
This keyword works by specifying 4 parameters, each one in its own line after the keyword itself. First parameter (Integer) is the first atom number following the numeration of the geometry. Second parameter (Integer) is the second atom number. Third parameter is the force (Float) in nanonewton applied along the vector between the two atoms. Fourth parameter is 0 or 1 (Bool), where 0 indicates a repulsive force, and 1 is for an attractive force.
### 4.2.16.1.2. Input examples
The following input example is a semiclassical molecular dynamics with tully surface hop, where a linear force of 2.9 nN is applied between atom 1 and atom 2.
&Gateway
coord=$Project.xyz
basis=6-31G*
group=nosym

>> EXPORT MOLCAS_MAXITER=400
>> DOWHILE

&Seward

&rasscf
nactel = 6 0 0
inactive = 23
ras2 = 6
ciroot = 2 2 1
prwf = 0.0
mdrlxroot = 2

&alaska

&surfacehop
tully
decoherence = 0.1
psub

&Extf
LINEAR
1
2
2.9
0

&Dynamix
velver
dt = 41.3
velo = 1
thermo = 0

>>> End Do

This example shows an excited state CASSCF MD simulation of a methaniminium cation using the Tully Surface Hop algorithm. In the simulation, the carbon and the nitrogen are pulled apart with a constant force of 1.5 nN (nanonewton). Within the EXTF module the keyword LINEAR is used.

Note: EXTF needs to be called after the execution of ALASKA, inside the loop. The options are: 1: the atom number corresponding to the C atom, 2: the atom number corresponding to the N atom, 1.5: the force intensity, 0: to indicate a repulsive force.

&GATEWAY
COORD
6
Angstrom
C 0.00031448 0.00000000 0.04334060
N 0.00062994 0.00000000 1.32317716
H 0.92882820 0.00000000 -0.49115611
H -0.92846597 0.00000000 -0.49069213
H -0.85725321 0.00000000 1.86103989
H 0.85877656 0.00000000 1.86062860
BASIS= 3-21G
GROUP= nosym

>> EXPORT MOLCAS_MAXITER=1000
>> DOWHILE

&SEWARD

>> IF ( ITER = 1 )
&RASSCF
LUMORB
FileOrb= $Project.GssOrb
Symmetry= 1
Spin= 1
nActEl= 2 0 0
Inactive= 7
RAS2= 2
CIroot= 3 3 1
>> COPY $Project.JobIph$Project.JobOld
>> ENDIF
&RASSCF
JOBIPH; CIRESTART
Symmetry= 1
Spin= 1
nActEl= 2 0 0
Inactive= 7
RAS2= 2
CIroot= 3 3 1
MDRLXR= 2
>> COPY $Project.JobIph$Project.JobOld
&surfacehop
TULLY
SUBSTEP = 200
DECOHERENCE = 0.1
PSUB
&extf
LINEAR
1
2
1.5
0
&Dynamix
VELVer
DT= 10.0
VELO= 3
THER= 2
TEMP=300
>> END DO
|
User neil dickson - MathOverflow, most recent 30 items (feed retrieved 2013-05-24 from http://mathoverflow.net/feeds/user/8864)

Question: Can Convergence Radii of Padé Approximants Always Be Made Infinite? (2010-08-30, http://mathoverflow.net/questions/37127/can-convergence-radii-of-pade-approximants-always-be-made-infinite)

I've found (as have others), that for some analytic functions, a Padé approximant of it has an infinite convergence radius, whereas its associated Taylor series has a finite convergence radius. $f(x)=\sqrt{1+x^2}$ appears to be one such function. My questions are:

1) Is there any function where the Taylor series has the largest convergence radius of all associated Padé approximants? If so, is the Taylor series radius strictly larger, or only equal to the convergence radius of other Padé approximants (i.e. excluding the Taylor series itself)?

2) If not, is there any function that is analytic everywhere, and yet for which there is no (limit of) Padé approximant(s) that has an infinite convergence radius?

It would be both very cool and very useful if there is always a (limit of) Padé approximant(s) that has an infinite convergence radius for any function that is analytic everywhere, though I haven't the slightest how one checks/analyzes convergence of Padé approximants if the degrees of numerator and denominator both approach infinity. :)

One extra question, if there is always such a Padé approximant:

3) Is there always a numerically stable method of computing this approximant up to a finite order?

Answer by Neil Dickson to "Simple/efficient representation of Stirling numbers of the first kind" (2010-08-30, http://mathoverflow.net/questions/34151/simple-efficient-representation-of-stirling-numbers-of-the-first-kind/37121#37121)

If Stirling numbers of the first kind are the numbers associated with the Stirling series, and if there is a "sufficiently simple-to-compute" representation of them, you can factor integers in time polynomial in the number of their bits, using a simple property presented in a blog post by Richard Lipton (http://rjlipton.wordpress.com/2009/02/23/factoring-and-factorials/) and a particular rational/exponential approximation to $n!$ that's based on the Stirling series. I spent some time looking for such a representation once, without any luck, though.

It's believed by many that there is no such algorithm to factor integers (although Richard has written several posts suggesting that it's still rather uncertain), so if they're right, there is no "sufficiently simple-to-compute" representation of the Stirling numbers of the first kind.

Answer by Neil Dickson to "Asymptotic growth of a certain integer sequence" (2010-08-29, http://mathoverflow.net/questions/36995/asymptotic-growth-of-a-certain-integer-sequence/37042#37042)

While I have no idea how to put an upper bound on it, I seem to have at least found a loose linear lower bound. I started by noticing that it takes a sufficiently large $k$ for $k^n$ to be smaller than the sum of the smaller numbers in the set. If the entire rest of the set can't sum to equal that one element, there clearly can't be an equal partition.

Since you've shown that there is always a solution, for a given $n$, there is some smallest integer $k^\star$ such that:

$\sum_{i=1}^{k^\star-1} i^n \ge k^{\star n}$

Since $k^\star$ is the smallest such integer,

$\sum_{i=1}^{k^\star-2} i^n < (k^\star-1)^n$

and therefore:

$2 (k^\star-1)^{n} > \sum_{i=1}^{k^\star-1} i^n \ge k^{\star n}$

For $n>0$, this gives:

$2^{1/n} > \frac{k^\star}{k^\star-1}$

$2^{-1/n} < 1-\frac{1}{k^\star}$

$k^\star > \frac{1}{1-2^{-1/n}} = \sum_{i=0}^{\infty}2^{-i/n} > \frac{n}{2} + 1$

(My apologies if the latex doesn't parse correctly, the preview seems to show it only some of the time.)

Comment by Neil Dickson on "Where to publish a paper on the Mafia game" (2010-09-09): While I completely agree that it is uncommon for undergraduates to publish any papers, and nobody here has explicitly discouraged publication of this work, I would strongly encourage wider publishing of undergraduate work. As with this case, undergraduates are often quite interdisciplinary in their research, which has been greatly lacking in some fields. For example, in quantum computing, many publications disregard basic concepts of physics, whereas many other publications disregard basic concepts of computer science. A fresh perspective could help bring fields together.

Comment by Neil Dickson on "Can Convergence Radii of Padé Approximants Always Be Made Infinite?" (2010-09-09): Touché about the example I gave. It is quite odd and curious that I've been working lately with a ton of such functions whose Padé approximants appear to have infinite convergence along the real axis, but clearly do not converge infinitely along the imaginary axis. I suppose I shouldn't be too surprised, since they come from recursively applying a real, non-negative perturbation to rational functions. However, I was expecting that, like with the Taylor series of them, the rational perturbation would only be valid up to a finite value of the parameter. Anyway, I'm just rambling now.

Comment by Neil Dickson on an answer to the same question (2010-09-09): Thanks for the insights! It seems reasonable that if M is fixed, there would be some function for which the approximant sequence doesn't fully converge. It's very interesting that it also happens for M=L on some function.

Comment by Neil Dickson on "Simple/efficient representation of Stirling numbers of the first kind" (2010-08-31): Touché. :) They are related, of course (http://en.wikipedia.org/wiki/Bernoulli_number#Connection_with_Stirling_numbers_of_the_first_kind), but probably not enough to make my above assertion.

Comment by Neil Dickson on "Correct general form for kth derivative of g(x)=f(xg(x))" (2010-08-29): Yep, that's correct! The Lagrange–Bürmann formula specifically, but as you noted above, someone else already pointed that out on sci.math. Thanks a bunch anyway! :) I'll have to make my future contests harder, haha.

Comment by Neil Dickson on the same question (2010-08-29): While that is a general formula for any composition of two functions, I'd already worked that much out. What I'd like is proof whether or not the formula I listed above is equivalent to that for the case of the less general composition $f(xg(x))$. That's the part I spent a few days on before giving up. :)
|
Fundamental Theorem of Asset Pricing (Linear Algebra)
I saw this question in a textbook that I was recently reading and don't really know how to approach this problem.
Let $H$ be a finite dimensional vector space with inner product ($\cdotp$, $\cdotp$). Suppose $C\subset H$ is a closed convex cone such that $C\cap(-C) = \{0\}.$ Then there exists a nonzero vector p such that ($p \cdot x$) $>0$ for all nonzero $x \in C$. The book suggests to use the proof of the fundamental theorem of asset pricing and make a substitution. Could anyone let me know how to answer this problem?
(Edit: This is the theorem in the textbook, with the definition of present-day value.)
|
## Surface Area of 3D Surface z=f(x,y) By Integration Example
The surface area of a surface $z = f(x,y)$ lying above a region $R$ of the $xy$-plane is
$A= \int_R \sqrt{(\frac{\partial z}{\partial x})^2 +(\frac{\partial z}{\partial y})^2 +1} \, dA$
A cone has equation $z=\sqrt{x^2 +y^2}$. For this cone $\frac{\partial z}{\partial x} = \frac{x}{\sqrt{x^2+y^2}}$ and $\frac{\partial z}{\partial y} = \frac{y}{\sqrt{x^2+y^2}}$, so the integrand simplifies to $\sqrt{2}$. The surface area of that part of the cone above the rectangle $0 \leq x \leq 4, 1 \leq y \leq 6$ is
\begin{aligned} A &= \int^4_0 \int^6_1 \sqrt{(\frac{\partial z}{\partial x})^2 +(\frac{\partial z}{\partial y})^2 +1} \, dy \, dx \\ &= \int^4_0 \int^6_1 \sqrt{ (\frac{x}{\sqrt{x^2+y^2}})^2 +(\frac{y}{\sqrt{x^2+y^2}})^2 +1} \, dy \, dx \\ &= \int^4_0 \int^6_1 \sqrt{2} \, dy \, dx \\ &= \int^4_0 [y \sqrt{2}]^6_1 \, dx \\ &= \int^4_0 5 \sqrt{2} \, dx \\ &= [5x \sqrt{2}]^4_0 \\ &= 20 \sqrt{2} \end{aligned}
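As a quick numerical cross-check (a GNU Octave sketch added for illustration, not part of the worked example), the double integral can be evaluated with dblquad; the result should agree with $20\sqrt{2} \approx 28.284$:
% integrand sqrt(z_x^2 + z_y^2 + 1) for z = sqrt(x^2 + y^2)
f = @(x, y) sqrt( x.^2./(x.^2 + y.^2) + y.^2./(x.^2 + y.^2) + 1 );
A = dblquad(f, 0, 4, 1, 6)   % numerical value of the surface integral
% A is approximately 28.284, matching 20*sqrt(2) = 28.2843...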
|
# queueing
Copyright © 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2016, 2018, 2020 Moreno Marzolla.
Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions.
This manual documents how to install and run the Queueing package. It corresponds to version 1.2.7 of the package.
## 1 Summary
### 1.1 About the Queueing Package
This document describes the queueing package for GNU Octave (queueing in short). The queueing package, previously known as qnetworks toolbox, is a collection of functions for analyzing queueing networks and Markov chains written for GNU Octave. Specifically, queueing contains functions for analyzing Jackson networks, open, closed or mixed product-form BCMP networks, and computing performance bounds. The following algorithms are available
• Convolution for closed, single-class product-form networks with load-dependent service centers;
• Exact and approximate Mean Value Analysis (MVA) for single and multiple class product-form closed networks;
• MVA for mixed, multiple class product-form networks with load-independent service centers;
• Approximate MVA for closed, single-class networks with blocking (MVABLO algorithm by F. Akyildiz);
• Asymptotic Bounds, Balanced System Bounds and Geometric Bounds;
queueing provides functions for analyzing the following types of single-station queueing systems:
• M/M/1
• M/M/m
• M/M/\infty
• M/M/1/k single-server, finite capacity system
• M/M/m/k multiple-server, finite capacity system
• Asymmetric M/M/m
• M/G/1 (general service time distribution)
• M/H_m/1 (Hyperexponential service time distribution)
Functions for Markov chain analysis are also provided (discrete- and continuous-time chains are supported):
• Birth-death processes;
• Transient and stationary state occupancy probabilities;
• Mean time to absorption;
• Expected sojourn times and time-averaged sojourn times;
• Mean first passage times;
The queueing package is distributed under the terms of the GNU General Public License (GPL), version 3 or later (see Copying). You are encouraged to share this software with others, and improve this package by contributing additional functions and reporting bugs. See Contributing Guidelines.
If you use the queueing package in a technical paper, please cite it as:
Moreno Marzolla, The qnetworks Toolbox: A Software Package for Queueing Networks Analysis. Khalid Al-Begain, Dieter Fiems and William J. Knottenbelt, Editors, Proceedings 17th International Conference on Analytical and Stochastic Modeling Techniques and Applications (ASMTA 2010) Cardiff, UK, June 14–16, 2010, volume 6148 of Lecture Notes in Computer Science, Springer, pp. 102–116, ISBN 978-3-642-13567-5
If you use BibTeX, this is the citation block:
@inproceedings{queueing,
author = {Moreno Marzolla},
title = {The qnetworks Toolbox: A Software Package for Queueing
Networks Analysis},
booktitle = {Analytical and Stochastic Modeling Techniques and
Applications, 17th International Conference,
ASMTA 2010, Cardiff, UK, June 14-16, 2010. Proceedings},
editor = {Khalid Al-Begain and Dieter Fiems and William J. Knottenbelt},
year = {2010},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {6148},
pages = {102--116},
ee = {http://dx.doi.org/10.1007/978-3-642-13568-2_8},
isbn = {978-3-642-13567-5}
}
An early draft of the paper above is available as Technical Report UBLCS-2010-04, February 2010, Department of Computer Science, University of Bologna, Italy.
### 1.2 Contributing Guidelines
Contributions and bug reports are always welcome. If you want to contribute to the queueing package, here are some guidelines:
• If you are contributing a new function, please embed proper documentation within the function itself. The documentation must be in texinfo format, so that it can be extracted and included into the printable manual. See the existing functions for the documentation style.
• Make sure that each new function validates its input parameters. For example, a function accepting vectors should check whether the dimensions match.
• Provide bibliographic references for each new algorithm you contribute. Document any significant difference from the reference. Update the doc/references.txi file if appropriate.
• Include test and demo blocks. Test blocks are particularly important, since most algorithms are tricky to implement correctly. If appropriate, test blocks should also verify that the function fails on incorrect inputs.
Send your contribution to Moreno Marzolla ([email protected]). If you are a user of this package and find it useful, let me know by dropping me a line. Thanks.
### 1.3 Acknowledgments
The following people (listed alphabetically) contributed to the queueing package, either by providing feedback, reporting bugs or contributing code: Philip Carinhas, Phil Colbourn, Diego Didona, Yves Durand, Marco Guazzone, Dmitry Kolesnikov, Michele Mazzucco, Marco Paolieri.
## 2 Installation and Getting Started
### 2.1 Installation through Octave package management system
The most recent version of queueing is 1.2.7 and can be downloaded from Octave-Forge
Additional information can be found at
To install queueing, follow these steps:
1. If you have a recent version of GNU Octave and a network connection, you can install queueing from Octave command prompt using this command:
octave:1> pkg install -forge queueing
The command above will download and install the latest version of the queueing package from Octave Forge, and install it on your machine.
If you do not have root access, you can perform a local install with:
octave:1> pkg install -local -forge queueing
This will install queueing in your home directory, and the package will be available to the current user only.
2. Alternatively, you can first download the queueing tarball from Octave-Forge; to install the package in the system-wide location issue this command at the Octave prompt:
octave:1> pkg install queueing-1.2.7.tar.gz
(you may need to start Octave as root in order to allow the installation to copy the files to the target locations). After this, all functions will be available each time Octave starts, without the need to tweak the search path.
If you do not have root access, you can do a local install using:
octave:1> pkg install -local queueing-1.2.7.tar.gz
3. Use the pkg list command at the Octave prompt to check that the queueing package has been successfully installed; you should see something like:
octave:1>pkg list queueing
Package Name | Version | Installation directory
--------------+---------+-----------------------
queueing | 1.2.7 | /home/moreno/octave/queueing-1.2.7
4. Starting from version 1.1.1, queueing is no longer automatically loaded on Octave start. To make the functions available for use, you need to issue the command
octave:1>pkg load queueing
at the Octave prompt. To automatically load queueing each time Octave starts, you can add the command above to the startup script (usually, ~/.octaverc on Unix systems).
5. To completely remove queueing from your system, use the pkg uninstall command:
octave:1> pkg uninstall queueing
### 2.2 Manual installation
If you want to manually install queueing in a custom location, you can download the tarball and unpack it somewhere:
tar xvfz queueing-1.2.7.tar.gz
cd queueing-1.2.7/queueing/
Copy all .m files from the inst/ directory to some target location. Then, start Octave with the -p option to add the target location to the search path, so that Octave will find all queueing functions automatically:
octave -p /path/to/queueing
For example, if all queueing m-files are in /usr/local/queueing, you can start Octave as follows:
octave -p /usr/local/queueing
If you want, you can add the following line to ~/.octaverc:
addpath("/path/to/queueing");
so that the path /path/to/queueing is automatically added to the search path each time Octave is started, and you no longer need to specify the -p option on the command line.
### 2.3 Development sources
The source code of the queueing package can be found in the Mercurial repository at the URL:
The source distribution contains additional development files that are not present in the installation tarball. This section briefly describes the content of the source tree. This is only relevant for developers who want to modify the code or the documentation.
The source distribution contains the following directories:
doc/
Documentation sources. Most of the documentation is extracted from the comment blocks of function files from the inst/ directory.
inst/
This directory contains the m-files which implement the various algorithms provided by queueing. As a notational convention, the names of functions for Queueing Networks begin with the ‘qn’ prefix; the name of functions for Continuous-Time Markov Chains (CTMCs) begin with the ‘ctmc’ prefix, and the names of functions for Discrete-Time Markov Chains (DTMCs) begin with the ‘dtmc’ prefix.
test/
This directory contains the test scripts used to run all function tests.
devel/
This directory contains functions that are either not working properly, or need additional testing before they are moved to the inst/ directory.
The queueing package ships with a Makefile which can be used to produce the documentation (in PDF and HTML format), and automatically execute all function tests. The following targets are defined:
all
Running ‘make’ (or ‘make all’) on the top-level directory builds the programs used to extract the documentation from the comments embedded in the m-files, and then produce the documentation in PDF and HTML format (doc/queueing.pdf and doc/queueing.html, respectively).
check
Running ‘make check’ will execute all tests contained in the m-files. If you modify the code of any function in the inst/ directory, you should run the tests to ensure that no errors have been introduced. You are also encouraged to contribute new tests, especially for functions that are not adequately validated.
clean
distclean
dist
The ‘make clean’, ‘make distclean’ and ‘make dist’ commands are used to clean up the source directory and prepare the distribution archive in compressed tar format.
### 2.4 Naming Conventions
Most of the functions in the queueing package obey a common naming convention. Function names are made of several parts; the first part is a prefix which indicates the class of problems the function addresses:
ctmc-
Functions for continuous-time Markov chains
dtmc-
Functions for discrete-time Markov chains
qs-
Functions for analyzing single-station queueing systems (individual service centers)
qn-
Functions for analyzing queueing networks
Functions dealing with Markov chains start with either the ctmc or dtmc prefix; the prefix is optionally followed by an additional string which hints at what the function does:
-bd
Birth-Death process
-mtta
Mean Time to Absorption
-fpt
First Passage Times
-exps
Expected Sojourn Times
-taexps
Time-Averaged Expected Sojourn Times
For example, function ctmcbd returns the infinitesimal generator matrix for a continuous birth-death process, while dtmcbd returns the transition probability matrix for a discrete birth-death process. Note that there exist functions ctmc and dtmc (without any suffix) that compute steady-state and transient state occupancy probabilities for CTMCs and DTMCs, respectively. See Markov Chains.
Functions whose name starts with qs- deal with single station queueing systems. The suffix describes the type of system, e.g., qsmm1 for M/M/1, qsmmm for M/M/m and so on. See Single Station Queueing Systems.
Finally, functions whose name starts with qn- deal with queueing networks. The character that follows indicates whether the function handles open ('o') or closed ('c') networks, and whether there is a single customer class ('s') or multiple classes ('m'). The string mix indicates that the function supports mixed networks with both open and closed customer classes.
-os-
Open, single-class network: open network with a single class of customers
-om-
Open, multiclass network: open network with multiple job classes
-cs-
Closed, single-class network
-cm-
Closed, multiclass network
-mix-
Mixed network with open and closed classes of customers
The last part of the function name indicates the algorithm implemented by the function. See Queueing Networks.
-aba
Asymptotic Bounds Analysis
-bsb
Balanced System Bounds
-gb
Geometric Bounds
-pb
PB Bounds
-cb
Composite Bounds (CB)
-mva
Mean Value Analysis (MVA) algorithm
-cmva
Conditional MVA
-mvald
MVA with general load-dependent servers
-mvaap
Approximate MVA
-mvablo
MVABLO approximation for blocking queueing networks
-conv
Convolution algorithm
-convld
Convolution algorithm with general load-dependent servers
Some deprecated functions may be present in the queueing package; generally, these are functions that have been renamed, and the old name is kept for a while for backward compatibility. Deprecated functions are not documented and will be removed in future releases. Calling a deprecated functions displays a warning message that appears only once per session. The warning message can be turned off with the command:
octave:1> warning ("off", "qn:deprecated-function");
However, you are strongly recommended to update your code to the new API. To help catching usages of deprecated functions, you can transform warnings into errors so that your application stops immediately:
octave:1> warning ("error", "qn:deprecated-function");
### 2.5 Quick start Guide
You can use all functions by simply invoking their name with the appropriate parameters; an error is shown in case of missing/wrong parameters. Extensive documentation is provided for each function, and can be displayed with the help command. For example:
octave:2> help qncsmvablo
shows the documentation for the qncsmvablo function. Additional information can be found in the queueing manual, that is available in PDF format in doc/queueing.pdf and in HTML format in doc/queueing.html.
Many functions have demo blocks showing usage examples. To execute the demos for the qnclosed function, use the demo command:
octave:4> demo qnclosed
We now illustrate a few examples of how the queueing package can be used. More examples are provided in the manual.
Example 1 Compute the stationary state occupancy probabilities of a continuous-time Markov chain with infinitesimal generator matrix
/ -0.8 0.6 0.2 \
Q = | 0.3 -0.7 0.4 |
\ 0.2 0.2 -0.4 /
Q = [ -0.8 0.6 0.2; \
0.3 -0.7 0.4; \
0.2 0.2 -0.4 ];
q = ctmc(Q)
⇒ q = 0.23256 0.32558 0.44186
Example 2 Compute the transient state occupancy probability after n=3 transitions of a three state discrete-time birth-death process, with birth probabilities \lambda_{01} = 0.3 and \lambda_{12} = 0.5 and death probabilities \mu_{10} = 0.5 and \mu_{21} = 0.7, assuming that the system is initially in state zero (i.e., the initial state occupancy probabilities are [1, 0, 0]).
n = 3;
p0 = [1 0 0];
P = dtmcbd( [0.3 0.5], [0.5 0.7] );
p = dtmc(P,n,p0)
⇒ p = 0.55300 0.29700 0.15000
Example 3 Compute server utilization, response time, mean number of requests and throughput of a closed queueing network with N=4 requests and three M/M/1–FCFS queues with mean service times S = [1.0, 0.8, 1.4] and average number of visits V = [1.0, 0.8, 0.8]
S = [1.0 0.8 1.4];
V = [1.0 0.8 0.8];
N = 4;
[U R Q X] = qncsmva(N, S, V)
⇒
U = 0.70064 0.44841 0.78471
R = 2.1030 1.2642 3.2433
Q = 1.47346 0.70862 1.81792
X = 0.70064 0.56051 0.56051
Example 4 Compute server utilization, response time, mean number of requests and throughput of an open queueing network with three M/M/1–FCFS queues with mean service times S = [1.0, 0.8, 1.4] and average number of visits V = [1.0, 0.8, 0.8]. The overall arrival rate is \lambda = 0.8 requests/second.
S = [1.0 0.8 1.4];
V = [1.0 0.8 0.8];
lambda = 0.8;
[U R Q X] = qnos(lambda, S, V)
⇒
U = 0.80000 0.51200 0.89600
R = 5.0000 1.6393 13.4615
Q = 4.0000 1.0492 8.6154
X = 0.80000 0.64000 0.64000
## 3 Markov Chains
### 3.1 Discrete-Time Markov Chains
Let X_0, X_1, …, X_n, … be a sequence of random variables defined over the discrete state space 1, 2, …. The sequence X_0, X_1, …, X_n, … is a stochastic process with discrete time 0, 1, 2, …. A Markov chain is a stochastic process {X_n, n=0, 1, …} which satisfies the following Markov property:
P(X_{n+1} = x_{n+1} | X_n = x_n, X_{n-1} = x_{n-1}, …, X_0 = x_0) = P(X_{n+1} = x_{n+1} | X_n = x_n)
which basically means that the probability that the system is in a particular state at time n+1 only depends on the state the system was at time n.
The evolution of a Markov chain with finite state space {1, …, N} can be fully described by a stochastic matrix {\bf P}(n) = [ P_{i,j}(n) ] where P_{i, j}(n) = P( X_{n+1} = j\ |\ X_n = i ). If the Markov chain is homogeneous (that is, the transition probability matrix {\bf P}(n) is time-independent), we can write {\bf P} = [P_{i, j}], where P_{i, j} = P( X_{n+1} = j\ |\ X_n = i ) for all n=0, 1, ….
The transition probability matrix \bf P must be a stochastic matrix, meaning that it must satisfy the following two properties:
1. P_{i, j} ≥ 0 for all 1 ≤ i, j ≤ N;
2. \sum_{j=1}^N P_{i,j} = 1 for all i
Property 1 requires that all probabilities are nonnegative; property 2 requires that the outgoing transition probabilities from any state i sum to one.
Function File: [r err] = dtmcchkP (P)
Check whether P is a valid transition probability matrix.
If P is valid, r is the size (number of rows or columns) of P. If P is not a transition probability matrix, r is set to zero, and err to an appropriate error string.
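For example (a usage sketch added for illustration; the exact contents of the error string may differ):
P = [0.75 0.25; 0.5 0.5];
[r err] = dtmcchkP(P)
⇒ r = 2 and err is empty, since P is a valid 2x2 stochastic matrix
[r err] = dtmcchkP([0.75 0.3; 0.5 0.5])
⇒ r = 0 and err reports that the rows do not sum to one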
A DTMC is irreducible if every state can be reached with non-zero probability starting from every other state.
Function File: [r s] = dtmcisir (P)
Check if P is irreducible, and identify Strongly Connected Components (SCC) in the transition graph of the DTMC with transition matrix P.
INPUTS
P(i,j)
transition probability from state i to state j. P must be an N \times N stochastic matrix.
OUTPUTS
r
1 if P is irreducible, 0 otherwise (scalar)
s(i)
strongly connected component (SCC) that state i belongs to (vector of length N). SCCs are numbered 1, 2, …. The number of SCCs is max(s). If the graph is strongly connected, then there is a single SCC and the predicate all(s == 1) evaluates to true
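For example, the following chain is not irreducible because state 1 is absorbing (a usage sketch added for illustration):
P = [1 0; 0.5 0.5];
[r s] = dtmcisir(P)
⇒ r = 0 and max(s) = 2, since states 1 and 2 belong to two distinct SCCs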
#### 3.1.1 State occupancy probabilities
Given a discrete-time Markov chain with state space {1, …, N}, we denote with {\bf \pi}(n) = \left[\pi_1(n), … \pi_N(n) \right] the state occupancy probability vector at step n = 0, 1, …. \pi_i(n) is the probability that the system is in state i after n transitions.
Given the transition probability matrix \bf P and the initial state occupancy probability vector {\bf \pi}(0) = \left[\pi_1(0), …, \pi_N(0)\right], {\bf \pi}(n) can be computed as:
\pi(n) = \pi(0) P^n
Under certain conditions, there exists a stationary state occupancy probability {\bf \pi} = \lim_{n \rightarrow +\infty} {\bf \pi}(n), which is independent from {\bf \pi}(0). The vector \bf \pi is the solution of the following linear system:
/
| \pi P = \pi
| \pi 1^T = 1
\
where \bf 1 is the row vector of ones, and ( \cdot )^T the transpose operator.
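The linear system above can also be solved directly with ordinary linear algebra; the following sketch (added for illustration, not taken from the package) computes the same vector that the dtmc function documented below returns:
P = [0.75 0.25; 0.5 0.5];      % any irreducible stochastic matrix
N = rows(P);
A = [P' - eye(N); ones(1,N)];  % pi*P = pi  and  pi*1' = 1, transposed
b = [zeros(N,1); 1];
p = (A \ b)'
This yields p = [2/3, 1/3] (up to display precision), which is the stationary vector of P.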
Function File: p = dtmc (P)
Function File: p = dtmc (P, n, p0)
Compute stationary or transient state occupancy probabilities for a discrete-time Markov chain.
With a single argument, compute the stationary state occupancy probabilities p(1), …, p(N) for a discrete-time Markov chain with finite state space {1, …, N} and with N \times N transition matrix P. With three arguments, compute the transient state occupancy probabilities p(1), …, p(N) that the system is in state i after n steps, given initial occupancy probabilities p0(1), …, p0(N).
INPUTS
P(i,j)
transition probabilities from state i to state j. P must be an N \times N irreducible stochastic matrix, meaning that the sum of each row must be 1 (\sum_{j=1}^N P_{i, j} = 1), and the rank of P must be N.
n
Number of transitions after which state occupancy probabilities are computed (scalar, n ≥ 0)
p0(i)
probability that at step 0 the system is in state i (vector of length N).
OUTPUTS
p(i)
If this function is called with a single argument, p(i) is the steady-state probability that the system is in state i. If this function is called with three arguments, p(i) is the probability that the system is in state i after n transitions, given the probabilities p0(i) that the initial state is i.
EXAMPLE
The following example is from GrSn97. Let us consider a maze with nine rooms, as shown in the following figure
+-----+-----+-----+
| | | |
| 1 2 3 |
| | | |
+- -+- -+- -+
| | | |
| 4 5 6 |
| | | |
+- -+- -+- -+
| | | |
| 7 8 9 |
| | | |
+-----+-----+-----+
A mouse is placed in one of the rooms and can wander around. At each step, the mouse moves from the current room to a neighboring one with equal probability. For example, if it is in room 1, it can move to room 2 and 4 with probability 1/2, respectively; if the mouse is in room 8, it can move to either 7, 5 or 9 with probability 1/3.
The transition probabilities P_{i, j} from room i to room j can be summarized in the following matrix:
/ 0 1/2 0 1/2 0 0 0 0 0 \
| 1/3 0 1/3 0 1/3 0 0 0 0 |
| 0 1/2 0 0 0 1/2 0 0 0 |
| 1/3 0 0 0 1/3 0 1/3 0 0 |
P = | 0 1/4 0 1/4 0 1/4 0 1/4 0 |
| 0 0 1/3 0 1/3 0 0 0 1/3 |
| 0 0 0 1/2 0 0 0 1/2 0 |
| 0 0 0 0 1/3 0 1/3 0 1/3 |
\ 0 0 0 0 0 1/2 0 1/2 0 /
The stationary state occupancy probabilities can then be computed with the following code:
P = zeros(9,9);
P(1,[2 4] ) = 1/2;
P(2,[1 5 3] ) = 1/3;
P(3,[2 6] ) = 1/2;
P(4,[1 5 7] ) = 1/3;
P(5,[2 4 6 8]) = 1/4;
P(6,[3 5 9] ) = 1/3;
P(7,[4 8] ) = 1/2;
P(8,[7 5 9] ) = 1/3;
P(9,[6 8] ) = 1/2;
p = dtmc(P);
disp(p)
⇒ 0.083333 0.125000 0.083333 0.125000
0.166667 0.125000 0.083333 0.125000
0.083333
#### 3.1.2 Birth-death process
Function File: P = dtmcbd (b, d)
Returns the transition probability matrix P for a discrete birth-death process over state space {1, …, N}. For each i=1, …, (N-1), b(i) is the transition probability from state i to (i+1), and d(i) is the transition probability from state (i+1) to i.
Matrix \bf P is defined as:
/ \
| 1-b(1) b(1) |
| d(1) (1-d(1)-b(2)) b(2) |
| d(2) (1-d(2)-b(3)) b(3) |
| |
| ... ... ... |
| |
| d(N-2) (1-d(N-2)-b(N-1)) b(N-1) |
| d(N-1) 1-d(N-1) |
\ /
where b(i) and d(i) are the birth and death probabilities, respectively.
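For example, the transition matrix of the three-state chain used in Example 2 of the Quick Start guide is (a usage sketch; display formatting may differ slightly):
P = dtmcbd( [0.3 0.5], [0.5 0.7] )
⇒ P =
   0.70000   0.30000   0.00000
   0.50000   0.00000   0.50000
   0.00000   0.70000   0.30000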
#### 3.1.3 Expected Number of Visits
Given an N-state discrete-time Markov chain with transition matrix \bf P and an integer n ≥ 0, we let L_i(n) be the expected number of visits to state i during the first n transitions. The vector {\bf L}(n) = \left[ L_1(n), …, L_N(n) \right] is defined as
{\bf L}(n) = \sum_{i=0}^n {\bf \pi}(i) = \sum_{i=0}^n {\bf \pi}(0) {\bf P}^i
where {\bf \pi}(i) = {\bf \pi}(0){\bf P}^i is the state occupancy probability after i transitions, and {\bf \pi}(0) = \left[\pi_1(0), …, \pi_N(0) \right] are the initial state occupancy probabilities.
If \bf P is absorbing, i.e., the stochastic process eventually enters a state with no outgoing transitions, then we can compute the expected number of visits until absorption \bf L. To do so, we first rearrange the states by rewriting \bf P as
/ Q | R \
P = |---+---|
\ 0 | I /
where the first t states are transient and the last r states are absorbing (t+r = N). The matrix {\bf N} = ({\bf I} - {\bf Q})^{-1} is called the fundamental matrix; N_{i,j} is the expected number of times the process is in the j-th transient state assuming it started in the i-th transient state. If we reshape \bf N to the size of \bf P (filling missing entries with zeros), we have that, for absorbing chains, {\bf L} = {\bf \pi}(0){\bf N}.
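As a small illustration (a hypothetical two-state chain, not one of the manual's examples), consider a chain where state 2 is absorbing and state 1 moves to state 2 with probability 0.5; the fundamental matrix can be computed directly:
P = [0.5 0.5; 0 1];      % state 2 is absorbing, state 1 is transient
Q = P(1,1);              % restriction of P to the transient state(s)
Nf = inv(eye(1) - Q)     % fundamental matrix N = (I - Q)^(-1)
⇒ Nf = 2
That is, starting from state 1 the process visits state 1 twice on average (counting the initial visit) before being absorbed.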
Function File: L = dtmcexps (P, n, p0)
Function File: L = dtmcexps (P, p0)
Compute the expected number of visits to each state during the first n transitions, or until absorption.
INPUTS
P(i,j)
N \times N transition matrix. P(i,j) is the transition probability from state i to state j.
n
Number of steps during which the expected number of visits are computed (n ≥ 0). If n=0, returns p0. If n > 0, returns the expected number of visits after exactly n transitions.
p0(i)
Initial state occupancy probabilities; p0(i) is the probability that the system is in state i at step 0.
OUTPUTS
L(i)
When called with two arguments, L(i) is the expected number of visits to state i before absorption. When called with three arguments, L(i) is the expected number of visits to state i during the first n transitions.
REFERENCES
• Grinstead, Charles M.; Snell, J. Laurie (July 1997). Introduction to Probability, Ch. 11: Markov Chains. American Mathematical Society. ISBN 978-0821807491.
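Continuing the two-state absorbing chain used above, a usage sketch of dtmcexps until absorption (the values shown are those predicted by the formula {\bf L} = {\bf \pi}(0){\bf N}):
P = [0.5 0.5; 0 1];
p0 = [1 0];
L = dtmcexps(P, p0)
⇒ L = 2 0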
#### 3.1.4 Time-averaged expected sojourn times
Function File: M = dtmctaexps (P, n, p0)
Function File: M = dtmctaexps (P, p0)
Compute the time-averaged sojourn times M(i), defined as the fraction of time spent in state i during the first n transitions (or until absorption), assuming that the state occupancy probabilities at time 0 are p0.
INPUTS
P(i,j)
N \times N transition probability matrix.
n
Number of transitions during which the time-averaged expected sojourn times are computed (scalar, n ≥ 0). if n = 0, returns p0.
p0(i)
Initial state occupancy probabilities (vector of length N).
OUTPUTS
M(i)
If this function is called with three arguments, M(i) is the expected fraction of steps {0, …, n} spent in state i, assuming that the state occupancy probabilities at time zero are p0. If this function is called with two arguments, M(i) is the expected fraction of steps spent in state i until absorption. M is a vector of length N.
#### 3.1.5 Mean Time to Absorption
The mean time to absorption is defined as the average number of transitions that are required to enter an absorbing state, starting from a transient state or given initial state occupancy probabilities {\bf \pi}(0).
Let t_i be the expected number of transitions before being absorbed in any absorbing state, starting from state i. The vector {\bf t} = [t_1, …, t_N] can be computed from the fundamental matrix \bf N (see Expected number of visits (DTMC)) as
t = N c
where \bf c is a column vector of 1’s.
Let {\bf B} = [ B_{i, j} ] be a matrix where B_{i, j} is the probability of being absorbed in state j, starting from transient state i. Again, using matrices \bf N and \bf R (see Expected number of visits (DTMC)) we can write
B = N R
Function File: [t N B] = dtmcmtta (P)
Function File: [t N B] = dtmcmtta (P, p0)
Compute the expected number of steps before absorption for a DTMC with state space {1, …, N} and transition probability matrix P.
INPUTS
P(i,j)
N \times N transition probability matrix. P(i,j) is the transition probability from state i to state j.
p0(i)
Initial state occupancy probabilities (vector of length N).
OUTPUTS
t
t(i)
When called with a single argument, t is a vector of length N such that t(i) is the expected number of steps before being absorbed in any absorbing state, starting from state i; if i is absorbing, t(i) = 0. When called with two arguments, t is a scalar, and represents the expected number of steps before absorption, starting from the initial state occupancy probability p0.
N(i)
N(i,j)
When called with a single argument, N is the N \times N fundamental matrix for P. N(i,j) is the expected number of visits to transient state j before absorption, if the system started in transient state i. The initial state is counted if i = j. When called with two arguments, N is a vector of length N such that N(j) is the expected number of visits to transient state j before absorption, given initial state occupancy probability P0.
B(i)
B(i,j)
When called with a single argument, B is a N \times N matrix where B(i,j) is the probability of being absorbed in state j, starting from transient state i; if j is not absorbing, B(i,j) = 0; if i is absorbing, B(i,i) = 1 and B(i,j) = 0 for all i \neq j. When called with two arguments, B is a vector of length N where B(j) is the probability of being absorbed in state j, given initial state occupancy probabilities p0.
REFERENCES
• Grinstead, Charles M.; Snell, J. Laurie (July 1997). Introduction to Probability, Ch. 11: Markov Chains. American Mathematical Society. ISBN 978-0821807491.
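For the same two-state absorbing chain, a usage sketch (the expected values follow from t = N c and B = N R):
P = [0.5 0.5; 0 1];
[t N B] = dtmcmtta(P)
Here t(1) should be 2 (two transitions on average before absorption when starting from state 1), t(2) = 0 because state 2 is absorbing, and B(1,2) = 1 since absorption can only occur in state 2.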
#### 3.1.6 First Passage Times
The First Passage Time M_{i, j} is the average number of transitions needed to enter state j for the first time, starting from state i. Matrix \bf M satisfies the property
M_{i, j} = 1 + \sum_{k \neq j} P_{i, k} M_{k, j}
To compute {\bf M} = [ M_{i, j}] a different formulation is used. Let \bf W be the N \times N matrix having each row equal to the stationary state occupancy probability vector \bf \pi for \bf P; let \bf I be the N \times N identity matrix. Define \bf Z as follows:
{\bf Z} = ({\bf I} - {\bf P} + {\bf W})^{-1}
Then, we have that
M_{i, j} = \frac{Z_{j, j} - Z_{i, j}}{\pi_j}
According to the definition above, M_{i,i} = 0. We arbitrarily set M_{i,i} to the mean recurrence time r_i for state i, that is the average number of transitions needed to return to state i starting from it. r_i is:
r_i = \frac{1}{\pi_i}
Function File: M = dtmcfpt (P)
Compute mean first passage times and mean recurrence times for an irreducible discrete-time Markov chain over the state space {1, …, N}.
INPUTS
P(i,j)
transition probability from state i to state j. P must be an irreducible stochastic matrix, which means that the sum of each row must be 1 (\sum_{j=1}^N P_{i j} = 1), and the rank of P must be N.
OUTPUTS
M(i,j)
For all 1 ≤ i, j ≤ N, i \neq j, M(i,j) is the average number of transitions before state j is entered for the first time, starting from state i. M(i,i) is the mean recurrence time of state i, and represents the average time needed to return to state i.
REFERENCES
• Grinstead, Charles M.; Snell, J. Laurie (July 1997). Introduction to Probability, Ch. 11: Markov Chains. American Mathematical Society. ISBN 978-0821807491.
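For instance, for a simple two-state irreducible chain (a usage sketch; the values follow from the formulas above):
P = [0.75 0.25; 0.5 0.5];
M = dtmcfpt(P)
With stationary vector \pi = [2/3, 1/3], the mean recurrence times are M(1,1) = 1.5 and M(2,2) = 3, while the first passage times are M(1,2) = 4 and M(2,1) = 2.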
### 3.2 Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a continuous-time Markov chain if, for all integers n, and for any sequence t_0, t_1 , …, t_n, t_{n+1} such that t_0 < t_1 < … < t_n < t_{n+1}, we have
P(X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, ..., X(t_0) = x_0) = P(X(t_{n+1}) = x_{n+1} | X(t_n) = x_n)
A continuous-time Markov chain is defined according to an infinitesimal generator matrix {\bf Q} = [Q_{i,j}], where for each i \neq j, Q_{i, j} is the transition rate from state i to state j. The matrix \bf Q must satisfy the property that, for all i, \sum_{j=1}^N Q_{i, j} = 0.
Function File: [result err] = ctmcchkQ (Q)
If Q is a valid infinitesimal generator matrix, return the size (number of rows or columns) of Q. If Q is not an infinitesimal generator matrix, set result to zero, and err to an appropriate error string.
Similarly to the DTMC case, a CTMC is irreducible if every state is eventually reachable from every other state in finite time.
Function File: [r s] = ctmcisir (P)
Check if Q is irreducible, and identify Strongly Connected Components (SCC) in the transition graph of the CTMC with infinitesimal generator matrix Q.
INPUTS
Q(i,j)
Infinitesimal generator matrix. Q is a N \times N square matrix where Q(i,j) is the transition rate from state i to state j, for 1 ≤ i, j ≤ N, i \neq j.
OUTPUTS
r
1 if Q is irreducible, 0 otherwise.
s(i)
strongly connected component (SCC) that state i belongs to. SCCs are numbered 1, 2, …. If the graph is strongly connected, then there is a single SCC and the predicate all(s == 1) evaluates to true.
#### 3.2.1 State occupancy probabilities
Similarly to the discrete case, we denote with {\bf \pi}(t) = \left[\pi_1(t), …, \pi_N(t) \right] the state occupancy probability vector at time t. \pi_i(t) is the probability that the system is in state i at time t ≥ 0.
Given the infinitesimal generator matrix \bf Q and initial state occupancy probabilities {\bf \pi}(0) = \left[\pi_1(0), …, \pi_N(0)\right], the occupancy probabilities {\bf \pi}(t) at time t can be computed as:
\pi(t) = \pi(0) exp(Qt)
where \exp( {\bf Q} t ) is the matrix exponential of {\bf Q} t. Under certain conditions, there exists a stationary state occupancy probability {\bf \pi} = \lim_{t \rightarrow +\infty} {\bf \pi}(t) that is independent from {\bf \pi}(0). \bf \pi is the solution of the following linear system:
/
| \pi Q = 0
| \pi 1^T = 1
\
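As in the discrete-time case, this system can be solved directly with ordinary linear algebra (an illustrative sketch, not taken from the package); for the two-state generator used in the example below the result matches ctmc:
Q = [-1 1; 1 -1];
N = rows(Q);
A = [Q'; ones(1,N)];   % pi*Q = 0  and  pi*1' = 1, transposed
b = [zeros(N,1); 1];
p = (A \ b)'
⇒ p = 0.50000 0.50000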
Function File: p = ctmc (Q)
Function File: p = ctmc (Q, t, p0)
Compute stationary or transient state occupancy probabilities for a continuous-time Markov chain.
With a single argument, compute the stationary state occupancy probabilities p(1), …, p(N) for a continuous-time Markov chain with finite state space {1, …, N} and N \times N infinitesimal generator matrix Q. With three arguments, compute the state occupancy probabilities p(1), …, p(N) that the system is in state i at time t, given initial state occupancy probabilities p0(1), …, p0(N) at time 0.
INPUTS
Q(i,j)
Infinitesimal generator matrix. Q is a N \times N square matrix where Q(i,j) is the transition rate from state i to state j, for 1 ≤ i \neq j ≤ N. Q must satisfy the property that \sum_{j=1}^N Q_{i, j} = 0
t
Time at which to compute the transient probability (t ≥ 0). If omitted, the function computes the steady state occupancy probability vector.
p0(i)
probability that the system is in state i at time 0.
OUTPUTS
p(i)
If this function is invoked with a single argument, p(i) is the steady-state probability that the system is in state i, i = 1, …, N. If this function is invoked with three arguments, p(i) is the probability that the system is in state i at time t, given the initial occupancy probabilities p0(1), …, p0(N).
EXAMPLE
Consider a two-state CTMC where all transition rates between states are equal to 1. The stationary state occupancy probabilities can be computed as follows:
Q = [ -1 1; ...
1 -1 ];
q = ctmc(Q)
⇒ q = 0.50000 0.50000
#### 3.2.2 Birth-Death Process
Function File: Q = ctmcbd (b, d)
Returns the infinitesimal generator matrix Q for a continuous birth-death process over the finite state space {1, …, N}. For each i=1, …, (N-1), b(i) is the transition rate from state i to state (i+1), and d(i) is the transition rate from state (i+1) to state i.
Matrix \bf Q is therefore defined as:
/ \
| -b(1) b(1) |
| d(1) -(d(1)+b(2)) b(2) |
| d(2) -(d(2)+b(3)) b(3) |
| |
| ... ... ... |
| |
| d(N-2) -(d(N-2)+b(N-1)) b(N-1) |
| d(N-1) -d(N-1) |
\ /
where b(i) and d(i) are the birth and death rates, respectively.
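For example, a three-state birth-death chain with birth rates 1 and death rates 2 has the familiar geometric steady-state distribution (a usage sketch; values are approximate):
Q = ctmcbd( [1 1], [2 2] );
p = ctmc(Q)
⇒ p = 0.57143 0.28571 0.14286
The stationary probabilities are proportional to [1, 1/2, 1/4], i.e. [4/7, 2/7, 1/7].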
#### 3.2.3 Expected Sojourn Times
Given an N-state continuous-time Markov chain with infinitesimal generator matrix \bf Q, we define the vector {\bf L}(t) = \left[L_1(t), …, L_N(t)\right] such that L_i(t) is the expected sojourn time in state i during the interval [0,t), assuming that the initial occupancy probabilities at time 0 were {\bf \pi}(0). {\bf L}(t) can be expressed as the solution of the following differential equation:
\frac{d{\bf L}}{dt}(t) = {\bf L}(t) {\bf Q} + {\bf \pi}(0), \qquad {\bf L}(0) = {\bf 0}
Alternatively, {\bf L}(t) can also be expressed in integral form as:
{\bf L}(t) = \int_0^t {\bf \pi}(u) \, du
where {\bf \pi}(t) = {\bf \pi}(0) \exp({\bf Q}t) is the state occupancy probability at time t; \exp({\bf Q}t) is the matrix exponential of {\bf Q}t.
If there are absorbing states, we can define the vector of expected sojourn times until absorption {\bf L}(\infty), where for each transient state i, L_i(\infty) is the expected total time spent in state i until absorption, assuming that the system started with given state occupancy probabilities {\bf \pi}(0). Let \tau be the set of transient (i.e., non absorbing) states; let {\bf Q}_\tau be the restriction of \bf Q to the transient sub-states only. Similarly, let {\bf \pi}_\tau(0) be the restriction of the initial state occupancy probability vector {\bf \pi}(0) to transient states \tau.
The expected time to absorption {\bf L}_\tau(\infty) is defined as the solution of the following equation:
{\bf L}_\tau(\infty) {\bf Q}_\tau = -{\bf \pi}_\tau(0)
Function File: L = ctmcexps (Q, t, p )
Function File: L = ctmcexps (Q, p)
With three arguments, compute the expected times L(i) spent in each state i during the time interval [0,t], assuming that the initial occupancy vector is p. With two arguments, compute the expected time L(i) spent in each transient state i until absorption.
Note: In its current implementation, this function requires that an absorbing state is reachable from any non-absorbing state of Q.
INPUTS
Q(i,j)
N \times N infinitesimal generator matrix. Q(i,j) is the transition rate from state i to state j, 1 ≤ i, j ≤ N, i \neq j. The matrix Q must also satisfy the condition \sum_{j=1}^N Q_{i,j} = 0 for every i=1, …, N.
t
If given, compute the expected sojourn times in [0,t]
p(i)
Initial occupancy probability vector; p(i) is the probability the system is in state i at time 0, i = 1, …, N
OUTPUTS
L(i)
If this function is called with three arguments, L(i) is the expected time spent in state i during the interval [0,t]. If this function is called with two arguments L(i) is the expected time spent in transient state i until absorption; if state i is absorbing, L(i) is zero.
EXAMPLE
Let us consider a 4-state pure-birth continuous-time process where the transition rate from state i to state (i+1) is \lambda_i = i \lambda (i=1, 2, 3), with \lambda = 0.5. The following code computes the expected sojourn time for each state i, given initial occupancy probabilities {\bf \pi}_0=[1, 0, 0, 0].
lambda = 0.5;
N = 4;
b = lambda*[1:N-1];
d = zeros(size(b));
Q = ctmcbd(b,d);
t = linspace(0,10,100);
p0 = zeros(1,N); p0(1)=1;
L = zeros(length(t),N);
for i=1:length(t)
L(i,:) = ctmcexps(Q,t(i),p0);
endfor
plot( t, L(:,1), ";State 1;", "linewidth", 2, ...
t, L(:,2), ";State 2;", "linewidth", 2, ...
t, L(:,3), ";State 3;", "linewidth", 2, ...
t, L(:,4), ";State 4;", "linewidth", 2 );
legend("location","northwest"); legend("boxoff");
xlabel("Time");
ylabel("Expected sojourn time");
#### 3.2.4 Time-Averaged Expected Sojourn Times
Function File: M = ctmctaexps (Q, t, p0)
Function File: M = ctmctaexps (Q, p0)
Compute the time-averaged sojourn time M(i), defined as the fraction of the time interval [0,t] (or until absorption) spent in state i, assuming that the state occupancy probabilities at time 0 are p0.
INPUTS
Q(i,j)
Infinitesimal generator matrix. Q(i,j) is the transition rate from state i to state j, 1 ≤ i,j ≤ N, i \neq j. The matrix Q must also satisfy the condition \sum_{j=1}^N Q_{i,j} = 0
t
Time. If omitted, the results are computed until absorption.
p0(i)
initial state occupancy probabilities. p0(i) is the probability that the system is in state i at time 0, i = 1, …, N
OUTPUTS
M(i)
When called with three arguments, M(i) is the expected fraction of the interval [0,t] spent in state i, assuming that the state occupancy probability vector at time zero is p0. When called with two arguments, M(i) is the expected fraction of time until absorption spent in state i; in this case the mean time to absorption is sum(M).
EXAMPLE
lambda = 0.5;
N = 4;
birth = lambda*linspace(1,N-1,N-1);
death = zeros(1,N-1);
Q = diag(birth,1)+diag(death,-1);
Q -= diag(sum(Q,2));
t = linspace(1e-5,30,100);
p = zeros(1,N); p(1)=1;
M = zeros(length(t),N);
for i=1:length(t)
M(i,:) = ctmctaexps(Q,t(i),p);
endfor
clf;
plot(t, M(:,1), ";State 1;", "linewidth", 2, ...
t, M(:,2), ";State 2;", "linewidth", 2, ...
t, M(:,3), ";State 3;", "linewidth", 2, ...
t, M(:,4), ";State 4 (absorbing);", "linewidth", 2 );
legend("location","east"); legend("boxoff");
xlabel("Time");
ylabel("Time-averaged Expected sojourn time");
#### 3.2.5 Mean Time to Absorption
Function File: t = ctmcmtta (Q, p)
Compute the Mean-Time to Absorption (MTTA) of the CTMC described by the infinitesimal generator matrix Q, starting from initial occupancy probabilities p. If there are no absorbing states, this function fails with an error.
INPUTS
Q(i,j)
N \times N infinitesimal generator matrix. Q(i,j) is the transition rate from state i to state j, i \neq j. The matrix Q must satisfy the condition \sum_{j=1}^N Q_{i,j} = 0
p(i)
probability that the system is in state i at time 0, for each i=1, …, N
OUTPUTS
t
Mean time to absorption of the process represented by matrix Q. If there are no absorbing states, this function fails.
REFERENCES
• G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998.
EXAMPLE
Let us consider a simple model of redundant disk array. We assume that the array is made of 5 independent disks and can tolerate up to 2 disk failures without losing data. If three or more disks break, the array is dead and unrecoverable. We want to estimate the Mean-Time-To-Failure (MTTF) of the disk array.
We model this system as a continuous-time Markov chain with 4 states and state space { 2, 3, 4, 5 }. In state i there are exactly i active (i.e., non-failed) disks; state 2 is absorbing. Let \mu be the failure rate of a single disk. The system starts in state 5 (all disks operational). We use a pure death process, where the death rate from state i to state (i-1) is i \mu, for i = 3, 4, 5.
The MTTF of the disk array is the MTTA of the Markov Chain, and can be computed as follows:
mu = 0.01;
death = [ 3 4 5 ] * mu;
birth = 0*death;
Q = ctmcbd(birth,death);
t = ctmcmtta(Q,[0 0 0 1])
⇒ t = 78.333
#### 3.2.6 First Passage Times
Function File: M = ctmcfpt (Q)
Function File: m = ctmcfpt (Q, i, j)
Compute mean first passage times for an irreducible continuous-time Markov chain.
INPUTS
Q(i,j)
Infinitesimal generator matrix. Q is a N \times N square matrix where Q(i,j) is the transition rate from state i to state j, for 1 ≤ i, j ≤ N, i \neq j. Transition rates must be nonnegative, and \sum_{j=1}^N Q_{i,j} = 0
i
Initial state.
j
Destination state.
OUTPUTS
M(i,j)
average time before state j is visited for the first time, starting from state i. We let M(i,i) = 0.
m
m is the average time before state j is visited for the first time, starting from state i.
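As a usage sketch (the rates below are illustrative assumptions), consider a two-state chain with rate 1 from state 1 to state 2 and rate 0.5 back:
Q = [ -1    1  ; ...
       0.5 -0.5 ];
M = ctmcfpt(Q)        # matrix of mean first passage times
m = ctmcfpt(Q, 1, 2)  # mean time to reach state 2 starting from state 1
# the only transition out of state 1 has rate 1, so m = 1;
# similarly M(2,1) = 1/0.5 = 2, and M(i,i) = 0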
## 4 Single Station Queueing Systems
Single Station Queueing Systems contain a single station, and can usually be analyzed easily. The queueing package contains functions for handling the following types of queues:
### 4.1 The M/M/1 System
The M/M/1 system contains a single server connected to an unbounded FCFS queue. Requests arrive according to a Poisson process with rate \lambda; the service time is exponentially distributed with average service rate \mu. The system is stable if \lambda < \mu.
Function File: [U, R, Q, X, p0] = qsmm1 (lambda, mu)
Function File: pk = qsmm1 (lambda, mu, k)
Compute utilization, response time, average number of requests and throughput for a M/M/1 queue.
INPUTS
lambda
Arrival rate (lambda ≥ 0).
mu
Service rate (mu > lambda).
k
Number of requests in the system (k ≥ 0).
OUTPUTS
U
Server utilization
R
Server response time
Q
Average number of requests in the system
X
Server throughput. If the system is ergodic (mu > lambda), we always have X = lambda
p0
Steady-state probability that there are no requests in the system.
pk
Steady-state probability that there are k requests in the system (including the one being served).
If this function is called with less than three input parameters, lambda and mu can be vectors of the same size. In this case, the results will be vectors as well.
REFERENCES
• G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998, Section 6.3
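A minimal usage sketch (the rates are illustrative; the commented values follow from the standard M/M/1 formulas):
lambda = 0.9; mu = 1.0;
[U R Q X p0] = qsmm1(lambda, mu);
# U = lambda/mu = 0.9,  R = 1/(mu-lambda) = 10,
# Q = U/(1-U) = 9,      X = lambda = 0.9,   p0 = 1-U = 0.1
p3 = qsmm1(lambda, mu, 3)   # P(3 requests) = (1-U)*U^3 = 0.0729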
### 4.2 The M/M/m System
The M/M/m system is similar to the M/M/1 system, except that there are m \geq 1 identical servers connected to a shared FCFS queue. Thus, at most m requests can be served at the same time. The M/M/m system can be seen as a single server with load-dependent service rate \mu(n), which is a function of the number n of requests in the system:
mu(n) = min(m,n)*mu
where \mu is the service rate of each individual server.
Function File: [U, R, Q, X, p0, pm] = qsmmm (lambda, mu)
Function File: [U, R, Q, X, p0, pm] = qsmmm (lambda, mu, m)
Function File: pk = qsmmm (lambda, mu, m, k)
Compute utilization, response time, average number of requests in service and throughput for a M/M/m queue, a queueing system with m identical servers connected to a single FCFS queue.
INPUTS
lambda
Arrival rate (lambda>0).
mu
Service rate (mu>lambda).
m
Number of servers (m ≥ 1). Default is m=1.
k
Number of requests in the system (k ≥ 0).
OUTPUTS
U
Service center utilization, U = \lambda / (m \mu).
R
Service center mean response time
Q
Average number of requests in the system
X
Service center throughput. If the system is ergodic, we will always have X = lambda
p0
Steady-state probability that there are 0 requests in the system
pm
Steady-state probability that an arriving request has to wait in the queue
pk
Steady-state probability that there are k requests in the system (including the one being served).
If this function is called with less than four parameters, lambda, mu and m can be vectors of the same size. In this case, the results will be vectors as well.
REFERENCES
• G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998, Section 6.5
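A usage sketch with illustrative parameters:
lambda = 3; mu = 1; m = 4;
[U R Q X p0 pm] = qsmmm(lambda, mu, m);
U    # per-server utilization lambda/(m*mu) = 0.75
pm   # probability that an arriving request has to wait (Erlang-C probability)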
### 4.3 The Erlang-B Formula
Function File: B = erlangb (A, m)
Compute the steady-state blocking probability in the Erlang loss model.
The Erlang-B formula E_B(A, m) gives the probability that an open system with m identical servers, arrival rate \lambda, individual service rate \mu and offered load A = \lambda / \mu has all servers busy. This corresponds to the rejection probability of an M/M/m/0 system with m servers and no queue.
INPUTS
A
Offered load, defined as A = \lambda / \mu where \lambda is the mean arrival rate and \mu the mean service rate of each individual server (real, A > 0).
m
Number of identical servers (integer, m ≥ 1). Default m = 1
OUTPUTS
B
The value E_B(A, m)
A or m can be vectors, and in this case, the results will be vectors as well.
REFERENCES
• G. Zeng, Two common properties of the Erlang-B function, Erlang-C function, and Engset blocking function, Mathematical and Computer Modelling, Volume 37, Issues 12-13, June 2003, Pages 1287-1296
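A sketch of how the formula might be used to dimension a loss system (the load and server range below are illustrative):
A = 10;                    # offered load, in Erlangs
m = 1:20;                  # candidate numbers of servers
B = erlangb(A, m);         # blocking probability for each m
m_min = min(m(B < 0.01))   # smallest number of servers with blocking below 1%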
### 4.4 The Erlang-C Formula
Function File: C = erlangc (A, m)
Compute the steady-state probability of delay in the Erlang delay model.
The Erlang-C formula E_C(A, m) gives the probability that an open queueing system with m identical servers, infinite waiting space, arrival rate \lambda, individual service rate \mu and offered load A = \lambda / \mu has all the servers busy. This is the waiting probability in an M/M/m/\infty system with m servers and an infinite queue.
INPUTS
A
Offered load. A = \lambda / \mu where \lambda is the mean arrival rate and \mu the mean service rate of each individual server (real, 0 < A < m).
m
Number of identical servers (integer, m ≥ 1). Default m = 1
OUTPUTS
C
The value E_C(A, m)
A or m can be vectors, and in this case, the results will be vectors as well.
REFERENCES
• G. Zeng, Two common properties of the Erlang-B function, Erlang-C function, and Engset blocking function, Mathematical and Computer Modelling, Volume 37, Issues 12-13, June 2003, Pages 1287-1296
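A sketch comparing the delay probability with the blocking probability of the corresponding loss system (illustrative values; note the constraint 0 < A < m):
A = 10; m = 12;
C = erlangc(A, m)   # probability that an arriving request has to wait
B = erlangb(A, m)   # blocking probability of the M/M/m/0 loss system
# for the same A and m, E_C(A,m) >= E_B(A,m)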
### 4.5 The Engset Formula
Function File: B = engset (A, m, n)
Evaluate the Engset loss formula.
The Engset formula computes the blocking probability P_b(A,m,n) for a system with a finite population of n users, m identical servers, no queue, individual service rate \mu, individual arrival rate \lambda (i.e., the time until a user tries to request service is exponentially distributed with mean 1/\lambda), and offered load A=\lambda/\mu.
INPUTS
A
Offered load, defined as A = \lambda / \mu where \lambda is the mean arrival rate and \mu the mean service rate of each individual server (real, A > 0).
m
Number of identical servers (integer, m ≥ 1). Default m = 1
n
Number of users in the population (integer, n ≥ 1). Default n = 1
OUTPUTS
B
The value P_b(A, m, n)
A, m or n can be vectors, and in this case, the results will be vectors as well.
REFERENCES
• G. Zeng, Two common properties of the Erlang-B function, Erlang-C function, and Engset blocking function, Mathematical and Computer Modelling, Volume 37, Issues 12-13, June 2003, Pages 1287-1296
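A usage sketch with illustrative values:
A = 0.2;              # per-user offered load (lambda/mu)
m = 5;                # identical servers
n = 10;               # finite user population
B = engset(A, m, n)   # blocking probability P_b(A, m, n)
# n (or A, m) may also be a vector, e.g. engset(A, m, [10 20 50])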
### 4.6 The M/M/inf System
The M/M/\infty system is a special case of M/M/m system with infinitely many identical servers (i.e., m = \infty). Each new request is always assigned to a new server, so that queueing never occurs. The M/M/\infty system is always stable.
Function File: [U, R, Q, X, p0] = qsmminf (lambda, mu)
Function File: pk = qsmminf (lambda, mu, k)
Compute utilization, response time, average number of requests and throughput for an infinite-server queue.
The M/M/\infty system has an infinite number of identical servers. Such a system is always stable (i.e., the mean queue length is always finite) for any arrival and service rates.
INPUTS
lambda
Arrival rate (lambda>0).
mu
Service rate (mu>0).
k
Number of requests in the system (k ≥ 0).
OUTPUTS
U
Traffic intensity (defined as \lambda/\mu). Note that this is different from the utilization, which in the case of M/M/\infty centers is always zero.
R
Service center response time.
Q
Average number of requests in the system (which is equal to the traffic intensity \lambda/\mu).
X
Throughput (which is always equal to X = lambda).
p0
Steady-state probability that there are no requests in the system
pk
Steady-state probability that there are k requests in the system (including the one being served).
If this function is called with less than three arguments, lambda and mu can be vectors of the same size. In this case, the results will be vectors as well.
REFERENCES
• G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998, Section 6.4
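A brief sketch with illustrative values; in the M/M/\infty system the stationary number of requests is Poisson distributed with mean \lambda/\mu:
lambda = 10; mu = 2;
[U R Q X p0] = qsmminf(lambda, mu);
# U = Q = lambda/mu = 5 (traffic intensity), R = 1/mu = 0.5, X = lambda
p5 = qsmminf(lambda, mu, 5)   # P(5 requests) = exp(-5)*5^5/factorial(5)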
### 4.7 The M/M/1/K System
In a M/M/1/K finite capacity system there is a single server, and there can be at most K jobs at any time (including the job currently in service), K > 1. If a new request tries to join the system when there are already K other requests, the request is lost. The queue has K-1 slots. The M/M/1/K system is always stable, regardless of the arrival and service rates.
Function File: [U, R, Q, X, p0, pK] = qsmm1k (lambda, mu, K)
Function File: pn = qsmm1k (lambda, mu, K, n)
Compute utilization, response time, average number of requests and throughput for a M/M/1/K finite capacity system.
In a M/M/1/K queue there is a single server and a queue with finite capacity: the maximum number of requests in the system (including the request being served) is K, and the maximum queue length is therefore K-1.
INPUTS
lambda
Arrival rate (lambda>0).
mu
Service rate (mu>0).
K
Maximum number of requests allowed in the system (K ≥ 1).
n
Number of requests in the system (0 ≤ n ≤ K).
OUTPUTS
U
Service center utilization, which is defined as U = 1-p0
R
Service center response time
Q
Average number of requests in the system
X
Service center throughput
p0
Steady-state probability that there are no requests in the system
pK
Steady-state probability that there are K requests in the system (i.e., that the system is full)
pn
Steady-state probability that there are n requests in the system (including the one being served).
If this function is called with less than four arguments, lambda, mu and K can be vectors of the same size. In this case, the results will be vectors as well.
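A usage sketch with illustrative values:
lambda = 0.8; mu = 1.0; K = 5;
[U R Q X p0 pK] = qsmm1k(lambda, mu, K);
pK                              # probability that an arriving request is lost (system full)
p2 = qsmm1k(lambda, mu, K, 2)   # P(2 requests in the system)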
### 4.8 The M/M/m/K System
The M/M/m/K finite capacity system is similar to the M/M/1/K system except that the number of servers is m, where 1 \leq m \leq K. The queue has K-m slots. The M/M/m/K system is always stable.
Function File: [U, R, Q, X, p0, pK] = qsmmmk (lambda, mu, m, K)
Function File: pn = qsmmmk (lambda, mu, m, K, n)
Compute utilization, response time, average number of requests and throughput for a M/M/m/K finite capacity system. In a M/M/m/K system there are m \geq 1 identical service centers sharing a fixed-capacity queue. At any time, at most K ≥ m requests can be in the system, including those being served. The maximum queue length is K-m. This function generates and solves the underlying CTMC.
INPUTS
lambda
Arrival rate (lambda>0)
mu
Service rate (mu>0)
m
Number of servers (m ≥ 1)
K
Maximum number of requests allowed in the system, including those being served (K ≥ m)
n
Number of requests in the system (0 ≤ n ≤ K).
OUTPUTS
U
Service center utilization
R
Service center response time
Q
Average number of requests in the system
X
Service center throughput
p0
Steady-state probability that there are no requests in the system.
pK
Steady-state probability that there are K requests in the system (i.e., probability that the system is full).
pn
Steady-state probability that there are n requests in the system (including those being served).
If this function is called with less than five arguments, lambda, mu, m and K can be either scalars, or vectors of the same size. In this case, the results will be vectors as well.
REFERENCES
• G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998, Section 6.6
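A usage sketch with illustrative values:
lambda = 3; mu = 1; m = 4; K = 10;
[U R Q X p0 pK] = qsmmmk(lambda, mu, m, K);
X    # effective throughput: lambda*(1-pK), since blocked arrivals are lost
pK   # probability that an arriving request finds the system full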
### 4.9 The Asymmetric M/M/m System
The Asymmetric M/M/m system contains m servers connected to a single queue. Unlike the M/M/m system, in the asymmetric M/M/m system each server may have its own service rate.
Function File: [U, R, Q, X] = qsammm (lambda, mu)
Compute approximate utilization, response time, average number of requests in service and throughput for an asymmetric M/M/m queue. In this type of system there are m different servers connected to a single queue. Each server has its own (possibly different) service rate. If there is more than one server available, requests are routed to a randomly-chosen one.
INPUTS
lambda
Arrival rate (lambda>0)
mu
mu(i) is the service rate of server i, 1 ≤ i ≤ m. The system must be ergodic (lambda < sum(mu)).
OUTPUTS
U
Approximate service center utilization, U = \lambda / ( \sum_{i=1}^m \mu_i ).
R
Approximate service center response time
Q
Approximate number of requests in the system
X
Approximate system throughput. If the system is ergodic, X = lambda
REFERENCES
• G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998
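A usage sketch with illustrative values for two servers of different speed:
lambda = 1.5;
mu = [1.0 2.0];      # per-server service rates (heterogeneous)
[U R Q X] = qsammm(lambda, mu);
# approximate utilization U = lambda/sum(mu) = 0.5; X = lambda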
### 4.10 The M/G/1 System
Function File: [U, R, Q, X, p0] = qsmg1 (lambda, xavg, x2nd)
Compute utilization, response time, average number of requests and throughput for a M/G/1 system. The service time distribution is described by its mean xavg and by its second moment x2nd. The computations are based on results from L. Kleinrock, Queueing Systems, Wiley, Vol. 2, and on the Pollaczek-Khinchine formula.
INPUTS
lambda
Arrival rate
xavg
Average service time
x2nd
Second moment of service time distribution
OUTPUTS
U
Service center utilization
R
Service center response time
Q
Average number of requests in the system
X
Service center throughput
p0
Steady-state probability that there are no requests in the system
lambda, xavg and x2nd can be vectors of the same size. In this case, the results will be vectors as well.
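For instance, an M/D/1 queue with deterministic service time D can be analyzed by passing xavg = D and x2nd = D^2, since the second moment of a constant equals its square (illustrative sketch):
lambda = 0.5;
D = 1;                            # constant service time
[U R Q X p0] = qsmg1(lambda, D, D^2);
# Pollaczek-Khinchine: Q = U + lambda^2*D^2/(2*(1-U)) = 0.75 with U = 0.5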
### 4.11 The M/H_m/1 System
Function File: [U, R, Q, X, p0] = qsmh1 (lambda, mu, alpha)
Compute utilization, response time, average number of requests and throughput for a M/H_m/1 system. In this system, the customer service times have hyper-exponential distribution:
B(x) = \sum_{j=1}^m \alpha_j \left( 1 - e^{-\mu_j x} \right), \qquad x > 0
where \alpha_j is the probability that the request is served at phase j, in which case the average service rate is \mu_j. After completing service at phase j, for some j, the request exits the system.
INPUTS
lambda
Arrival rate
mu
mu(j) is the phase j service rate. The total number of phases m is length(mu).
alpha
alpha(j) is the probability that a request is served at phase j. alpha must have the same size as mu.
OUTPUTS
U
Service center utilization
R
Service center response time
Q
Average number of requests in the system
X
Service center throughput
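A usage sketch with an illustrative two-phase hyper-exponential service time:
lambda = 0.4;
mu    = [1.0 0.25];    # phase service rates
alpha = [0.7 0.3];     # phase selection probabilities
[U R Q X] = qsmh1(lambda, mu, alpha);
# mean service time is dot(alpha, 1./mu) = 0.7*1 + 0.3*4 = 1.9,
# hence U = lambda*1.9 = 0.76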
## 5 Queueing Networks
### 5.1 Introduction to QNs
Queueing Networks (QN) are a simple modeling notation that can be used to analyze many kinds of systems. In its simplest form, a QN is made of K service centers; center k has a queue connected to m_k (usually identical) servers. Arriving customers (requests) join the queue if there is at least one slot available. Requests are served according to a (de)queueing policy (e.g., FIFO). After service completes, requests leave the server and can join another queue or exit from the system.
Service centers where m_k = \infty are called delay centers or infinite servers. In this kind of center there is always an available server, so that queueing never occurs.
Requests join the queue according to a queueing policy, such as:
FCFS
First-Come-First-Served
LCFS-PR
Last-Come-First-Served, Preemptive Resume
PS
Processor Sharing
IS
Infinite Server (m_k = \infty).
Queueing networks can be open or closed. In open networks there is an infinite population of requests; new customers are generated outside the system, and eventually leave the network. In closed networks there is a fixed population of requests that never leave the system.
Queueing models can have a single request class (single class models), meaning that all requests behave in the same way (e.g., they spend the same average time on each particular server). In multiple class models there are multiple request classes, each with its own parameters (e.g., with different service times or different routing probabilities). Furthermore, in multiclass models there can be open and closed chains of requests at the same time.
A particular class of QN models, product-form networks, is of particular interest. Product-form networks fulfill the following assumptions:
• The network can consist of open and closed job classes.
• The following queueing disciplines are allowed: FCFS, PS, LCFS-PR and IS.
• Service times for FCFS nodes must be exponentially distributed and class-independent. Service centers at PS, LCFS-PR and IS nodes can have any kind of service time distribution with a rational Laplace transform. Furthermore, for PS, LCFS-PR and IS nodes, different classes of customers can have different service times.
• The service rate of an FCFS node is only allowed to depend on the number of jobs at this node; in a PS, LCFS-PR and IS node the service rate for a particular job class can also depend on the number of jobs of that class at the node.
• In open networks two kinds of arrival processes are allowed: i) the arrival process is Poisson, with arrival rate \lambda that can depend on the number of jobs in the network. ii) the arrival process consists of C independent Poisson arrival streams where the C job sources are assigned to the C chains; the arrival rate can be load dependent.
Product-form networks are attractive because steady-state performance measures can be efficiently computed.
### 5.2 Single Class Models
In single class models, all requests are indistinguishable and belong to the same class. This means that every request has the same average service time, and all requests move through the system with the same routing probabilities.
Model Inputs
{lambda}_k
(Open models only) External arrival rate to service center k.
lambda
(Open models only) Overall external arrival rate to the system as a whole: \lambda = \sum_k \lambda_k.
N
(Closed models only) Total number of requests in the system.
S_k
Mean service time at center k. S_k is the average time elapsed from service start to service completion at center k.
P_{i, j}
Routing probability matrix. {\bf P} = [P_{i, j}] is a K \times K matrix where P_{i, j} is the probability that a request completing service at center i is routed to center j. The probability that a request leaves the system after being served at center i is \left(1-\sum_{j=1}^K P_{i, j}\right).
V_k
Mean number of visits to center k (also called visit ratio or relative arrival rate).
Model Outputs
U_k
Utilization of service center k. The utilization is defined as the fraction of time in which the resource is busy (i.e., the server is processing requests). If center k is a single-server or multiserver node, then 0 ≤ U_k ≤ 1. If center k is an infinite server node (delay center), then U_k denotes the traffic intensity and is defined as U_k = X_k S_k; in this case the utilization may be greater than one.
R_k
Average response time of service center k, defined as the mean time between the arrival of a request in the queue and service completion of the same request.
Q_k
Average number of requests in center k; this includes both the requests in the queue and those being served.
X_k
Throughput of service center k. The throughput is the rate of job completions, i.e., the average number of jobs completed over a given time interval.
Given the output parameters above, additional performance measures can be computed:
X
System throughput, X = X_k / V_k for any k for which V_k \neq 0
R
System response time, R = \sum_{k=1}^K R_k V_k
Q
Average number of requests in the system, Q = \sum_{k=1}^K Q_k; for closed systems, this can be written as Q = N-XZ;
For open, single class models, the scalar \lambda denotes the external arrival rate of requests to the system. The average number of visits V_j satisfy the following equation:
V_j = P_{0, j} + \sum_{i=1}^K V_i P_{i, j}, \qquad j=1, …, K
where P_{0, j} is the probability that an external request goes to center j. If we denote with \lambda_j the external arrival rate to center j, and \lambda = \sum_j \lambda_j the overall external arrival rate, then P_{0, j} = \lambda_j / \lambda.
For closed models, the visit ratios satisfy the following equation:
V_j = \sum_{i=1}^K V_i P_{i, j}, \qquad j=1, …, K
V_r = 1 for a selected reference station r
Note that the set of traffic equations V_j = \sum_{i=1}^K V_i P_{i, j} alone can only be solved up to a multiplicative constant; to get a unique solution we impose an additional constraint V_r = 1 for some 1 ≤ r ≤ K. This constraint is equivalent to defining station r as the reference station; the default is r=1, see qncsvisits. A job that returns to the reference station is assumed to have completed its activity cycle. The network throughput is set to the throughput of the reference station.
Function File: V = qncsvisits (P)
Function File: V = qncsvisits (P, r)
Compute the mean number of visits to the service centers of a single class, closed network with K service centers.
INPUTS
P(i,j)
probability that a request which completed service at center i is routed to center j (K \times K matrix). For closed networks it must hold that sum(P,2)==1. The routing graph must be strongly connected, meaning that each node must be reachable from every other node.
r
Index of the reference station, r \in {1, …, K}; Default r=1. The traffic equations are solved by imposing the condition V(r) = 1. A request returning to the reference station completes its activity cycle.
OUTPUTS
V(k)
average number of visits to service center k, assuming r as the reference station.
Function File: V = qnosvisits (P, lambda)
Compute the average number of visits to the service centers of a single class open Queueing Network with K service centers.
INPUTS
P(i,j)
is the probability that a request which completed service at center i is routed to center j (K \times K matrix).
lambda(k)
external arrival rate to center k.
OUTPUTS
V(k)
average number of visits to server k.
EXAMPLE
Figure 5.1: Closed network with a single class of requests
Figure 5.1 shows a closed queueing network with a single class of requests. The network has three service centers, labeled CPU, Disk1 and Disk2, and is known as a central server model of a computer system. Requests spend some time at the CPU, which is represented by a PS (Processor Sharing) node. After that, requests are routed to Disk1 with probability 0.3, and to Disk2 with probability 0.7. Both Disk1 and Disk2 are FCFS nodes.
If we label the servers as CPU=1, Disk1=2, Disk2=3, we can define the routing matrix as follows:
{\bf P} = \begin{pmatrix} 0 & 0.3 & 0.7 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}
The visit ratios V, using station 1 as the reference station, can be computed with:
P = [0 0.3 0.7; ...
1 0 0 ; ...
1 0 0 ];
V = qncsvisits(P)
⇒ V = 1.00000 0.30000 0.70000
EXAMPLE
Figure 5.2: Open Queueing Network with a single class of requests
Figure 5.2 shows an open QN with a single class of requests. The network has the same structure as the one in Figure 5.1, with the difference that here we have a stream of jobs arriving from outside the system, at a rate \lambda. After service completion at the CPU, a job can leave the system with probability 0.2, or be transferred to other nodes with the probabilities shown in the figure.
The routing matrix is
{\bf P} = \begin{pmatrix} 0 & 0.3 & 0.5 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}
If we let \lambda = 1.2, we can compute the visit ratios V as follows:
lambda = 1.2;
P = [0 0.3 0.5; ...
     1 0 0 ; ...
     1 0 0 ];
V = qnosvisits(P, [lambda 0 0])
⇒ V = 5.0000 1.5000 2.5000
Function qnosvisits expects a vector with K elements as a second parameter, for open networks only. The vector contains the arrival rates at each individual node; since in our example external arrivals exist only for node S_1 with rate \lambda = 1.2, the second parameter is [1.2, 0, 0].
#### 5.2.1 Open Networks
Jackson networks satisfy the following conditions:
• There is only one job class in the network; the total number of jobs in the system is unbounded.
• There are K service centers in the network. Each service center may have Poisson arrivals from outside the system. A job can leave the system from any node.
• Arrival rates as well as routing probabilities are independent of the number of jobs in the network.
• External arrivals and service times at the service centers are exponentially distributed, and in general can be load-dependent.
• Service discipline at each node is FCFS
We define the joint probability vector \pi(n_1, …, n_K) as the steady-state probability that there are n_k requests at service center k, for all k=1, …, K. Jackson networks have the property that the joint probability is the product of the marginal probabilities \pi_k:
\pi(n_1, …, n_K) = \prod_{k=1}^K \pi_k(n_k)
where \pi_k(n_k) is the steady-state probability that there are n_k requests at service center k.
Function File: [U, R, Q, X] = qnos (lambda, S, V)
Function File: [U, R, Q, X] = qnos (lambda, S, V, m)
Analyze open, single class BCMP queueing networks with K service centers.
This function works for a subset of BCMP single-class open networks satisfying the following properties:
• The allowed service disciplines at network nodes are: FCFS, PS, LCFS-PR, IS (infinite server);
• Service times are exponentially distributed and load-independent;
• Center k can consist of m(k) ≥ 1 identical servers.
• Routing is load-independent
INPUTS
lambda
Overall external arrival rate (lambda>0).
S(k)
average service time at center k (S(k)>0).
V(k)
average number of visits to center k (V(k) ≥ 0).
m(k)
number of servers at center k. If m(k) < 1, center k is a delay center (IS); otherwise it is a regular queueing center with m(k) servers. Default is m(k) = 1 for all k.
OUTPUTS
U(k)
If k is a queueing center, U(k) is the utilization of center k. If k is an IS node, then U(k) is the traffic intensity defined as X(k)*S(k).
R(k)
center k average response time.
Q(k)
average number of requests at center k.
X(k)
center k throughput.
REFERENCES
• G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998
From the results computed by this function, it is possible to derive other quantities of interest as follows:
• System Response Time: The overall system response time can be computed as R_s = dot(V,R);
• Average number of requests: The average number of requests in the system can be computed as: Q_avg = sum(Q)
EXAMPLE
lambda = 3;
V = [16 7 8];
S = [0.01 0.02 0.03];
[U R Q X] = qnos( lambda, S, V );
R_s = dot(R,V) # System response time
N = sum(Q) # Average number in system
-| R_s = 1.4062
-| N = 4.2186
#### 5.2.2 Closed Networks
Function File: [U, R, Q, X, G] = qncsmva (N, S, V)
Function File: [U, R, Q, X, G] = qncsmva (N, S, V, m)
Function File: [U, R, Q, X, G] = qncsmva (N, S, V, m, Z)
Analyze closed, single class queueing networks using the exact Mean Value Analysis (MVA) algorithm.
The following queueing disciplines are supported: FCFS, LCFS-PR, PS and IS (Infinite Server). This function supports fixed-rate service centers or multiple server nodes. For general load-dependent service centers, use the function qncsmvald instead.
Additionally, the normalization constant G(n), n=0, …, N is computed; G(n) can be used in conjunction with the BCMP theorem to compute steady-state probabilities.
INPUTS
N
Population size (number of requests in the system, N ≥ 0). If N == 0, this function returns U = R = Q = X = 0
S(k)
mean service time at center k (S(k) ≥ 0).
V(k)
average number of visits to service center k (V(k) ≥ 0).
Z
External delay for customers (Z ≥ 0). Default is 0.
m(k)
number of servers at center k (if m is a scalar, all centers have that number of servers). If m(k) < 1, center k is a delay center (IS); otherwise it is a regular queueing center (FCFS, LCFS-PR or PS) with m(k) servers. Default is m(k) = 1 for all k (each service center has a single server).
OUTPUTS
U(k)
If k is a FCFS, LCFS-PR or PS node (m(k) ≥ 1), then U(k) is the utilization of center k, 0 ≤ U(k) ≤ 1. If k is an IS node (m(k) < 1), then U(k) is the traffic intensity defined as X(k)*S(k). In this case the value of U(k) may be greater than one.
R(k)
center k response time. The Residence Time at center k is R(k) * V(k). The system response time Rsys can be computed either as Rsys = N/Xsys - Z or as Rsys = dot(R,V)
Q(k)
average number of requests at center k. The number of requests in the system can be computed either as sum(Q), or using the formula N-Xsys*Z.
X(k)
center K throughput. The system throughput Xsys can be computed as Xsys = X(1) / V(1)
G(n)
Normalization constants. G(n+1) contains the value of the normalization constant G(n), n=0, …, N, since array indexes in Octave start from 1. G(n) can be used in conjunction with the BCMP theorem to compute steady-state probabilities.
NOTES
In the presence of load-dependent servers (i.e., if m(k)>1 for some k), the MVA algorithm is known to be numerically unstable. Generally, this issue manifests itself as negative values for the response times or utilizations. This is not a problem of the queueing toolbox, but of the MVA algorithm itself, and currently has no known solution. This function prints a warning if numerical problems are detected; the warning can be disabled with the command warning("off", "qn:numerical-instability").
REFERENCES
• M. Reiser and S. S. Lavenberg, Mean-Value Analysis of Closed Multichain Queuing Networks, Journal of the ACM, vol. 27, n. 2, April 1980, pp. 313–322. 10.1145/322186.322195
This implementation is described in R. Jain , The Art of Computer Systems Performance Analysis, Wiley, 1991, p. 577. Multi-server nodes are treated according to G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998, Section 8.2.1, "Single Class Queueing Networks".
EXAMPLE
S = [ 0.125 0.3 0.2 ];
V = [ 16 10 5 ];
N = 20;
m = ones(1,3);
Z = 4;
[U R Q X] = qncsmva(N,S,V,m,Z);
X_s = X(1)/V(1); # System throughput
R_s = dot(R,V); # System response time
printf("\t Util Qlen RespT Tput\n");
printf("\t-------- -------- -------- --------\n");
for k=1:length(S)
printf("Dev%d\t%8.4f %8.4f %8.4f %8.4f\n", k, U(k), Q(k), R(k), X(k) );
endfor
printf("\nSystem\t %8.4f %8.4f %8.4f\n\n", N-X_s*Z, R_s, X_s );
Function File: [U, R, Q, X] = qncsmvald (N, S, V)
Function File: [U, R, Q, X] = qncsmvald (N, S, V, Z)
Mean Value Analysis algorithm for closed, single class queueing networks with K service centers and load-dependent service times. This function supports FCFS, LCFS-PR, PS and IS nodes. For networks with only fixed-rate centers and multiple-server nodes, the function qncsmva is more efficient.
INPUTS
N
Population size (number of requests in the system, N ≥ 0). If N == 0, this function returns U = R = Q = X = 0
S(k,n)
mean service time at center k where there are n requests, 1 ≤ n ≤ N. S(k,n) = 1 / \mu_{k}(n), where \mu_{k}(n) is the service rate of center k when there are n requests.
V(k)
average number of visits to service center k (V(k) ≥ 0).
Z
external delay ("think time", Z ≥ 0); default 0.
OUTPUTS
U(k)
utilization of service center k. The utilization is defined as the probability that service center k is not empty, that is, U_k = 1-\pi_k(0) where \pi_k(0) is the steady-state probability that there are 0 jobs at service center k.
R(k)
response time on service center k.
Q(k)
average number of requests in service center k.
X(k)
throughput of service center k.
NOTES
In the presence of load-dependent servers, the MVA algorithm is known to be numerically unstable. Generally, this problem manifests itself as negative response times or utilizations.
REFERENCES
• M. Reiser and S. S. Lavenberg, Mean-Value Analysis of Closed Multichain Queuing Networks, Journal of the ACM, vol. 27, n. 2, April 1980, pp. 313–322. 10.1145/322186.322195
This implementation is described in G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998, Section 8.2.4.1, “Networks with Load-Dependent Service: Closed Networks”.
Function File: [U, R, Q, X] = qncscmva (N, S, Sld, V)
Function File: [U, R, Q, X] = qncscmva (N, S, Sld, V, Z)
Conditional MVA (CMVA) algorithm, a numerically stable variant of MVA. This function supports a network of M ≥ 1 service centers and a single delay center. Servers 1, …, (M-1) are load-independent; server M is load-dependent.
INPUTS
N
Number of requests in the system, N ≥ 0. If N == 0, this function returns U = R = Q = X = 0
S(k)
mean service time on server k = 1, …, (M-1) (S(k) > 0). If there are no fixed-rate servers, then S = []
Sld(n)
inverse service rate at server M (the load-dependent server) when there are n requests, n=1, …, N. Sld(n) = 1 / \mu(n).
V(k)
average number of visits to service center k=1, …, M, where V(k) ≥ 0. V(1:M-1) are the visit rates to the fixed rate servers; V(M) is the visit rate to the load dependent server.
Z
External delay for customers (Z ≥ 0). Default is 0.
OUTPUTS
U(k)
center k utilization (k=1, …, M)
R(k)
response time of center k (k=1, …, M). The system response time Rsys can be computed as Rsys = N/Xsys - Z
Q(k)
average number of requests at center k (k=1, …, M).
X(k)
center k throughput (k=1, …, M).
REFERENCES
• G. Casale. A note on stable flow-equivalent aggregation in closed networks. Queueing Syst. Theory Appl., 60:193–202, December 2008, 10.1007/s11134-008-9093-6
Function File: [U, R, Q, X] = qncsmvaap (N, S, V)
Function File: [U, R, Q, X] = qncsmvaap (N, S, V, m)
Function File: [U, R, Q, X] = qncsmvaap (N, S, V, m, Z)
Function File: [U, R, Q, X] = qncsmvaap (N, S, V, m, Z, tol)
Function File: [U, R, Q, X] = qncsmvaap (N, S, V, m, Z, tol, iter_max)
Analyze closed, single class queueing networks using the Approximate Mean Value Analysis (MVA) algorithm. This function is based on approximating the number of customers seen at center k when a new request arrives as Q_k(N) \times (N-1)/N. This function only handles single-server and delay centers; if your network contains general load-dependent service centers, use the function qncsmvald instead.
INPUTS
N
Population size (number of requests in the system, N > 0).
S(k)
mean service time on server k (S(k)>0).
V(k)
average number of visits to service center k (V(k) ≥ 0).
m(k)
number of servers at center k (if m is a scalar, all centers have that number of servers). If m(k) < 1, center k is a delay center (IS); if m(k) == 1, center k is a regular queueing center (FCFS, LCFS-PR or PS) with one server (default). This function does not support multiple server nodes (m(k) > 1).
Z
External delay for customers (Z ≥ 0). Default is 0.
tol
Stopping tolerance. The algorithm stops when the maximum relative difference between the new and old value of the queue lengths Q becomes less than the tolerance. Default is 10^{-5}.
iter_max
Maximum number of iterations (iter_max > 0). The function aborts if convergence is not reached within the maximum number of iterations. Default is 100.
OUTPUTS
U(k)
If k is a FCFS, LCFS-PR or PS node (m(k) == 1), then U(k) is the utilization of center k. If k is an IS node (m(k) < 1), then U(k) is the traffic intensity defined as X(k)*S(k).
R(k)
response time at center k. The system response time Rsys can be computed as Rsys = N/Xsys - Z
Q(k)
average number of requests at center k. The number of requests in the system can be computed either as sum(Q), or using the formula N-Xsys*Z.
X(k)
center k throughput. The system throughput Xsys can be computed as Xsys = X(1) / V(1)
REFERENCES
This implementation is based on Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik, Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984. http://www.cs.washington.edu/homes/lazowska/qsp/. In particular, see section 6.4.2.2 ("Approximate Solution Techniques").
According to the BCMP theorem, the state probability of a closed single class queueing network with K nodes and N requests can be expressed as:
\pi(n_1, …, n_K) = \frac{1}{G(N)} \prod_{k=1}^K F_k(n_k)
where {\bf n} = [n_1, …, n_K] is the population vector and F_k(n_k) is a factor that depends only on center k and on the number n_k of requests there.
Here \pi(n_1, …, n_K) is the joint probability of having n_k requests at node k, for all k=1, …, K; we have that \sum_{k=1}^K n_k = N
The convolution algorithm computes the normalization constants {\bf G} = \left[G(0), …, G(N)\right] for single-class, closed networks with N requests. The normalization constants are returned as the vector G=[G(1), …, G(N+1)], where G(i+1) holds the value of G(i) (remember that Octave array indices start at one). The normalization constant can be used to compute all performance measures of interest (utilization, average response time and so on).
queueing implements the convolution algorithm in the functions qncsconv and qncsconvld. The former supports single-server nodes, multiple-server (M/M/m) nodes and IS nodes; the latter supports networks with general load-dependent service centers.
Function File: [U, R, Q, X, G] = qncsconv (N, S, V)
Function File: [U, R, Q, X, G] = qncsconv (N, S, V, m)
Analyze product-form, single class closed networks with K service centers using the convolution algorithm.
Load-independent service centers, multiple servers (M/M/m queues) and IS nodes are supported. For general load-dependent service centers, use qncsconvld instead.
INPUTS
N
Number of requests in the system (N>0).
S(k)
average service time on center k (S(k) ≥ 0).
V(k)
visit count of service center k (V(k) ≥ 0).
m(k)
number of servers at center k. If m(k) < 1, center k is a delay center (IS); if m(k) ≥ 1, center k is a regular M/M/m queueing center with m(k) identical servers. Default is m(k) = 1 for all k.
OUTPUT
U(k)
center k utilization. For IS nodes, U(k) is the traffic intensity X(k) * S(k).
R(k)
average response time of center k.
Q(k)
average number of customers at center k.
X(k)
throughput of center k.
G(n)
Vector of normalization constants. G(n+1) contains the value of the normalization constant with n requests G(n), n=0, …, N.
NOTE
For a network with K service centers and N requests, this implementation of the convolution algorithm has time and space complexity O(NK).
REFERENCES
• Jeffrey P. Buzen, Computational Algorithms for Closed Queueing Networks with Exponential Servers, Communications of the ACM, volume 16, number 9, September 1973, pp. 527–531. 10.1145/362342.362345
This implementation is based on G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998, pp. 313–317.
EXAMPLE
The normalization constant G can be used to compute the steady-state probabilities for a closed single class product-form Queueing Network with K nodes and N requests. Let n = [n_1, …, n_K] be a valid population vector, \sum_{k=1}^K n_k = N. Then, the steady-state probability p(k) to have n(k) requests at service center k can be computed as:
n = [1 2 0];
N = sum(n); # Total population size
S = [ 1/0.8 1/0.6 1/0.4 ];
m = [ 2 3 1 ];
V = [ 1 .667 .2 ];
[U R Q X G] = qncsconv( N, S, V, m );
p = [0 0 0]; # initialize p
# Compute the probability to have n(k) jobs at service center k
for k=1:3
p(k) = (V(k)*S(k))^n(k) / G(N+1) * ...
(G(N-n(k)+1) - V(k)*S(k)*G(N-n(k)) );
printf("Prob( n(%d) = %d )=%f\n", k, n(k), p(k) );
endfor
-| Prob( n(1) = 1 ) = 0.17975
-| Prob( n(2) = 2 ) = 0.48404
-| Prob( n(3) = 0 ) = 0.52779
(recall that G(N+1) represents G(N), since in Octave array indices start at one).
Function File: [U, R, Q, X, G] = qncsconvld (N, S, V)
Convolution algorithm for product-form, single-class closed queueing networks with K general load-dependent service centers.
This function computes steady-state performance measures for single-class, closed networks with load-dependent service centers using the convolution algorithm; the normalization constants are also computed. The normalization constants are returned as vector G=[G(1), …, G(N+1)] where G(i+1) is the value of G(i).
INPUTS
N
Number of requests in the system (N>0).
S(k,n)
mean service time at center k where there are n requests, 1 ≤ n ≤ N. S(k,n) = 1 / \mu_{k,n}, where \mu_{k,n} is the service rate of center k when there are n requests.
V(k)
visit count of service center k (V(k) ≥ 0). The length of V is the number of servers K in the network.
OUTPUT
U(k)
center k utilization.
R(k)
average response time at center k.
Q(k)
average number of requests in center k.
X(k)
center k throughput.
G(n)
Normalization constants (vector). G(n+1) corresponds to G(n), as array indexes in Octave start from 1.
REFERENCES
• Herb Schwetman, Some Computational Aspects of Queueing Network Models, Technical Report CSD-TR-354, Department of Computer Sciences, Purdue University, February 1981 (revised).
• M. Reiser, H. Kobayashi, On The Convolution Algorithm for Separable Queueing Networks, In Proceedings of the 1976 ACM SIGMETRICS Conference on Computer Performance Modeling Measurement and Evaluation (Cambridge, Massachusetts, United States, March 29–31, 1976). SIGMETRICS ’76. ACM, New York, NY, pp. 109–117. 10.1145/800200.806187
This implementation is based on G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998, pp. 313–317. Function qncsconvld is slightly different from the version described in Bolch et al. because it supports general load-dependent centers (while the version in the book does not). The modification is in the definition of function F() in qncsconvld which has been made similar to function f_i defined in Schwetman, Some Computational Aspects of Queueing Network Models.
#### 5.2.3 Non Product-Form QNs
Function File: [U, R, Q, X] = qncsmvablo (N, S, M, P )
Approximate MVA algorithm for closed queueing networks with blocking.
INPUTS
N
number of requests in the system. N must be strictly greater than zero, and less than the overall network capacity: 0 < N < sum(M).
S(k)
average service time on server k (S(k) > 0).
M(k)
capacity of center k. The capacity is the maximum number of requests in a service center, including the request in service (M(k) ≥ 1).
P(i,j)
probability that a request which completes service at server i will be transferred to server j.
OUTPUTS
U(k)
center k utilization.
R(k)
average response time of service center k.
Q(k)
average number of requests in service center k (including the request in service).
X(k)
center k throughput.
REFERENCES
• Ian F. Akyildiz, Mean Value Analysis for Blocking Queueing Networks, IEEE Transactions on Software Engineering, vol. 14, n. 2, april 1988, pp. 418–428. 10.1109/32.4663
Function File: [U, R, Q, X] = qnmarkov (lambda, S, C, P)
Function File: [U, R, Q, X] = qnmarkov (lambda, S, C, P, m)
Function File: [U, R, Q, X] = qnmarkov (N, S, C, P)
Function File: [U, R, Q, X] = qnmarkov (N, S, C, P, m)
Compute utilization, response time, average queue length and throughput for open or closed queueing networks with finite capacity and a single class of requests. Blocking type is Repetitive-Service (RS). This function explicitly generates and solves the underlying Markov chain, and thus might require a large amount of memory.
More specifically, networks that can be analyzed by this function have the following properties:
• There exists only a single class of customers.
• The network has K service centers. Center k \in {1, …, K} has m_k > 0 servers, and has a total (finite) capacity of C_k \geq m_k which includes both buffer space and servers. The buffer space at service center k is therefore C_k - m_k.
• The network can be open, with external arrival rate to center k equal to \lambda_k, or closed with fixed population size N. For closed networks, the population size N must be strictly less than the network capacity: N < \sum_{k=1}^K C_k.
• Average service times are load-independent.
• P_{i, j} is the probability that requests completing execution at center i are transferred to center j, i \neq j. For open networks, a request may leave the system from any node i with probability 1-\sum_{j=1}^K P_{i, j}.
• Blocking type is Repetitive-Service (RS). Service center j is saturated if the number of requests is equal to its capacity C_j. Under the RS blocking discipline, a request completing service at center i which is being transferred to a saturated center j is put back at the end of the queue of i and will receive service again. Center i then processes the next request in queue. External arrivals to a saturated center are dropped.
INPUTS
lambda(k)
N
If the first argument is a vector lambda, it is considered to be the external arrival rate lambda(k) ≥ 0 to service center k of an open network. If the first argument is a scalar, it is considered as the population size N of a closed network; in this case N must be strictly less than the network capacity: N < sum(C).
S(k)
average service time at service center k
C(k)
capacity of service center k. The capacity includes both the buffer and server space m(k). Thus the buffer space is C(k)-m(k).
P(i,j)
transition probability from service center i to service center j.
m(k)
number of servers at service center k. Note that m(k) ≤ C(k) for each k. If m is omitted, all service centers are assumed to have a single server (m(k) = 1 for all k).
OUTPUTS
U(k)
center k utilization.
R(k)
response time on service center k.
Q(k)
average number of customers in the service center k, including the request in service.
X(k)
throughput of service center k.
NOTES
The space complexity of this implementation is O(\prod_{k=1}^K (C_k + 1)^2). The time complexity is dominated by the time needed to solve a linear system with \prod_{k=1}^K (C_k + 1) unknowns.
### 5.3 Multiple Class Models
In multiple class queueing models, we assume that there exist C different classes of requests. Each request from class c spends on average time S_{c, k} in service at center k. For open models, we denote with {\bf \lambda} = \lambda_{c, k} the arrival rates, where \lambda_{c, k} is the external arrival rate of class c requests at center k. For closed models, we denote with {\bf N} = \left[N_1, …, N_C\right] the population vector, where N_c is the number of class c requests in the system.
The transition probability matrix for multiple class networks is a C \times K \times C \times K matrix {\bf P} = [P_{r, i, s, j}] where P_{r, i, s, j} is the probability that a class r request which completes service at center i will join server j as a class s request.
Model input and outputs can be adjusted by adding additional indexes for the customer classes.
Model Inputs
lambda_{c, k}
(open networks) External arrival rate of class-c requests to service center k
lambda
(open networks) Overall external arrival rate to the whole system: \lambda = \sum_{c=1}^C \sum_{k=1}^K \lambda_{c, k}
N_c
(closed networks) Number of class c requests in the system.
S_{c, k}
Average service time. S_{c, k} is the average service time on service center k for class c requests.
P_{r, i, s, j}
Routing probability matrix. {\bf P} = [P_{r, i, s, j}] is a C \times K \times C \times K matrix such that P_{r, i, s, j} is the probability that a class r request which completes service at server i will move to server j as a class s request.
V_{c, k}
Mean number of visits of class c requests to center k.
Model Outputs
U_{c, k}
Utilization of service center k by class c requests. The utilization is defined as the fraction of time in which the resource is busy (i.e., the server is processing requests). If center k is a single-server or multiserver node, then 0 ≤ U_{c, k} ≤ 1. If center k is an infinite server node (delay center), then U_{c, k} denotes the traffic intensity and is defined as U_{c, k} = X_{c, k} S_{c, k}; in this case the utilization may be greater than one.
R_{c, k}
Average response time experienced by class c requests on service center k. The average response time is defined as the average time between the arrival of a customer in the queue, and the completion of service.
Q_{c, k}
Average number of class c requests on service center k. This includes both the requests in the queue, and the request being served.
X_{c, k}
Throughput of service center k for class c requests. The throughput is defined as the rate of completion of class c requests.
It is possible to define aggregate performance measures as follows:
U_k
Utilization of service center k, obtained by summing over all classes: U_k = \sum_{c=1}^C U_{c, k}, i.e., Uk = sum(U, 1);
R_c
System response time for class c requests: R_c = \sum_{k=1}^K V_{c, k} R_{c, k}, i.e., Rc = sum( V.*R, 2 );
Q_c
Average number of class c requests in the system: Qc = sum( Q, 2 );
X_c
Class c throughput: X(c) = X(c,k) ./ V(c,k); for any k for which V(c,k) != 0
For closed networks, we can define the visit ratios V_{s, j} for class s customers at service center j as follows:
V_{s, j} = \sum_{r=1}^C \sum_{i=1}^K V_{r, i} P_{r, i, s, j}, \qquad s=1, …, C, \quad j=1, …, K
V_{s, r_s} = 1, \qquad s=1, …, C
where r_s is the class s reference station. Similarly to single class models, the traffic equation for closed multiclass networks can be solved up to multiplicative constants unless we choose one reference station for each closed class and set its visit ratio to 1.
For open networks the traffic equations are as follows:
V_{s, j} = P_{0, s, j} + \sum_{r=1}^C \sum_{i=1}^K V_{r, i} P_{r, i, s, j}, \qquad s=1, …, C, \quad j=1, …, K
where P_{0, s, j} is the probability that an external arrival goes to service center j as a class-s request. If \lambda_{s, j} is the external arrival rate of class s requests to service center j, and \lambda = \sum_s \sum_j \lambda_{s, j} is the overall external arrival rate, then P_{0, s, j} = \lambda_{s, j} / \lambda.
Function File: [V ch] = qncmvisits (P)
Function File: [V ch] = qncmvisits (P, r)
Compute the average number of visits for the nodes of a closed multiclass network with K service centers and C customer classes.
INPUTS
P(r,i,s,j)
probability that a class r request which completed service at center i is routed to center j as a class s request. Class switching is allowed.
r(c)
index of class c reference station, r(c) \in {1, …, K}, 1 ≤ c ≤ C. The class c visit count to server r(c) (V(c,r(c))) is conventionally set to 1. The reference station serves two purposes: (i) its throughput is assumed to be the system throughput, and (ii) a job returning to the reference station is assumed to have completed one cycle. Default is to consider station 1 as the reference station for all classes.
OUTPUTS
V(c,i)
number of visits of class c requests at center i.
ch(c)
chain number that class c belongs to. Different classes can belong to the same chain. Chains are numbered sequentially starting from 1 (1, 2, …). The total number of chains is max(ch).
Function File: V = qnomvisits (P, lambda)
Compute the visit ratios to the service centers of an open multiclass network with K service centers and C customer classes.
INPUTS
P(r,i,s,j)
probability that a class r request which completed service at center i is routed to center j as a class s request. Class switching is supported.
lambda(r,i)
external arrival rate of class r requests to center i.
OUTPUTS
V(r,i)
visit ratio of class r requests at center i.
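As a usage sketch (the routing probabilities and arrival rates below are made up for illustration), consider an open network with C=2 classes, K=2 centers and no class switching, where all external arrivals enter at center 1:
C = 2; K = 2;
P = zeros(C,K,C,K);
P(1,1,1,2) = 0.8; P(1,2,1,1) = 1;  # class 1: center 1 -> 2 with prob. 0.8, center 2 -> 1
P(2,1,2,2) = 0.5; P(2,2,2,1) = 1;  # class 2: center 1 -> 2 with prob. 0.5, center 2 -> 1
lambda = [0.1 0; 0.2 0];           # lambda(r,i): external arrivals at center 1 only
V = qnomvisits(P, lambda)
Requests that complete service at center 1 and are not routed elsewhere leave the system, so the traffic equations above have a unique solution.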
#### 5.3.1 Open Networks
Function File: [U, R, Q, X] = qnom (lambda, S, V)
Function File: [U, R, Q, X] = qnom (lambda, S, V, m)
Function File: [U, R, Q, X] = qnom (lambda, S, P)
Function File: [U, R, Q, X] = qnom (lambda, S, P, m)
Exact analysis of open, multiple-class BCMP networks. The network can be made of single-server queueing centers (FCFS, LCFS-PR or PS) or delay centers (IS). This function assumes a network with K service centers and C customer classes.
INPUTS
lambda(c)
If this function is invoked as qnom(lambda, S, V, …), then lambda(c) is the external arrival rate of class c customers (lambda(c) ≥ 0). If this function is invoked as qnom(lambda, S, P, …), then lambda(c,k) is the external arrival rate of class c customers at center k (lambda(c,k) ≥ 0).
S(c,k)
mean service time of class c customers on the service center k (S(c,k)>0). For FCFS nodes, mean service times must be class-independent.
V(c,k)
visit ratio of class c customers to service center k (V(c,k) ≥ 0). If you pass this argument, class switching is not allowed.
P(r,i,s,j)
probability that a class r job completing service at center i is routed to center j as a class s job. If you pass argument P, class switching is allowed; however, all servers must be fixed-rate or infinite-server nodes (m(k) ≤ 1 for all k).
m(k)
number of servers at center k. If m(k) < 1, center k is a delay center (IS); otherwise it is a regular queueing center with m(k) servers. Default is m(k) = 1 for all k.
OUTPUTS
U(c,k)
If k is a queueing center, then U(c,k) is the class c utilization of center k. If k is an IS node, then U(c,k) is the class c traffic intensity defined as X(c,k)*S(c,k).
R(c,k)
class c response time at center k. The system response time for class c requests can be computed as dot(R, V, 2).
Q(c,k)
average number of class c requests at center k. The average number of class c requests in the system Qc can be computed as Qc = sum(Q, 2)
X(c,k)
class c throughput at center k.
NOTES
If the function call specifies the visit ratios V, class switching is not allowed. If the function call specifies the routing probability matrix P, then class switching is allowed; however, all nodes are restricted to be fixed rate servers or delay centers: multiple-server and general load-dependent centers are not supported. Note that the meaning of parameter lambda is different from one case to the other (see the description of lambda above).
REFERENCES
• Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik, Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984. http://www.cs.washington.edu/homes/lazowska/qsp/. In particular, see section 7.4.1 ("Open Model Solution Techniques").
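A minimal usage sketch follows; the arrival rates, service times and visit ratios are made up, the service times are kept class-independent (so the centers may also be FCFS nodes), and the arrival rates are small enough that every center is stable:
lambda = [0.5 0.25];   # class arrival rates
S = [0.2 0.1; ...
     0.2 0.1];         # S(c,k): mean service times (class-independent)
V = [1 2; ...
     1.5 1];           # V(c,k): visit ratios (no class switching)
[U R Q X] = qnom(lambda, S, V);
Uk = sum(U, 1)         # aggregate utilization of each center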
#### 5.3.2 Closed Networks
Function File: pop_mix = qncmpopmix (k, N)
Return the set of population mixes for a closed multiclass queueing network with exactly k customers. Specifically, given a closed multiclass QN with C customer classes, where there are N(c) class c requests, c = 1, …, C, a k-mix M is a vector of length C with the following properties:
• 0 ≤ M_c ≤ N(c) for all c = 1, …, C;
• \sum_{c=1}^C M_c = k
In other words, a k-mix is an allocation of k requests to C classes such that the number of requests assigned to class c does not exceed the maximum value N(c).
pop_mix is a matrix with C columns, such that each row represents a valid mix.
INPUTS
k
Size of the requested mix (scalar, k ≥ 0).
N(c)
number of class c requests (k ≤ sum(N)).
OUTPUTS
pop_mix(i,c)
number of class c requests in the i-th population mix. The number of mixes is rows(pop_mix).
If you are interested in the number of k-mixes only, you can use the function qncmnpop.
REFERENCES
• Herb Schwetman, Implementing the Mean Value Algorithm for the Solution of Queueing Network Models, Technical Report 80-355, Department of Computer Sciences, Purdue University, revised February 15, 1982.
The slightly different problem of enumerating all tuples k_1, …, k_N such that \sum_i k_i = k and k_i ≥ 0, for a given k ≥ 0 has been described in S. Santini, Computing the Indices for a Complex Summation, unpublished report, available at http://arantxa.ii.uam.es/~ssantini/writing/notes/s668_summation.pdf
EXAMPLE
Let us consider a multiclass network with C=2 customer classes; the maximum number of class 1 requests is 2, and the maximum number of class 2 requests is 3. How is it possible to allocate 3 requests to the two classes so that the maximum number of requests per class is not exceeded?
N = [2 3];
mix = qncmpopmix(3, N)
⇒ mix = [ 2 1; 1 2; 0 3 ]
Function File: H = qncmnpop (N)
Given a network with C customer classes, this function computes the number of k-mixes H(r,k) that can be constructed by the multiclass MVA algorithm by allocating k customers to the first r classes. See qncmpopmix for the definition of a k-mix.
INPUTS
N(c)
number of class-c requests in the system. The total number of requests in the network is sum(N).
OUTPUTS
H(r,k)
is the number of k mixes that can be constructed allocating k customers to the first r classes.
REFERENCES
• Zahorjan, J. and Wong, E. The solution of separable queueing network models using mean value analysis. SIGMETRICS Perform. Eval. Rev. 10, 3 (Sep. 1981), 80-85. DOI 10.1145/1010629.805477
Function File: [U, R, Q, X] = qncmmva (N, S )
Function File: [U, R, Q, X] = qncmmva (N, S, V)
Function File: [U, R, Q, X] = qncmmva (N, S, V, m)
Function File: [U, R, Q, X] = qncmmva (N, S, V, m, Z)
Function File: [U, R, Q, X] = qncmmva (N, S, P)
Function File: [U, R, Q, X] = qncmmva (N, S, P, r)
Function File: [U, R, Q, X] = qncmmva (N, S, P, r, m)
Compute steady-state performance measures for closed, multiclass queueing networks using the Mean Value Analysis (MVA) algorithm.
Queueing policies at service centers can be any of the following:
FCFS
(First-Come-First-Served) customers are served in order of arrival; multiple servers are allowed. For this kind of queueing discipline, average service times must be class-independent.
PS
(Processor Sharing) customers are served in parallel by a single server, each customer receiving an equal share of the service rate.
LCFS-PR
(Last-Come-First-Served, Preemptive Resume) customers are served in reverse order of arrival by a single server and the last arrival preempts the customer in service who will later resume service at the point of interruption.
IS
(Infinite Server) customers are delayed independently of other customers at the service center (there is effectively an infinite number of servers).
INPUTS
N(c)
number of class c requests; N(c) ≥ 0. If class c has no requests (N(c) == 0), then for all k, this function returns U(c,k) = R(c,k) = Q(c,k) = X(c,k) = 0
S(c,k)
mean service time for class c requests at center k (S(c,k) ≥ 0). If the service time at center k is class-dependent, then center k is assumed to be of type -/G/1–PS (Processor Sharing). If center k is a FCFS node (m(k)>1), then the service times must be class-independent, i.e., all classes must have the same service time.
V(c,k)
average number of visits of class c requests at center k; V(c,k) ≥ 0, default is 1. If you pass this argument, class switching is not allowed
P(r,i,s,j)
probability that a class r request completing service at center i is routed to center j as a class s request; the reference stations for each class are specified with the parameter r. If you pass argument P, class switching is allowed; however, you can not specify any external delay (i.e., Z must be zero) and all servers must be fixed-rate or infinite-server nodes (m(k) ≤ 1 for all k).
r(c)
reference station for class c. If omitted, station 1 is the reference station for all classes. See qncmvisits.
m(k)
If m(k)<1, then center k is assumed to be a delay center (IS node -/G/\infty). If m(k)==1, then service center k is a regular queueing center (M/M/1–FCFS, -/G/1–LCFS-PR or -/G/1–PS). Finally, if m(k)>1, center k is a M/M/m–FCFS center with m(k) identical servers. Default is m(k)=1 for each k.
Z(c)
class c external delay (think time); Z(c) ≥ 0. Default is 0. This parameter cannot be used if you pass the routing probability matrix P instead of the visit ratios V.
OUTPUTS
U(c,k)
If k is a FCFS, LCFS-PR or PS node (m(k) ≥ 1), then U(c,k) is the class c utilization at center k, 0 ≤ U(c,k) ≤ 1. If k is an IS node, then U(c,k) is the class c traffic intensity at center k, defined as U(c,k) = X(c,k)*S(c,k). In this case the value of U(c,k) may be greater than one.
R(c,k)
class c response time at center k. The class c residence time at center k is R(c,k) * V(c,k). The total class c system response time is dot(R, V, 2).
Q(c,k)
average number of class c requests at center k. The total number of requests at center k is sum(Q(:,k)). The total number of class c requests in the system is sum(Q(c,:)).
X(c,k)
class c throughput at center k. The class c throughput can be computed as X(c,1) / V(c,1).
NOTES
If the function call specifies the visit ratios V, then class switching is not allowed. If the function call specifies the routing probability matrix P, then class switching is allowed; however, in this case all nodes are restricted to be fixed rate servers or delay centers: multiple-server and general load-dependent centers are not supported.
In the presence of load-dependent servers (e.g., if m(i)>1 for some i), the MVA algorithm is known to be numerically unstable. Generally this problem shows up as negative values for the computed response times or utilizations. This is not a problem with the queueing package, but with the MVA algorithm; as such, there is no known workaround at the moment (apart from using a different solution technique, if available). This function prints a warning if it detects numerical problems; you can disable the warning with the command warning("off", "qn:numerical-instability").
Given a network with K service centers, C job classes and population vector {\bf N}=\left[N_1, …, N_C\right], the MVA algorithm requires space O(C \prod_i (N_i + 1)). The time complexity is O(CK\prod_i (N_i + 1)). This implementation is slightly more space-efficient (see details in the code). While the space requirement can be mitigated by using some optimizations, the time complexity can not. If you need to analyze large closed networks you should consider the qncmmvabs function, which implements the approximate MVA algorithm. Note however that qncmmvabs will only provide approximate results.
REFERENCES
• M. Reiser and S. S. Lavenberg, Mean-Value Analysis of Closed Multichain Queuing Networks, Journal of the ACM, vol. 27, n. 2, April 1980, pp. 313–322. 10.1145/322186.322195
This implementation is based on G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998 and Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik, Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984. http://www.cs.washington.edu/homes/lazowska/qsp/. In particular, see section 7.4.2.1 ("Exact Solution Techniques").
Function File: [U, R, Q, X] = qncmmvabs (N, S, V)
Function File: [U, R, Q, X] = qncmmvabs (N, S, V, m)
Function File: [U, R, Q, X] = qncmmvabs (N, S, V, m, Z)
Function File: [U, R, Q, X] = qncmmvabs (N, S, V, m, Z, tol)
Function File: [U, R, Q, X] = qncmmvabs (N, S, V, m, Z, tol, iter_max)
Approximate Mean Value Analysis (MVA) for closed, multiclass queueing networks with K service centers and C customer classes.
This implementation uses Bard and Schweitzer approximation. It is based on the assumption that the queue length at service center k with population set {\bf N}-{\bf 1}_c is approximated with
Q_k({\bf N}-{\bf 1}_c) ≈ ((n-1)/n) Q_k({\bf N})
where \bf N is a valid population mix, {\bf N}-{\bf 1}_c is the population mix \bf N with one class c customer removed, and n = \sum_c N_c is the total number of requests.
This implementation works for networks with infinite server (IS) and single-server nodes only.
INPUTS
N(c)
number of class c requests in the system (N(c) ≥ 0).
S(c,k)
mean service time for class c customers at center k (S(c,k) ≥ 0).
V(c,k)
average number of visits of class c requests to center k (V(c,k) ≥ 0).
m(k)
number of servers at center k. If m(k) < 1, then the service center k is assumed to be a delay center (IS). If m(k) == 1, service center k is a regular queueing center (FCFS, LCFS-PR or PS) with a single server node. If omitted, each service center has a single server. Note that multiple server nodes are not supported.
Z(c)
class c external delay (Z ≥ 0). Default is 0.
tol
Stopping tolerance (tol>0). The algorithm stops when the difference between the queue lengths computed on two subsequent iterations is less than tol. Default is 10^{-5}.
iter_max
Maximum number of iterations (iter_max>0). The function aborts if convergence is not reached within the maximum number of iterations. Default is 100.
OUTPUTS
U(c,k)
If k is a FCFS, LCFS-PR or PS node, then U(c,k) is the utilization of class c requests on service center k. If k is an IS node, then U(c,k) is the class c traffic intensity at device k, defined as U(c,k) = X(c,k)*S(c,k).
R(c,k)
response time of class c requests at service center k.
Q(c,k)
average number of class c requests at service center k.
X(c,k)
class c throughput at service center k.
REFERENCES
• Y. Bard, Some Extensions to Multiclass Queueing Network Analysis, proc. 4th Int. Symp. on Modelling and Performance Evaluation of Computer Systems, Feb 1979, pp. 51–62.
• P. Schweitzer, Approximate Analysis of Multiclass Closed Networks of Queues, Proc. Int. Conf. on Stochastic Control and Optimization, jun 1979, pp. 25–29.
This implementation is based on Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik, Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984. http://www.cs.washington.edu/homes/lazowska/qsp/. In particular, see section 7.4.2.2 ("Approximate Solution Techniques"). This implementation is slightly different from the one described above, as it computes the average response times R instead of the residence times.
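Because qncmmvabs accepts the same N, S, V (and optional m and Z) arguments as qncmmva, a quick way to gauge the quality of the approximation is to run both functions on the same model. The parameter values below are made up for illustration:
N = [10 5];                          # class populations
S = [0.2 0.1 0.05; ...
     0.1 0.3 0.1];                   # S(c,k): mean service times
V = ones(size(S));                   # one visit to each center
[U1 R1 Q1 X1] = qncmmva(N, S, V);    # exact multiclass MVA
[U2 R2 Q2 X2] = qncmmvabs(N, S, V);  # Bard-Schweitzer approximation
max(abs(Q1(:) - Q2(:)))              # largest absolute error on the queue lengths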
#### 5.3.3 Mixed Networks
Function File: [U, R, Q, X] = qnmix (lambda, N, S, V, m)
Mean Value Analysis for mixed queueing networks. The network consists of K service centers (single-server or delay centers) and C independent customer chains. Both open and closed chains are possible. lambda is the vector of per-chain arrival rates (open classes); N is the vector of populations for closed chains.
Class switching is not allowed. Each customer class must correspond to an independent chain.
If the network is made of open or closed classes only, then this function calls qnom or qncmmva respectively, and prints a warning message.
INPUTS
lambda(c)
N(c)
For each customer chain c:
• if c is a closed chain, then N(c)>0 is the number of class c requests and lambda(c) must be zero;
• If c is an open chain, lambda(c)>0 is the arrival rate of class c requests and N(c) must be zero;
In other words, for each class c the following must hold:
(lambda(c)>0 && N(c)==0) || (lambda(c)==0 && N(c)>0)
S(c,k)
mean class c service time at center k, S(c,k) ≥ 0. For FCFS nodes, service times must be class-independent.
V(c,k)
average number of visits of class c customers to center k (V(c,k) ≥ 0).
m(k)
number of servers at center k. Only single-server (m(k)==1) or IS (Infinite Server) nodes (m(k)<1) are supported. If omitted, each center is assumed to be of type M/M/1-FCFS. Queueing discipline for single-server nodes can be FCFS, PS or LCFS-PR.
OUTPUTS
U(c,k)
class c utilization at center k.
R(c,k)
class c response time at center k.
Q(c,k)
average number of class c requests at center k.
X(c,k)
class c throughput at center k.
REFERENCES
• Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik, Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984. http://www.cs.washington.edu/homes/lazowska/qsp/. In particular, see section 7.4.3 ("Mixed Model Solution Techniques"). Note that in this function we compute the mean response time R instead of the mean residence time as in the reference.
• Herb Schwetman, Implementing the Mean Value Algorithm for the Solution of Queueing Network Models, Technical Report CSD-TR-355, Department of Computer Sciences, Purdue University, revised Feb 15, 1982.
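A minimal sketch of a mixed model with one open and one closed chain follows (all numbers are illustrative). Service times at center 1 are class-independent, so that center may be a FCFS node; center 2 must be a PS or LCFS-PR node since its service times are class-dependent:
lambda = [0.1 0];      # class 1 is open (arrival rate 0.1), class 2 is closed
N      = [0 4];        # class 2 has 4 requests circulating in the system
S      = [0.2 0.1; ...
          0.2 0.3];    # S(c,k): mean service times
V      = ones(2,2);    # visit ratios
m      = [1 1];        # two single-server centers
[U R Q X] = qnmix(lambda, N, S, V, m);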
### 5.4 Generic Algorithms
The queueing package provides a high-level function qnsolve for analyzing QN models. qnsolve takes as input a high-level description of the queueing model, and delegates the actual solution of the model to one of the lower-level functions. qnsolve supports single or multiclass models, but at the moment only product-form networks can be analyzed. For non product-form networks see Non Product-Form QNs.
qnsolve accepts two input parameters. The first one is the list of nodes, encoded as an Octave cell array. The second parameter is the vector of visit ratios V, which can be either a vector (for single-class models) or a two-dimensional matrix (for multiple-class models).
Individual nodes in the network are structures built using the qnmknode function.
Function File: Q = qnmknode ("m/m/m-fcfs", S)
Function File: Q = qnmknode ("m/m/m-fcfs", S, m)
Function File: Q = qnmknode ("m/m/1-lcfs-pr", S)
Function File: Q = qnmknode ("-/g/1-ps", S)
Function File: Q = qnmknode ("-/g/1-ps", S, s2)
Function File: Q = qnmknode ("-/g/inf", S)
Function File: Q = qnmknode ("-/g/inf", S, s2)
Creates a node; this function can be used together with qnsolve. It is possible to create either single-class nodes (where there is only one customer class), or multiple-class nodes (where the service time is given per-class). Furthermore, it is possible to specify load-dependent service times. String literals are case-insensitive, so for example "-/g/inf", "-/G/inf" and "-/g/INF" are all equivalent.
INPUTS
S
Mean service time.
• If S is a scalar, it is assumed to be a load-independent, class-independent service time.
• If S is a column vector, then S(c) is assumed to be the load-independent service time for class c customers.
• If S is a row vector, then S(n) is assumed to be the class-independent service time at the node, when there are n requests.
• Finally, if S is a two-dimensional matrix, then S(c,n) is assumed to be the class c service time when there are n requests at the node.
m
Number of identical servers at the node. Default is m=1.
s2
Squared coefficient of variation for the service time. Default is 1.0.
The returned struct Q should be considered opaque to the client.
After the network has been defined, it is possible to solve it using qnsolve.
Function File: [U, R, Q, X] = qnsolve ("closed", N, QQ, V)
Function File: [U, R, Q, X] = qnsolve ("closed", N, QQ, V, Z)
Function File: [U, R, Q, X] = qnsolve ("open", lambda, QQ, V)
Function File: [U, R, Q, X] = qnsolve ("mixed", lambda, N, QQ, V)
High-level function for analyzing QN models.
• For closed networks, the following server types are supported: M/M/m–FCFS, -/G/\infty, -/G/1–LCFS-PR, -/G/1–PS and load-dependent variants.
• For open networks, the following server types are supported: M/M/m–FCFS, -/G/\infty and -/G/1–PS. General load-dependent nodes are not supported. Multiclass open networks do not support multiple server M/M/m nodes, but only single server M/M/1–FCFS.
• For mixed networks, the following server types are supported: M/M/1–FCFS, -/G/\infty and -/G/1–PS. General load-dependent nodes are not supported.
INPUTS
N
N(c)
Number of requests in the system for closed networks. For single-class networks, N must be a scalar. For multiclass networks, N(c) is the population size of closed class c.
lambda
lambda(c)
External arrival rate (scalar) for open networks. For single-class networks, lambda must be a scalar. For multiclass networks, lambda(c) is the class c overall arrival rate.
QQ{i}
List of queues in the network. This must be a cell array with one element per service center, such that QQ{i} is a struct produced by the qnmknode function.
Z
External delay ("think time") for closed networks. Default 0.
OUTPUTS
U(k)
If k is a FCFS node, then U(k) is the utilization of service center k. If k is an IS node, then U(k) is the traffic intensity defined as X(k)*S(k).
R(k)
average response time of service center k.
Q(k)
average number of customers in service center k.
X(k)
throughput of service center k.
Note that for multiclass networks, the computed results are per-class utilization, response time, number of customers and throughput: U(c,k), R(c,k), Q(c,k), X(c,k).
String literals are case-insensitive, so "closed", "Closed" and "CLoSEd" are all equivalent.
EXAMPLE
Let us consider a closed, multiclass network with C=2 classes and K=3 service center. Let the population be M=(2, 1) (class 1 has 2 requests, and class 2 has 1 request). The nodes are as follows:
• Node 1 is a M/M/1–FCFS node with load-dependent, class-independent service times, defined by the matrix [0.2 0.1 0.1; 0.2 0.1 0.1]; for example, S(1,2) = 0.1 is the service time for class 1 customers when there are 2 requests at the node;
• Node 2 is a -/G/1–PS node, with service times S_{1, 2} = 0.4 for class 1, and S_{2, 2} = 0.6 for class 2 requests;
• Node 3 is a -/G/\infty node (delay center), with service times S_{1, 3}=1 and S_{2, 3}=2 for class 1 and 2 respectively.
After defining the per-class visit counts V, such that V(c,k) is the visit count of class c requests to service center k, we can define and solve the model as follows:
QQ = { qnmknode( "m/m/m-fcfs", [0.2 0.1 0.1; 0.2 0.1 0.1] ), ...
qnmknode( "-/g/1-ps", [0.4; 0.6] ), ...
qnmknode( "-/g/inf", [1; 2] ) };
V = [ 1 0.6 0.4; ...
1 0.3 0.7 ];
N = [ 2 1 ];
[U R Q X] = qnsolve( "closed", N, QQ, V );
Function File: [U, R, Q, X] = qnclosed (N, S, V, …)
This function computes steady-state performance measures of closed queueing networks using the Mean Value Analysis (MVA) algorithm. The queueing network is allowed to contain fixed-capacity centers, delay centers or general load-dependent centers. Multiple request classes are supported.
This function dispatches the computation to one of qncsmva, qncsmvald or qncmmva.
• If N is a scalar, the network is assumed to have a single class of requests; in this case, the exact MVA algorithm is used to analyze the network. If S is a vector, then S(k) is the average service time at center k, and this function calls qncsmva, which supports load-independent service centers. If S is a matrix, S(k,i) is the average service time at center k when there are i jobs, i=1, …, N; in this case, the network is analyzed with the qncsmvald function.
• If N is a vector, the network is assumed to have multiple classes of requests, and is analyzed using the exact multiclass MVA algorithm as implemented in the qncmmva function.
EXAMPLE
P = [0 0.3 0.7; 1 0 0; 1 0 0]; # Transition probability matrix
S = [1 0.6 0.2]; # Average service times
m = ones(size(S)); # All centers are single-server
Z = 2; # External delay
N = 15; # Maximum population to consider
V = qncsvisits(P); # Compute number of visits
X_bsb_lower = X_bsb_upper = X_ab_lower = X_ab_upper = X_mva = zeros(1,N);
for n=1:N
[X_bsb_lower(n) X_bsb_upper(n)] = qncsbsb(n, S, V, m, Z);
[X_ab_lower(n) X_ab_upper(n)] = qncsaba(n, S, V, m, Z);
[U R Q X] = qnclosed( n, S, V, m, Z );
X_mva(n) = X(1)/V(1);
endfor
close all;
plot(1:N, X_ab_lower,"g;Asymptotic Bounds;", ...
1:N, X_bsb_lower,"k;Balanced System Bounds;", ...
1:N, X_mva,"b;MVA;", "linewidth", 2, ...
1:N, X_bsb_upper,"k", 1:N, X_ab_upper,"g" );
axis([1,N,0,1]); legend("location","southeast"); legend("boxoff");
xlabel("Number of Requests n"); ylabel("System Throughput X(n)");
Function File: [U, R, Q, X] = qnopen (lambda, S, V, …)
Compute utilization, response time, average number of requests in the system, and throughput for open queueing networks. If lambda is a scalar, the network is considered a single-class QN and is solved using qnos. If lambda is a vector, the network is considered a multiclass QN and is solved using qnom.
### 5.5 Bounds Analysis
Function File: [Xl, Xu, Rl, Ru] = qnosaba (lambda, D)
Function File: [Xl, Xu, Rl, Ru] = qnosaba (lambda, S, V)
Function File: [Xl, Xu, Rl, Ru] = qnosaba (lambda, S, V, m)
Compute Asymptotic Bounds for open, single-class networks with K service centers.
INPUTS
lambda
Arrival rate of requests (scalar, lambda ≥ 0).
D(k)
service demand at center k. (vector of length K, D(k) ≥ 0).
S(k)
mean service time at center k. (vector of length K, S(k) ≥ 0).
V(k)
mean number of visits to center k. (vector of length K, V(k) ≥ 0).
m(k)
number of servers at center k. This function only supports M/M/1 queues, therefore m must be ones(size(S)).
OUTPUTS
Xl
Xu
Lower and upper bounds on the system throughput. Xl is always set to 0 since there can be no lower bound on the throughput of open networks (scalar).
Rl
Ru
Lower and upper bounds on the system response time. Ru is always set to +inf since there can be no upper bound on the response time of open networks (scalar).
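As an illustration, the bounds can be computed from the service demands of the open network analyzed in section 5.6.2 below (S = [1 2 0.8], V = [5 1.5 2.5], aggregate arrival rate 0.15):
lambda = 0.15;                   # aggregate arrival rate
D = [1 2 0.8] .* [5 1.5 2.5];    # service demands D(k) = S(k)*V(k)
[Xl Xu Rl Ru] = qnosaba(lambda, D);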
Function File: [Xl, Xu, Rl, Ru] = qnomaba (lambda, D)
Function File: [Xl, Xu, Rl, Ru] = qnomaba (lambda, S, V)
Compute Asymptotic Bounds for open, multiclass networks with K service centers and C customer classes.
INPUTS
lambda(c)
class c arrival rate to the system (vector of length C, lambda(c) > 0).
D(c, k)
class c service demand at center k (C \times K matrix, D(c, k) ≥ 0).
S(c, k)
mean service time of class c requests at center k (C \times K matrix, S(c, k) ≥ 0).
V(c, k)
mean number of visits of class c requests at center k (C \times K matrix, V(c, k) ≥ 0).
OUTPUTS
Xl(c)
Xu(c)
lower and upper bounds of class c throughput. Xl(c) is always 0 since there can be no lower bound on the throughput of open networks (vector of length C).
Rl(c)
Ru(c)
lower and upper bounds of class c response time. Ru(c) is always +inf since there can be no upper bound on the response time of open networks (vector of length C).
Function File: [Xl, Xu, Rl, Ru] = qncsaba (N, D)
Function File: [Xl, Xu, Rl, Ru] = qncsaba (N, S, V)
Function File: [Xl, Xu, Rl, Ru] = qncsaba (N, S, V, m)
Function File: [Xl, Xu, Rl, Ru] = qncsaba (N, S, V, m, Z)
Compute Asymptotic Bounds for the system throughput and response time of closed, single-class networks with K service centers.
Single-server and infinite-server nodes are supported. Multiple-server nodes and general load-dependent servers are not supported.
INPUTS
N
number of requests in the system (scalar, N>0).
D(k)
service demand at center k (D(k) ≥ 0).
S(k)
mean service time at center k (S(k) ≥ 0).
V(k)
average number of visits to center k (V(k) ≥ 0).
m(k)
number of servers at center k (if m is a scalar, all centers have that number of servers). If m(k) < 1, center k is a delay center (IS); if m(k) = 1, center k is a M/M/1-FCFS server. This function does not support multiple-server nodes. Default is 1.
Z
External delay (scalar, Z ≥ 0). Default is 0.
OUTPUTS
Xl
Xu
Lower and upper bounds on the system throughput.
Rl
Ru
Lower and upper bounds on the system response time.
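For example, using the service times and visit ratios of the closed network of section 5.6.1 below (S = [1 2 0.8], V = [1 0.3 0.7]) with a think time Z = 2:
N = 10;                          # population size
S = [1 2 0.8];                   # mean service times
V = [1 0.3 0.7];                 # visit ratios
Z = 2;                           # think time
[Xl Xu Rl Ru] = qncsaba(N, S, V, ones(size(S)), Z);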
Function File: [Xl, Xu, Rl, Ru] = qncmaba (N, D)
Function File: [Xl, Xu, Rl, Ru] = qncmaba (N, S, V)
Function File: [Xl, Xu, Rl, Ru] = qncmaba (N, S, V, m)
Function File: [Xl, Xu, Rl, Ru] = qncmaba (N, S, V, m, Z)
Compute Asymptotic Bounds for closed, multiclass networks with K service centers and C customer classes. Single-server and infinite-server nodes are supported. Multiple-server nodes and general load-dependent servers are not supported.
INPUTS
N(c)
number of class c requests in the system (vector of length C, N(c) ≥ 0).
D(c, k)
class c service demand at center k (C \times K matrix, D(c,k) ≥ 0).
S(c, k)
mean service time of class c requests at center k (C \times K matrix, S(c,k) ≥ 0).
V(c,k)
average number of visits of class c requests to center k (C \times K matrix, V(c,k) ≥ 0).
m(k)
number of servers at center k (if m is a scalar, all centers have that number of servers). If m(k) < 1, center k is a delay center (IS); if m(k) = 1, center k is a M/M/1-FCFS server. This function does not support multiple-server nodes. Default is 1.
Z(c)
class c external delay (vector of length C, Z(c) ≥ 0). Default is 0.
OUTPUTS
Xl(c)
Xu(c)
Lower and upper bounds for class c throughput.
Rl(c)
Ru(c)
Lower and upper bounds for class c response time.
REFERENCES
• Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik, Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984. http://www.cs.washington.edu/homes/lazowska/qsp/. In particular, see section 5.2 ("Asymptotic Bounds").
Function File: [Xl, Xu, Rl, Ru] = qnosbsb (lambda, D)
Function File: [Xl, Xu, Rl, Ru] = qnosbsb (lambda, S, V)
Compute Balanced System Bounds for single-class, open networks with K service centers.
INPUTS
lambda
overall arrival rate to the system (scalar, lambda ≥ 0).
D(k)
service demand at center k (D(k) ≥ 0).
S(k)
service time at center k (S(k) ≥ 0).
V(k)
mean number of visits at center k (V(k) ≥ 0).
m(k)
number of servers at center k. This function only supports M/M/1 queues, therefore m must be ones(size(S)).
OUTPUTS
Xl
Xu
Lower and upper bounds on the system throughput. Xl is always set to 0, since there can be no lower bound on open networks throughput.
Rl
Ru
Lower and upper bounds on the system response time.
Function File: [Xl, Xu, Rl, Ru] = qncsbsb (N, D)
Function File: [Xl, Xu, Rl, Ru] = qncsbsb (N, S, V)
Function File: [Xl, Xu, Rl, Ru] = qncsbsb (N, S, V, m)
Function File: [Xl, Xu, Rl, Ru] = qncsbsb (N, S, V, m, Z)
Compute Balanced System Bounds on system throughput and response time for closed, single-class networks with K service centers.
INPUTS
N
number of requests in the system (scalar, N ≥ 0).
D(k)
service demand at center k (D(k) ≥ 0).
S(k)
mean service time at center k (S(k) ≥ 0).
V(k)
average number of visits to center k (V(k) ≥ 0). Default is 1.
m(k)
number of servers at center k. This function supports m(k) = 1 only (single-server FCFS nodes); this parameter is only for compatibility with qncsaba. Default is 1.
Z
External delay (Z ≥ 0). Default is 0.
OUTPUTS
Xl
Xu
Lower and upper bound on the system throughput.
Rl
Ru
Lower and upper bound on the system response time.
REFERENCES
• Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik, Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984. http://www.cs.washington.edu/homes/lazowska/qsp/. In particular, see section 5.4 ("Balanced Systems Bounds").
Function File: [Xl, Xu, Rl, Ru] = qncmbsb (N, D)
Function File: [Xl, Xu, Rl, Ru] = qncmbsb (N, S, V)
Compute Balanced System Bounds for closed, multiclass networks with K service centers and C customer classes. Only single-server nodes are supported.
INPUTS
N(c)
number of class c requests in the system (vector of length C).
D(c, k)
class c service demand at center k (C \times K matrix, D(c,k) ≥ 0).
S(c, k)
mean service time of class c requests at center k (C \times K matrix, S(c,k) ≥ 0).
V(c,k)
average number of visits of class c requests to center k (C \times K matrix, V(c,k) ≥ 0).
OUTPUTS
Xl(c)
Xu(c)
Lower and upper class c throughput bounds (vector of length C).
Rl(c)
Ru(c)
Lower and upper class c response time bounds (vector of length C).
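A minimal sketch with made-up service demands, for a network with C=2 classes and K=2 single-server centers:
N = [5 5];       # class populations
D = [10 5; ...
     6 12];      # D(c,k): class c service demand at center k
[Xl Xu Rl Ru] = qncmbsb(N, D);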
Function File: [Xl, Xu, Rl, Ru] = qncmcb (N, D)
Function File: [Xl, Xu, Rl, Ru] = qncmcb (N, S, V)
Compute Composite Bounds (CB) on system throughput and response time for closed multiclass networks.
INPUTS
N(c)
number of class c requests in the system.
D(c, k)
class c service demand at center k (D(c,k) ≥ 0).
S(c, k)
mean service time of class c requests at center k (S(c,k) ≥ 0).
V(c,k)
average number of visits of class c requests to center k (V(c,k) ≥ 0).
OUTPUTS
Xl(c)
Xu(c)
Lower and upper bounds on class c throughput.
Rl(c)
Ru(c)
Lower and upper bounds on class c response time.
REFERENCES
• Teemu Kerola, The Composite Bound Method (CBM) for Computing Throughput Bounds in Multiple Class Environments, Performance Evaluation Vol. 6, Issue 1, March 1986, DOI 10.1016/0166-5316(86)90002-7. Also available as Technical Report CSD-TR-475, Department of Computer Sciences, Purdue University, mar 13, 1984 (Revised Aug 27, 1984).
Function File: [Xl, Xu, Rl, Ru] = qncspb (N, D )
Function File: [Xl, Xu, Rl, Ru] = qncspb (N, S, V )
Function File: [Xl, Xu, Rl, Ru] = qncspb (N, S, V, m )
Function File: [Xl, Xu, Rl, Ru] = qncspb (N, S, V, m, Z )
Compute PB Bounds (C. H. Hsieh and S. Lam, 1987) for single-class, closed networks with K service centers.
INPUTS
N
number of requests in the system (scalar, N > 0).
D(k)
service demand of service center k (D(k) ≥ 0).
S(k)
mean service time at center k (S(k) ≥ 0).
V(k)
visit ratio to center k (V(k) ≥ 0).
m(k)
number of servers at center k. This function only supports M/M/1 queues, therefore m must be ones(size(S)).
Z
external delay (think time, Z ≥ 0). Default 0.
OUTPUTS
Xl
Xu
Lower and upper bounds on the system throughput.
Rl
Ru
Lower and upper bounds on the system response time.
REFERENCES
• C. H. Hsieh and S. Lam, Two classes of performance bounds for closed queueing networks, Performance Evaluation, Vol. 7 Issue 1, pp. 3–30, February 1987, DOI 10.1016/0166-5316(87)90054-X. Also available as Technical Report TR-85-09, Department of Computer Science, University of Texas at Austin, June 1985
This function implements the non-iterative variant described in G. Casale, R. R. Muntz, G. Serazzi, Geometric Bounds: a Non-Iterative Analysis Technique for Closed Queueing Networks, IEEE Transactions on Computers, 57(6):780-794, June 2008.
Function File: [Xl, Xu, Rl, Ru, Ql, Qu] = qncsgb (N, D)
Function File: [Xl, Xu, Rl, Ru, Ql, Qu] = qncsgb (N, S, V)
Function File: [Xl, Xu, Rl, Ru, Ql, Qu] = qncsgb (N, S, V, m)
Function File: [Xl, Xu, Rl, Ru, Ql, Qu] = qncsgb (N, S, V, m, Z)
Compute Geometric Bounds (GB) on system throughput, system response time and server queue lengths for closed, single-class networks with K service centers and N requests.
INPUTS
N
number of requests in the system (scalar, N > 0).
D(k)
service demand of service center k (vector of length K, D(k) ≥ 0).
S(k)
mean service time at center k (vector of length K, S(k) ≥ 0).
V(k)
visit ratio to center k (vector of length K, V(k) ≥ 0).
m(k)
number of servers at center k. This function only supports M/M/1 queues, therefore m must be ones(size(S)).
Z
external delay (think time, Z ≥ 0, scalar). Default is 0.
OUTPUTS
Xl
Xu
Lower and upper bound on the system throughput. If Z>0, these bounds are computed using Geometric Square-root Bounds (GSB). If Z==0, these bounds are computed using Geometric Bounds (GB)
Rl
Ru
Lower and upper bound on the system response time. These bounds are derived from Xl and Xu using Little’s Law: Rl = N / Xu - Z, Ru = N / Xl - Z
Ql(k)
Qu(k)
lower and upper bounds on the queue length at center k.
REFERENCES
• G. Casale, R. R. Muntz, G. Serazzi, Geometric Bounds: a Non-Iterative Analysis Technique for Closed Queueing Networks, IEEE Transactions on Computers, 57(6):780-794, June 2008. 10.1109/TC.2008.37
In this implementation we set X^+ and X^- as the upper and lower Asymptotic Bounds as computed by the qncsaba function, respectively.
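As a sketch, the geometric bounds can be computed for the same closed network used in the qnclosed example of section 5.4 (S = [1 0.6 0.2], V = [1 0.3 0.7], Z = 2):
N = 15;                          # population size
S = [1 0.6 0.2];                 # mean service times
V = [1 0.3 0.7];                 # visit ratios
Z = 2;                           # think time
[Xl Xu Rl Ru Ql Qu] = qncsgb(N, S, V, ones(size(S)), Z);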
### 5.6 QN Analysis Examples
In this section we illustrate with a few examples how the queueing package can be used to analyze queueing network models. Further examples can be found in the functions' demo blocks, and can be inspected with the Octave demo command.
#### 5.6.1 Closed, Single Class Network
Let us consider again the network shown in Figure 5.1. We denote with S_k the average service time at center k, k=1, 2, 3. Let the service times be S_1 = 1.0, S_2 = 2.0 and S_3 = 0.8. The routing of jobs within the network is described with a routing probability matrix \bf P: a request completing service at center i is enqueued at center j with probability P_{i, j}. We use the following routing matrix:
/ 0 0.3 0.7 \
P = | 1 0 0 |
\ 1 0 0 /
The network above can be analyzed with the qnclosed function. qnclosed requires the following parameters:
N
Number of requests in the network (since we are considering a closed network, the number of requests is fixed)
S
Array of average service times at the centers: S(k) is the average service time at center k.
V
Array of visit ratios: V(k) is the average number of visits to center k.
We can compute V_k from the routing probability matrix P_{i, j} using the qncsvisits function. Therefore, we can analyze the network for a given population size N (e.g., N=10) as follows:
N = 10;
S = [1 2 0.8];
P = [0 0.3 0.7; 1 0 0; 1 0 0];
V = qncsvisits(P);
[U R Q X] = qnclosed( N, S, V )
⇒ U = 0.99139 0.59483 0.55518
⇒ R = 7.4360 4.7531 1.7500
⇒ Q = 7.3719 1.4136 1.2144
⇒ X = 0.99139 0.29742 0.69397
The output of qnclosed includes the vectors of utilizations U_k at center k, response time R_k, average number of customers Q_k and throughput X_k. In our example, the throughput of center 1 is X_1 = 0.99139, and the average number of requests in center 3 is Q_3 = 1.2144. The utilization of center 1 is U_1 = 0.99139, which is the highest among the service centers. Thus, center 1 is the bottleneck device.
This network can also be analyzed with the qnsolve function. qnsolve can handle open, closed or mixed networks, and allows the network to be described in a very flexible way. First, let Q1, Q2 and Q3 be the variables describing the service centers. Each variable is instantiated with the qnmknode function.
Q1 = qnmknode( "m/m/m-fcfs", 1 );
Q2 = qnmknode( "m/m/m-fcfs", 2 );
Q3 = qnmknode( "m/m/m-fcfs", 0.8 );
The first parameter of qnmknode is a string describing the type of the node; "m/m/m-fcfs" denotes a M/M/m–FCFS center (this parameter is case-insensitive). The second parameter gives the average service time. An optional third parameter can be used to specify the number m of identical servers at the node. If omitted, it is assumed m=1 (single-server node).
Now, the network can be analyzed as follows:
N = 10;
V = [1 0.3 0.7];
[U R Q X] = qnsolve( "closed", N, { Q1, Q2, Q3 }, V )
⇒ U = 0.99139 0.59483 0.55518
⇒ R = 7.4360 4.7531 1.7500
⇒ Q = 7.3719 1.4136 1.2144
⇒ X = 0.99139 0.29742 0.69397
#### 5.6.2 Open, Single Class Network
Let us consider an open network with K=3 service centers and the following routing probabilities:
/ 0 0.3 0.5 \
P = | 1 0 0 |
\ 1 0 0 /
In this network, requests can leave the system from center 1 with probability 1-(0.3+0.5) = 0.2. We suppose that external jobs arrive at center 1 with rate \lambda_1 = 0.15; there are no arrivals at centers 2 and 3.
Similarly to closed networks, we first compute the visit counts V_k to center k, k = 1, 2, 3. We use the qnosvisits function as follows:
P = [0 0.3 0.5; 1 0 0; 1 0 0];
lambda = [0.15 0 0];
V = qnosvisits(P, lambda)
⇒ V = 5.00000 1.50000 2.50000
where lambda(k) is the arrival rate at center k, and \bf P is the routing matrix. Assuming the same service times as in the previous example, the network can be analyzed with the qnopen function as follows:
S = [1 2 0.8];
[U R Q X] = qnopen( sum(lambda), S, V )
⇒ U = 0.75000 0.45000 0.30000
⇒ R = 4.0000 3.6364 1.1429
⇒ Q = 3.00000 0.81818 0.42857
⇒ X = 0.75000 0.22500 0.37500
The first parameter of the qnopen function is the (scalar) aggregate arrival rate.
Again, it is possible to use the qnsolve high-level function:
Q1 = qnmknode( "m/m/m-fcfs", 1 );
Q2 = qnmknode( "m/m/m-fcfs", 2 );
Q3 = qnmknode( "m/m/m-fcfs", 0.8 );
lambda = [0.15 0 0];
[U R Q X] = qnsolve( "open", sum(lambda), { Q1, Q2, Q3 }, V )
⇒ U = 0.75000 0.45000 0.30000
⇒ R = 4.0000 3.6364 1.1429
⇒ Q = 3.00000 0.81818 0.42857
⇒ X = 0.75000 0.22500 0.37500
#### 5.6.3 Closed Multiclass Network/1
The following example is taken from Herb Schwetman, Implementing the Mean Value Algorithm for the Solution of Queueing Network Models, Technical Report CSD-TR-355, Department of Computer Sciences, Purdue University, Feb 15, 1982.
Let us consider the following multiclass QN with three servers and two classes
Figure 5.3
Servers 1 and 2 (labeled APL and IMS, respectively) are infinite server nodes; server 3 (labeled SYS) is Processor Sharing (PS). Mean service times are given in the following table:
           APL    IMS    SYS
Class 1    1      -      0.025
Class 2    -      15     0.500
There is no class switching. If we assume a population of 15 requests for class 1, and 5 requests for class 2, then the model can be analyzed as follows:
S = [1 0 .025; 0 15 .5];
P = zeros(2,3,2,3);
P(1,1,1,3) = P(1,3,1,1) = 1;
P(2,2,2,3) = P(2,3,2,2) = 1;
V = qncmvisits(P,[3 3]); # reference station is station 3
N = [15 5];
m = [-1 -1 1];
[U R Q X] = qncmmva(N,S,V,m)
⇒
U =
14.32312 0.00000 0.35808
0.00000 4.70699 0.15690
R =
1.00000 0.00000 0.04726
0.00000 15.00000 0.93374
Q =
14.32312 0.00000 0.67688
0.00000 4.70699 0.29301
X =
14.32312 0.00000 14.32312
0.00000 0.31380 0.31380
#### 5.6.4 Closed Multiclass Network/2
The following example is from M. Marzolla, The qnetworks Toolbox: A Software Package for Queueing Networks Analysis, Technical Report UBLCS-2010-04, Department of Computer Science, University of Bologna, Italy, February 2010.
Figure 5.4: Three-tier enterprise system model
The model shown in Figure 5.4 shows a three-tier enterprise system with K=6 service centers. The first tier contains the Web server (node 1), which is responsible for generating Web pages and transmitting them to clients. The application logic is implemented by nodes 2 and 3, and the storage tier is made of nodes 4–6. The system is subject to two workload classes, both represented as closed populations of N_1 and N_2 requests, respectively. Let D_{c, k} denote the service demand of class c requests at center k. We use the parameter values:
Serv. no.   Name            Class 1   Class 2
1           Web Server      12        2
2           App. Server 1   14        20
3           App. Server 2   23        14
4           DB Server 1     20        90
5           DB Server 2     80        30
6           DB Server 3     31        33
We set the total number of requests to 100, that is N_1 + N_2 = N = 100, and we study how different population mixes (N_1, N_2) affect the system throughput and response time. Let 0 < \beta_1 < 1 denote the fraction of class 1 requests: N_1 = \beta_1 N, N_2 = (1-\beta_1)N. The following Octave code defines the model for \beta_1 = 0.1:
N = 100; # total population size
beta1 = 0.1; # fraction of class 1 reqs.
S = [12 14 23 20 80 31; \
2 20 14 90 30 33 ];
V = ones(size(S));
pop = [fix(beta1*N) N-fix(beta1*N)];
[U R Q X] = qncmmva(pop, S, V);
The qncmmva(pop, S, V) function invocation uses the multiclass MVA algorithm to compute per-class utilizations U_{c, k}, response times R_{c,k}, mean queue lengths Q_{c,k} and throughputs X_{c,k} at each service center k, given a population vector pop, mean service times S and visit ratios V. Since we are given the service demands D_{c, k} = S_{c, k} V_{c,k}, while qncmmva requires separate service times and visit ratios, we set the service times equal to the demands and all visit ratios equal to one. Overall class and system throughputs and response times can also be computed:
X1 = X(1,1) / V(1,1) # class 1 throughput
⇒ X1 = 0.0044219
X2 = X(2,1) / V(2,1) # class 2 throughput
⇒ X2 = 0.010128
XX = X1 + X2 # system throughput
⇒ XX = 0.014550
R1 = dot(R(1,:), V(1,:)) # class 1 resp. time
⇒ R1 = 2261.5
R2 = dot(R(2,:), V(2,:)) # class 2 resp. time
⇒ R2 = 8885.9
RR = N / XX # system resp. time
⇒ RR = 6872.7
dot(X,Y) computes the dot product of two vectors. R(1,:) is the first row of matrix R and V(1,:) is the first row of matrix V, so dot(R(1,:), V(1,:)) computes \sum_k R_{1,k} V_{1,k}.
Figure 5.5: Throughput and Response Times as a function of the population mix
We can also compute the system power \Phi = X / R, which defines how efficiently resources are being used: high values of \Phi denote the desirable situation of high throughput and low response time. Figure 5.6 shows \Phi as a function of \beta_1. We observe a “plateau” of the global system power, corresponding to values of \beta_1 which approximately lie between 0.3 and 0.7. The per-class power exhibits an interesting (although not completely surprising) pattern, where the class with the higher population exhibits the worse efficiency, as it produces higher contention on the resources.
Figure 5.6: System Power as a function of the population mix
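The global system power curve of Figure 5.6 can be reproduced by sweeping the population mix; the following sketch reuses the model definition above (the grid of \beta_1 values is arbitrary):
N = 100;                                  # total population
S = [12 14 23 20 80 31; ...
      2 20 14 90 30 33];                  # service demands used as service times
V = ones(size(S));                        # all visit ratios set to one
betas = linspace(0.05, 0.95, 19);         # values of beta1 to sweep
Phi = zeros(size(betas));
for i = 1:numel(betas)
  pop = [fix(betas(i)*N) N-fix(betas(i)*N)];
  [U R Q X] = qncmmva(pop, S, V);
  XX = X(1,1)/V(1,1) + X(2,1)/V(2,1);     # system throughput
  RR = N / XX;                            # system response time (Little's Law)
  Phi(i) = XX / RR;                       # system power
endfor
plot(betas, Phi);
xlabel("beta_1"); ylabel("System Power");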
#### 5.6.5 Closed Multiclass Network/3
We now consider an example of a multiclass network with class switching. The example is taken from [Sch82], and is shown in Figure 5.7.
Figure 5.7: Multiclass Model with Class Switching
The system consists of three devices and two job classes. The CPU node is a PS server, while the two nodes labeled I/O are FCFS. Class 1 mean service time at the CPU is 0.01; class 2 mean service time at the CPU is 0.05. The mean service time at node 2 is 0.07, and is class-independent. Similarly, the mean service time at node 3 is 0.10. Jobs in class 1 leave the CPU and join class 2 with probability 0.1; jobs of class 2 leave the CPU and join class 1 with probability 0.2. There are N=3 jobs, initially all allocated to class 1. Note that, since class switching is allowed, the number of jobs in each class does not remain constant, although the total number of jobs does.
C = 2; K = 3;
S = [.01 .07 .10; ...
.05 .07 .10 ];
P = zeros(C,K,C,K);
P(1,1,1,2) = .7; P(1,1,1,3) = .2; P(1,1,2,1) = .1;
P(2,1,2,2) = .3; P(2,1,2,3) = .5; P(2,1,1,1) = .2;
P(1,2,1,1) = P(2,2,2,1) = 1;
P(1,3,1,1) = P(2,3,2,1) = 1;
N = [3 0];
[U R Q X] = qncmmva(N, S, P)
⇒
U =
0.12609 0.61784 0.25218
0.31522 0.13239 0.31522
R =
0.014653 0.133148 0.163256
0.073266 0.133148 0.163256
Q =
0.18476 1.17519 0.41170
0.46190 0.25183 0.51462
X =
12.6089 8.8262 2.5218
6.3044 1.8913 3.1522
## 6 References
[Aky88]
Ian F. Akyildiz, Mean Value Analysis for Blocking Queueing Networks, IEEE Transactions on Software Engineering, vol. 14, n. 2, april 1988, pp. 418–428. DOI 10.1109/32.4663
[Bar79]
Y. Bard, Some Extensions to Multiclass Queueing Network Analysis, proc. 4th Int. Symp. on Modelling and Performance Evaluation of Computer Systems, feb. 1979, pp. 51–62.
[BCMP75]
F. Baskett, K. Mani Chandy, R. R. Muntz, and F. G. Palacios. 1975. Open, Closed, and Mixed Networks of Queues with Different Classes of Customers. J. ACM 22, 2 (April 1975), 248–260, DOI 10.1145/321879.321887
[BGMT98]
G. Bolch, S. Greiner, H. de Meer and K. Trivedi, Queueing Networks and Markov Chains: Modeling and Performance Evaluation with Computer Science Applications, Wiley, 1998.
[Buz73]
J. P. Buzen, Computational Algorithms for Closed Queueing Networks with Exponential Servers, Communications of the ACM, volume 16, number 9, september 1973, pp. 527–531. DOI 10.1145/362342.362345
[C08]
G. Casale, A note on stable flow-equivalent aggregation in closed networks. Queueing Syst. Theory Appl., 60:193–202, December 2008, DOI 10.1007/s11134-008-9093-6
[CMS08]
G. Casale, R. R. Muntz, G. Serazzi, Geometric Bounds: a Non-Iterative Analysis Technique for Closed Queueing Networks, IEEE Transactions on Computers, 57(6):780-794, June 2008. DOI 10.1109/TC.2008.37
[GrSn97]
C. M. Grinstead, J. L. Snell, (July 1997). Introduction to Probability. American Mathematical Society. ISBN 978-0821807491; this excellent textbook is available in PDF format and can be used under the terms of the GNU Free Documentation License (FDL)
[Jac04]
J. R. Jackson, Jobshop-Like Queueing Systems, Management Science, Vol. 50, No. 12, Ten Most Influential Titles of "Management Science's" First Fifty Years (Dec. 2004), pp. 1796–1802, available online
[Jai91]
R. Jain, The Art of Computer Systems Performance Analysis, Wiley, 1991, p. 577.
[HsLa87]
C. H. Hsieh and S. Lam, Two classes of performance bounds for closed queueing networks, PEVA, vol. 7, n. 1, pp. 3–30, 1987
[Ker84]
T. Kerola, The Composite Bound Method (CBM) for Computing Throughput Bounds in Multiple Class Environments, Performance Evaluation, Vol. 6, Issue 1, March 1986, DOI 10.1016/0166-5316(86)90002-7; also available as Technical Report CSD-TR-475, Department of Computer Sciences, Purdue University, March 13, 1984 (revised August 27, 1984).
[LZGS84]
E. D. Lazowska, J. Zahorjan, G. Scott Graham, and K. C. Sevcik, Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice Hall, 1984. available online.
[ReKo76]
M. Reiser, H. Kobayashi, On The Convolution Algorithm for Separable Queueing Networks, In Proceedings of the 1976 ACM SIGMETRICS Conference on Computer Performance Modeling Measurement and Evaluation (Cambridge, Massachusetts, United States, March 29–31, 1976). SIGMETRICS ’76. ACM, New York, NY, pp. 109–117. DOI 10.1145/800200.806187
[ReLa80]
M. Reiser and S. S. Lavenberg, Mean-Value Analysis of Closed Multichain Queuing Networks, Journal of the ACM, vol. 27, n. 2, April 1980, pp. 313–322. DOI 10.1145/322186.322195
[Sch79]
P. Schweitzer, Approximate Analysis of Multiclass Closed Networks of Queues, Proc. Int. Conf. on Stochastic Control and Optimization, June 1979, pp. 25–29
[Sch80]
H. D. Schwetman, Testing Network-of-Queues Software, Technical Report CSD-TR 330, Department of Computer Sciences, Purdue University, 1980
[Sch81]
H. D. Schwetman, Some Computational Aspects of Queueing Network Models, Technical Report CSD-TR-354, Department of Computer Sciences, Purdue University, feb, 1981 (revised).
[Sch82]
H. D. Schwetman, Implementing the Mean Value Algorithm for the Solution of Queueing Network Models, Technical Report CSD-TR-355, Department of Computer Sciences, Purdue University, feb 15, 1982.
[Sch84]
T. Kerola, H. D. Schwetman, Performance Bounds for Multiclass Models, Technical Report CSD-TR-479, Department of Computer Sciences, Purdue University, 1984.
[Tij03]
H. C. Tijms, A first course in stochastic models, John Wiley and Sons, 2003, ISBN 0471498807, ISBN 9780471498803, DOI 10.1002/047001363X
[ZaWo81]
J. Zahorjan and E. Wong, The solution of separable queueing network models using mean value analysis. SIGMETRICS Perform. Eval. Rev. 10, 3 (Sep. 1981), 80-85. DOI 10.1145/1010629.805477
[Zeng03]
G. Zeng, Two common properties of the erlang-B function, erlang-C function, and Engset blocking function, Mathematical and Computer Modelling, Volume 37, Issues 12-13, June 2003, Pages 1287-1296 DOI 10.1016/S0895-7177(03)90040-9
## Appendix A GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. http://fsf.org/
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
### Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program—to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers’ and authors’ protection, the GPL clearly explains that there is no warranty for this free software. For both users’ and authors’ sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users’ freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
### TERMS AND CONDITIONS
1. Definitions.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the Program.
To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
2. Source Code.
The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work’s System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
3. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
4. Protecting Users’ Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work’s users, your or third parties’ legal rights to forbid circumvention of technological measures.
5. Conveying Verbatim Copies.
You may convey verbatim copies of the Program’s source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
6. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
1. The work must carry prominent notices stating that you modified it, and giving a relevant date.
2. The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
3. You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
4. If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation’s users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
7. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
1. Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
2. Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
3. Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
4. Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
5. Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
8. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
1. Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
2. Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
3. Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
4. Limiting the use for publicity purposes of names of licensors or authors of the material; or
5. Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
6. Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
9. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
10. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
11. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party’s predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
12. Patents.
A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor’s “contributor version”.
A contributor’s “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor’s essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient’s use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
13. No Surrender of Others’ Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
14. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
15. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
16. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
17. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
18. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
### How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.
one line to give the program's name and a brief idea of what it does.
Copyright (C) year name of author
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
program Copyright (C) year name of author
This program comes with ABSOLUTELY NO WARRANTY; for details type ‘show w’.
This is free software, and you are welcome to redistribute it
under certain conditions; type ‘show c’ for details.
The hypothetical commands ‘show w’ and ‘show c’ should show the appropriate parts of the General Public License. Of course, your program’s commands might be different; for a GUI interface, you would use an “about box”.
You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see http://www.gnu.org/licenses/.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read http://www.gnu.org/philosophy/why-not-lgpl.html.
|
# If A = <7 ,9 ,4 >, B = <6 ,-9 ,-3 > and C=A-B, what is the angle between A and C?
Jan 27, 2018
$\theta = \cos^{-1}\left(\frac{197}{\sqrt{146}\,\sqrt{374}}\right) \approx 0.568$ radians or $32.54^{\circ}$
#### Explanation:
$\vec{c} = [\,7-6,\; 9-(-9),\; 4-(-3)\,] = [1, 18, 7]$
The angle between vectors is given by:
$\theta = \cos^{-1}\left(\frac{\vec{a} \cdot \vec{c}}{\lVert \vec{a} \rVert \, \lVert \vec{c} \rVert}\right)$
Calculating individual parts:
$\vec{a} \cdot \vec{c} = 7 \cdot 1 + 9 \cdot 18 + 4 \cdot 7 = 197$
$\lVert \vec{a} \rVert = \sqrt{7^2 + 9^2 + 4^2} = \sqrt{146}$
$\lVert \vec{c} \rVert = \sqrt{1^2 + 18^2 + 7^2} = \sqrt{374}$
So:
$\theta = \cos^{-1}\left(\frac{197}{\sqrt{146}\,\sqrt{374}}\right)$, which is the exact value; numerically,
$\theta \approx 0.568$ radians, or about $32.54^{\circ}$.
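The result is easy to cross-check numerically. A minimal sketch using NumPy (an assumed tool, not part of the original answer):

```python
# Quick numerical check of the angle between A and C = A - B.
import numpy as np

a = np.array([7, 9, 4])
b = np.array([6, -9, -3])
c = a - b                                  # C = A - B = [1, 18, 7]

cos_theta = a @ c / (np.linalg.norm(a) * np.linalg.norm(c))
theta = np.arccos(cos_theta)
print(theta, np.degrees(theta))            # ~0.568 rad, ~32.5 degrees
```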
|
# zbMATH — the first resource for mathematics
Empirical likelihood ratio confidence regions. (English) Zbl 0712.62040
$X_1, X_2, \ldots$ are independent $p$-dimensional random variables with common cumulative distribution function $F_0$. $F_n$ denotes the empirical cumulative distribution function. $L(F)$ denotes the likelihood function that $F_n$ maximizes, and $R(F)$ denotes the empirical likelihood ratio function $L(F)/L(F_n)$. If $T$ is a statistical functional, $R(F)$ is used to construct nonparametric confidence regions and tests for $T(F_0)$. Most of the discussion is for the mean. The notation $F \ll F_n$ means that $F$ is supported in the sample. The following theorem is proved.
Let $X, X_1, X_2, \ldots$ be i.i.d. $p$-dimensional random variables, with $u_0$ denoting the mean vector of $X$. Suppose the rank of the covariance matrix of $X$ is a positive value $q$. Let $Y$ be a scalar random variable with a chi-square distribution with $q$ degrees of freedom. For $r$ in $(0,1)$ let $C(r,n)$ denote $\{\int X\,dF \mid F \ll F_n,\ R(F) \geq r\}$. Then $C(r,n)$ is a convex set and $\lim_{n\to\infty} P(C(r,n)\ \text{contains}\ u_0) = P(Y \leq -2\log r)$. If $E(\|X\|^4)$ is finite, $|P(C(r,n)\ \text{contains}\ u_0) - P(Y \leq -2\log r)| = O(n^{-1/2})$. Analogous results are given for other functionals, and algorithms are described.
Reviewer: L.Weiss
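The construction for the mean is easy to illustrate numerically. The sketch below (a one-dimensional illustration using the standard Lagrange-multiplier formulation of empirical likelihood; it is not taken from the reviewed paper, and the function name is hypothetical) computes $-2\log R(\mu)$ for a candidate mean $\mu$ and traces out the region $C(r,n)$ by thresholding against a chi-square quantile:

```python
# For candidate mean mu, R(mu) is maximized by weights w_i = 1/(n(1 + lam*(x_i - mu))),
# where lam solves sum_i (x_i - mu)/(1 + lam*(x_i - mu)) = 0.
# Then -2 log R(mu) = 2 sum_i log(1 + lam*(x_i - mu)), and
# C(r, n) = { mu : R(mu) >= r } = { mu : -2 log R(mu) <= -2 log r }.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_elr(x, mu):
    """Return -2 log R(mu) for a 1-D sample x."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                       # mu outside the convex hull: R(mu) = 0
    eps = 1e-10                             # keep 1 + lam*d_i strictly positive
    lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
    f = lambda lam: np.sum(d / (1.0 + lam * d))   # monotone decreasing in lam
    lam = brentq(f, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(0)
x = rng.exponential(size=50)

# 95% region for the mean: all mu with -2 log R(mu) <= chi2(1) quantile,
# i.e. r = exp(-cutoff/2) in the notation of the theorem.
cutoff = chi2.ppf(0.95, df=1)
grid = np.linspace(x.min(), x.max(), 400)
inside = [mu for mu in grid if neg2_log_elr(x, mu) <= cutoff]
print(min(inside), max(inside))             # approximate endpoints of the interval
```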
##### MSC:
62G20 Asymptotic properties of nonparametric inference
62H99 Multivariate analysis
62E20 Asymptotic distribution theory in statistics
62G30 Order statistics; empirical distribution functions
62G15 Nonparametric tolerance and confidence regions
62H12 Estimation in multivariate analysis
62H15 Hypothesis testing in multivariate analysis
|
Approximate pi to 24 digits via keyboard
02-01-2015, 05:34 PM
Post: #15
Dieter Senior Member Posts: 2,397 Joined: Dec 2013
RE: Approximate pi to 24 digits via keyboard
(02-01-2015 04:27 PM)Thomas Klemm Wrote: Isn't the result given by the HP-41C or HP-15C more appropriate than returning 12 digits like the HP-48G? Since we can't enter more than 12 digits why should the calculator assume that next 12 digits are all 0? But that's exactly what happens:
Yes, that's right. But in all other cases the calculator also assumes that the input is an exact 10 or 12 digit value. The only other option is returning an interval, i.e. the result for x–0,5 ULP and x+0,5 ULP.
Consider this:
Code:
1003            e^x  =>  3,95699361019 E+435
1003,00000001   e^x  =>  3,95699364976 E+435
1002,99999999   e^x  =>  3,95699357062 E+435
So should the calculator drop the last four digits and return 3,9569936 E+435 for e^1003 instead?
Dieter
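These figures are easy to reproduce with extended precision; here is a rough cross-check using Python's mpmath library (a sketch added for illustration, not something used in the thread):

```python
# Perturbing the argument of e^x by 1e-8 changes the result by a relative 1e-8,
# i.e. in the 9th significant digit -- matching the three values quoted above.
from mpmath import mp, mpf, exp

mp.dps = 20   # work with 20 significant digits

for x in ("1003", "1003.00000001", "1002.99999999"):
    print(x, exp(mpf(x)))
```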
|
# A potentiometer wire has a cross-sectional area of $8 \times 10^{-6}\ \text{m}^2$ and a resistivity of $40 \times 10^{-8}\ \Omega\,\text{m}$. If a current of $0.2$ A flows through it, its potential gradient is
$(a)\;10^{-2} v/m \\(b)\;2 \times 10^{-2}\;v/m \\(c)\;10^{-3} \;v/m \\(d)\;3 \times 10^{-3}\;v/m$
The potential gradient is $\frac{V}{L}$, where $V$ is the potential across a wire of length $L$.
$R = \frac{\rho L}{A} = \frac{40 \times 10^{-8} \times L}{8 \times 10^{-6}} = 5 \times 10^{-2}\,L$
$V = IR = 5 \times 10^{-2}\,L \times 0.2 = 1 \times 10^{-2}\,L$
Potential gradient $= \frac{V}{L} = 10^{-2}\ \text{V/m}$
Hence (a) is the correct answer.
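As a quick sanity check (a Python sketch, not part of the original solution): the length $L$ cancels, so the gradient is simply $I\rho/A$.

```python
# Verify the potential-gradient calculation; L cancels out of V/L = I*rho/A.
rho = 40e-8   # resistivity in ohm-metre (assumed SI units)
A   = 8e-6    # cross-sectional area in m^2
I   = 0.2     # current in ampere

print(I * rho / A)   # 0.01 V/m, i.e. 10^-2 V/m -> option (a)
```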
answered Mar 18, 2014 by
|
# What is path of light in the accelerating elevator?
1. Mathematically, (by mathematically I means by equations) what is path of light in the accelerating elevator?
2. What is the difference between an ordinary derivative and covariant derivative (which is used in curved geodesic)?
You are asking two different questions here.. Perhaps you should create two different threads. – John M Apr 21 '13 at 20:46
By the equivalence principle, the uniformly accelerated frame of the elevator can be treated as a spacetime with a uniform gravitational field. The metric for a spacetime with a uniform gravitational field in the $z$-direction is (with $c=1$)
$$ds^2 = -\left(1+ gz\right)^2 dt^2 + dx^2 + dy^2 + dz^2.$$
Since this metric is invariant under translations in $t, x, y$, we immediately get three Killing vectors $\partial_t, \partial_x, \partial_y$ and three corresponding conserved quantities along a geodesic $\gamma^\mu(\lambda) = (t(\lambda), x(\lambda), y(\lambda), z(\lambda))$:
\begin{align} c_t &= \dot \gamma \cdot \partial_t = -\left(1+ gz\right)^2 \dot t \\ c_x &= \dot \gamma \cdot \partial_x = \dot x \\ c_y &= \dot \gamma\cdot \partial_y = \dot y \end{align}
where overdots denote derivatives with respect to the affine parameter $\lambda$. Light travels along null geodesics, which satisfy $\dot \gamma^2 = 0$; this gives
$$-\left(1+ gz\right)^2\dot t^2 + \dot x^2 + \dot y^2 + \dot z^2 = 0.$$
Combining these results gives
$$-\left(1+ gz\right)^{-2}c_t^2 + c_x^2+c_y^2+\dot z^2 = 0,$$
and differentiating with respect to $\lambda$ we have, all in all,
$$\ddot x = 0, \qquad \ddot y = 0, \qquad \ddot z = -\frac{c_t^2 g}{(1+g z)^3}.$$
If we normalize the affine parameter so that $c_t^2 = 1$ (i.e. $\dot t = 1$ where $z = 0$), then we get
$$\ddot x = 0, \qquad \ddot y = 0, \qquad \ddot z = -\frac{g}{(1+g z)^3}.$$
In this case the light experiences no acceleration in the $x$ and $y$ directions and a position-dependent acceleration in the $z$-direction. In fact, if we Taylor expand the right-hand side of the $z$ equation of motion in the parameter $g$, we find
$$\ddot z = -g + 3z g^2 + \mathcal O(g^3),$$
so for small accelerations $g$ the light simply experiences the acceleration of the elevator downward, plus higher-order corrections.
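A short numerical sketch (an illustration added here, not part of the original answer; it assumes the geodesic equations above with $c = 1$ and $c_t = 1$) integrates the null geodesic and compares the result with the familiar parabola $z \approx -g x^2/2$:

```python
# Integrate the null geodesic of light entering the "elevator field" horizontally.
import numpy as np
from scipy.integrate import solve_ivp

g = 0.01  # elevator acceleration in units where c = 1

def geodesic(lam, y):
    x, z, vx, vz = y
    ax = 0.0                        # x'' = 0
    az = -g / (1.0 + g * z)**3      # z'' = -c_t^2 g/(1+gz)^3 with c_t = 1
    return [vx, vz, ax, az]

# Light enters horizontally at the origin; the null condition with z = 0 and
# vz = 0 fixes vx = 1.
sol = solve_ivp(geodesic, (0.0, 5.0), [0.0, 0.0, 1.0, 0.0], max_step=0.01)

x, z = sol.y[0], sol.y[1]
print(z[-1], -g * x[-1]**2 / 2)     # nearly equal for small g
```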
|