# Deploy with Truffle
danger
Truffle is no longer maintained. It is recommended to use another method to deploy contracts.
Truffle, developed by ConsenSys Software Inc, uses scripts to perform smart contract deployments. It uses a CLI and a few configuration files to make deployments easier and faster.
In this chapter, we will deploy the Raffle smart contract, as developed in the LIGO module onto a testnet using Truffle.
Truffle is part of The Truffle Suite, a set of tools that makes it easier to deploy smart contracts onto a blockchain. It provides:
• Truffle: for smart contract compilation and deployment onto a network.
• Ganache: for running a local blockchain network and test smart contracts on it.
• Drizzle: for easy integration with frontend applications by providing a library to interact with deployed smart contracts.
• Truffle Teams: for monitoring smart contracts
The Truffle Suite is not available for all blockchains. It supports:
• Ethereum
• Tezos
• Corda
• Quorum
• Hyperledger Fabric
Only Truffle and Ganache (still in beta) are available for Tezos for now.
Truffle can compile and deploy LIGO or SmartPy scripts on the Tezos network with a single command.
# Truffle Installation
Truffle can be installed with Docker or npm. The easiest way is to use Truffle from the npm package.
You'll need to have NodeJS v8.9.4 or later on your machine.
Open a terminal and run:
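The tezos-example workflow installs the Tezos-flavored build of Truffle from npm. The @tezos dist-tag below is an assumption based on that workflow; check the current Truffle documentation if the tag has moved:
$ npm install -g truffle@tezos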
## Using Truffle Boxes
A Truffle box is a project already set up for interacting with a smart contract, that can easily and quickly be adjusted to the specific needs of a project. They can be launched instantly and modified with little work. Truffle provides users with a global boxes repository at trufflesuite.com/boxes. More particularly, a Tezos box is available at trufflesuite.com/boxes/tezos-example
The tezos-example box is helpful for the deployment of Dapps.
$ mkdir tezos-example
$ cd tezos-example
$ truffle unbox tezos-example
The truffle unbox command will not create a new tezos-example folder, but will unpack all the content in the current folder.
You can find a Truffle box with SmartPy scripts at github.com/truffle-box/tezos-smartpy-example-box
# Using Truffle
Using Truffle can be divided into two steps:
1. Configuration: modifying scripts to define the way smart contracts are deployed.
2. Compilation and deployment, with the Truffle CLI.
## Project structure overview
A Truffle project follows this structure:
• build: the folder containing the Michelson code compiled by Truffle and used for the contract deployments. The build folder is created or updated after each compilation command.
• contracts: the folder containing all the LIGO smart contracts that Truffle has to compile.
• migrations: the folder containing the Truffle deployment scripts for the deployment of the contracts.
• node_modules: the node modules used by the Truffle project.
• package.json: the file containing the script commands, including one that launches a sandbox with Ganache.
• scripts: the folder containing scripts that can be run from the CLI to execute operations on the smart contracts.
• sandbox: the folder containing two accounts to use on a sandbox environment.
• test: the folder containing JavaScript tests.
• truffle-config.js: the configuration file which defines networks and accounts to be used for the deployment.
## Main Truffle commands
The Truffle CLI provides various commands that can be displayed with:
$ truffle --help
The main commands are:
compile     Compile contract source files
init        Initialize new and empty project
migrate     Run migrations to deploy contracts
networks    Show addresses for deployed contracts on each network
test        Run JavaScript tests
## Compiling smart contracts with Truffle
In this part, the tezos-example box is used as an example.
Truffle is mainly used for smart contract compilation and deployment. It can also launch tests, but other tools such as PyTezos can be used for that.
Compiling smart contracts can be done with:
$ truffle compile
Input: valid smart contracts (i.e. smart contracts that compile), located in the contracts directory.
Output: Truffle artifacts, stored in the build/contracts directory.
### About LIGO smart contracts
Truffle can compile LIGO smart contracts, using the installed LIGO compiler, but they MUST be stored in the contracts folder.
Truffle considers each LIGO file as an independent smart contract. Thus, if a smart contract is split into several LIGO files, Truffle will try to compile each file as a separate smart contract, resulting in a failed compilation. There is a workaround for this behavior:
1. Create a new folder in the project root directory, called src for instance.
2. Move all your smart contract files into src/.
3. Create a LIGO file in the contracts folder, importing the main file from src/.
Truffle will then successfully compile your split smart contract.
### Truffle artifacts
For each LIGO file found in the contracts folder, Truffle will yield an artifact in build/contracts. An artifact is a JSON file containing the compiled smart contract, the LIGO source code and the deployment information.
The example below is the artifact yielded for the Counter.ligo file from the LIGO module:
{
  "contractName": "Counter",
  "abi": [],
  "michelson": "<Michelson code as json>",
  "source": "<Content of the LIGO file from contracts>",
  "sourcePath": "path/to/truffle-example/contracts/Counter.ligo",
  "compiler": {
    "name": "ligo",
    "version": "next"
  },
  "networks": {},
  "schemaVersion": "3.2.0-tezos.1",
  "updatedAt": "2021-03-19T14:27:16.197Z"
}
These artifacts are then used in the deployment scripts.
### Hands-on
Let's create a Truffle project for the Raffle smart contract, and compile the LIGO code into Michelson code with Truffle.
First, we will download the tezos-example box and then remove the example contracts:
$ mkdir truffle-raffle
$ cd truffle-raffle
$ truffle unbox tezos-example
$ rm -rf contracts/* migrations/*
Let's put the Raffle smart contract into our Truffle project:
$ touch contracts/Raffle.ligo
Let's copy and paste the LIGO code into this file.
Everything is ready for the compilation:
$ truffle compile
A new JSON file has been created in the build/contracts/ directory.
## Deploying smart contracts with Truffle
At this point, the smart contract is compiled and ready to be deployed. However, Truffle needs to be configured so that the deployment is done on a given network, with a given account, a given initial storage, etc.
### Using an account for the deployment
Originating a contract costs some Tez. Thus, an account holding funds is necessary. Accounts with funds on testnets can freely be created as a JSON file using a faucet.
### Adding a network
#### Defining accounts
The network configuration is handled in the truffle-config.js file. It can execute any JavaScript code needed for the configuration. Some networks are already defined: Mainnet and Localhost. However, as the Tezos protocol is constantly evolving, new networks will probably have to be added.
Each network is associated with an account. There are two ways of importing an account:
• Importing an account into the truffle-config.js file:
const {mnemonic, secret, password, email} = require("/path/to/faucet.json");
Truffle will activate this account before the contract origination.
• Setting already activated accounts in the scripts folder. Accounts can be defined according to the network. By default, a sandbox folder is present, with two defined accounts (these two accounts are found in any sandbox). New accounts can be defined by creating a new folder, named after the network name by convention, with an accounts.js file:
module.exports = {
  account_name: {
    pkh: "<pkh>",
    sk: "<sk>",
    pk: "<pk>"
  },
  <...>
};
Obviously, faucet accounts can be imported into the accounts.js file as well.
#### Defining networks
The networks are defined in the truffle-config.js file. It exports an object that defines networks. Each key in networks sets a network, which requires:
• host: an RPC node (https://tezostaquito.io/docs/rpc_nodes/) or a local node (as shown in the development network).
• port: the running node's port.
• network_id: each Tezos network has an id. For instance, the Florencenet id is NetXxkAx4woPLyu. * matches any network.
• type: the network type, here tezos.
• A private key to create a transaction, either:
  • secretKey
  • secret, mnemonic, password, email
module.exports = {
  // see <http://truffleframework.com/docs/advanced/configuration>
  // for more details on how to specify configuration options!
  networks: {
    development: {
      host: "http://localhost",
      port: 8732,
      network_id: "*",
      secretKey: alice.sk,
      type: "tezos"
    }
  }
}
#### Hands-on
We will deploy our Raffle smart contract onto Edonet, so we need to add this network to the truffle-config.js file.
First, we need a faucet account. Let's download a faucet from https://faucet.tzalpha.net/ into our root project folder.
Let's define an Edonet network that will use this faucet:
const {alice} = require('./scripts/sandbox/accounts');
const {mnemonic, secret, password, email} = require("./faucet.json");
module.exports = {
  networks: {
    development: {
      host: "http://localhost",
      port: 8732,
      network_id: "*",
      secretKey: alice.sk,
      type: "tezos"
    },
    edonet: {
      host: "https://edonet-tezos.giganode.io",
      port: 443,
      network_id: "*",
      secret,
      mnemonic,
      password,
      email,
      type: "tezos"
    },
    [...]
  }
};
### Writing the migration scripts
Now that the smart contracts and the deployment network are ready, the next step is to write the deployment script. Such scripts are also called migrations: they usually update the storage structure or initial data and the smart contract code.
These scripts are located in the migrations directory. Each migration is a JavaScript file defining the deployment tasks, and it can execute any JavaScript code. Each migration filename starts with a number, followed by a title. Truffle will run the migrations in ascending order. For instance, the tezos-example box comes with three migrations:
1_initial_migration.js
2_deploy_simple_storage.js
3_deploy_counter.js
A migration script defines:
• the initial storage of the smart contract(s)
• the contract deployment steps: the order of deployment of smart contracts, networks, accounts
These migration scripts are used the same way whether you deploy your smart contract for the first time or you deploy a new version of a smart contract.
#### Importing the smart contract to deploy
The first step is to specify which smart contract is to be deployed:
var myContract = artifacts.require("MyContract");
Truffle will look for a MyContract.ligo file in the contracts directory. Thus, to import a contract, the filename of the contract (without the extension) is used (artifacts is a Truffle keyword). It is possible to import several smart contracts here:
var firstContract = artifacts.require("FirstContract");
var secondContract = artifacts.require("SecondContract");
#### Defining the initial storage
A smart contract defines a storage. When originated, the initial storage must be set, and the storage must be compliant with the structure defined in the smart contract to be deployed: the names and types must be respected. The initial storage is declared with a JavaScript syntax. Two modules can come in handy:
const { MichelsonMap } = require("@taquito/taquito"); // for Michelson maps
const web3 = require("web3"); // for bytes
Below is the matching table between LIGO and JavaScript:
LIGO | JavaScript
---|---
List, Tuple | []
Big_map, Map | const bigMap = new MichelsonMap(); bigMap.set(key, values) (from the taquito module)
string, address | string
bytes | web3.utils.asciiToHex(string_to_convert).slice(2)
int, nat, mutez | number
record | Object {}
timestamp | Date.now()
Here is a migration example, defining a storage with essential types:
// modules import
const { MichelsonMap } = require("@taquito/taquito"); // used for big maps
const web3 = require("web3"); // used for bytes
// contract to deploy
var MyContract = artifacts.require("MyContract");
// initial storage definition
const admin = "tz1ibMpWS6n6MJn73nQHtK5f4ogyYC1z9T9z"; // address
const emptyBigMap = new MichelsonMap(); // empty big map
const bigMapWithAnElement = new MichelsonMap(); // big map with one entry
bigMapWithAnElement.set(
  1,
  {
    param1: 5,
    param2: "second param"
  }
); // big map with an object (a record in LIGO) as value, and an int as key
const emptySet = []; // empty set
const myBytes = web3.utils.asciiToHex("string to convert into bytes").slice(2); // bytes
const counter = 10; // int
const initialStorage = {
  "contractAdmin": admin,
  "contractFirstBigMap": emptyBigMap,
  "contractSecondBigMap": bigMapWithAnElement,
  "contractSet": emptySet,
  "contractCounter": counter,
};
Any type and structure change in the LIGO smart contract storage must be mirrored in the initialStorage variable. This way, the evolution of the storage used can be versioned.
#### Deployment
The last step of the migration is the deployment definition. It is a function export which defines how the contracts should be deployed. This function takes three arguments:
• deployer: the Truffle object which deploys a contract.
• network: the network used.
• account: the account used.
##### Deployer
The deployer object deploys the code on the specified network. The deployer takes the initialStorage object and a few options as input. A minimal viable migration could be:
var MyContract = artifacts.require("MyContract");
const initialStorage = {};
module.exports = (deployer, network, account) => {
  // deployment steps
  deployer.deploy(MyContract, initialStorage);
};
The execution returns some data such as the contract address, the cost, etc.
##### Network
It can be useful to deploy a smart contract differently according to the network. For instance, if the storage holds an administrator address, it is likely to be different on the mainnet and on a testnet. The migration can be branched according to the network, like this:
var MyContract = artifacts.require("MyContract");
const edonetInitialStorage = {};
const mainnetInitialStorage = {};
module.exports = (deployer, network, account) => {
  if (network === "edonet") {
    deployer.deploy(MyContract, edonetInitialStorage);
  } else {
    deployer.deploy(MyContract, mainnetInitialStorage);
  }
};
The deployment changes according to the network: here, the storage is different on each network.
##### Account
The admin account can be set at deployment:
var MyContract = artifacts.require("MyContract");
const initialStorage = {admin: "tz1ibMpWS6n6MJn73nQHtK5f4ogyYC1z9T9z"};
module.exports = (deployer, network, account) => {
  deployer.deploy(MyContract, {...initialStorage, admin: account[0]});
};
##### Migrating several contracts
A migration can deploy several contracts at the same time. This is useful when data from one deployment has to be used for the deployment of another contract. Below is an example with two contracts: the second contract needs the address of the first contract in its initial storage.
var firstContract = artifacts.require("firstContract");
var secondContract = artifacts.require("secondContract");
const initialStorage = {admin: "tz1ibMpWS6n6MJn73nQHtK5f4ogyYC1z9T9z", contractToCall: ""};
module.exports = (deployer, network, account) => {
  deployer.deploy(firstContract).then(function () {
    return deployer.deploy(secondContract, {
      ...initialStorage,
      contractToCall: firstContract.address
    });
  });
};
### Hands-on
Let's create the migration file for our Raffle contract: 1_deploy_raffle.js.
We need to import the contract (step 1), then define the initial storage, which should have the following fields:
• admin: address
• close_date: timestamp
• jackpot: mutez
• description: string
• players: address list
• sold_tickets: big_map from nat to address
• raffle_is_open: boolean
• winning_ticket_number_hash: bytes
taquito and web3 will be used for the big_map and bytes types. The initial storage that is defined contains an open raffle (step 2). Finally, the deployment is defined and the admin of the contract is set to the address used for the deployment (step 3).
const Raffle = artifacts.require("Raffle"); // step 1
// step 2
const { MichelsonMap } = require("@taquito/taquito");
const web3 = require("web3");
const admin = "";
const closeDate = Date.now() + 10;
const jackpot = 100;
const description = "";
const players = [];
const soldTickets = new MichelsonMap();
const raffleIsOpen = true;
const winningTicketHash = web3.utils.asciiToHex("ec85151eb06e201cebfbb06d43daa1093cb4731285466eeb8ba1e79e7ee3fae3").slice(2);
const initialStorage = {
  "admin": admin,
  "close_date": closeDate.toString(),
  "jackpot": jackpot,
  "description": description,
  "players": players,
  "sold_tickets": soldTickets,
  "raffle_is_open": raffleIsOpen,
  "winning_ticket_number_hash": winningTicketHash
};
// step 3
module.exports = (deployer, network, account) => {
  deployer.deploy(Raffle, {...initialStorage, admin: account[0]});
};
### Running a migration
Everything is now ready for deployment: the network and the migration account are set, and the initial storage and the deployment step are defined. From the project directory, you can run:
$ truffle migrate --network <network_name>
This command can be broken down into two steps:
1. Verifying that the smart contracts are already compiled. If not, it will launch a compilation.
2. Deploying the smart contracts, following the migration scripts under the migration folder. Before the deployment, Truffle checks if the initial storage is compliant with its Michelson definition. If not, it will raise an exception.
Each migration generally takes up to 30 seconds. Here is the console output:
1_deploy_raffle.js
==================

   Replacing 'Raffle'
   ------------------
   > operation hash:      onoMN7C2YNwJPeXtkXFwTrcitD9udgEQNdBGTaZD2CHVjpNsTBQ
   > Blocks: 0            Seconds: 4
   > contract address:    KT1N3WFAwMUvqnKMJkNrLCnWBRkLTFvRw7Vk
   > block number:        206080
   > block timestamp:     2021-04-26T14:38:53Z
   > account:             tz1cGftgD3FuBmBhcwY24RaMm5D2UXLr5LHW
   > balance:             28390.642777
   > gas used:            11056
   > storage used:        2101 bytes
   > fee spent:           3.477 mtz
   > burn cost:           0.5895 tez
   > value sent:          0 XTZ
   > total cost:          0.592977 XTZ

   > Saving artifacts
   -------------------------------------
   > Total cost:          0.592977 XTZ

Summary
=======
> Total deployments:   1
> Final cost:          0.592977 XTZ
The most useful data here is the deployed contract address (to later interact with it), the cost of the origination and the transaction hash (to check its status on an explorer for instance).
You can find some of these data in the automatically generated JSON file under the build/contracts folder:
{"contractName": "Raffle","abi": [],"michelson": "<michelson_code>","source": "<ligo_code>","sourcePath": "/path/to/contracts/Raffle.ligo","compiler": {"name": "ligo","version": "next"},"networks": {"NetXSgo1ZT2DRUG": {"events": {},"links": {},"address": "KT18uWmKP5gTVh7FKHwRRwjE6XVAsm7WLHSF","transactionHash": "ooWVHFdjJbvGYDp9CUhUzonRfobvnHExzqsZBQmbEHpmcuveh6Q"}},"schemaVersion": "3.2.0-tezos.1","updatedAt": "2021-04-02T08:29:37.743Z","networkType": "tezos"}
## Interacting with a deployed contract
Once the migration is done, it can be useful to verify that the contract is deployed correctly. There are many tools for this: CLI, libraries, explorers, etc. In this section, we'll keep it simple with a GUI.
tzstats.com provides information about any public Tezos network: transactions, accounts, contracts (origination, storage, entrypoints), etc.
If you deployed your contract on a testnet, e.g. Edonet, you can check its status on the corresponding version of tzstats: edo.tzstats.com/<contract_address>.
You should see a "New Smart contract created by ..." line in the Calls section. The contract storage is also visible.
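If you prefer to check from code instead of an explorer, here is a minimal sketch using Taquito (the library already pulled in by the migrations). The RPC node is the Edonet node used earlier; the KT1 address is a placeholder for the address printed by your own migration:
const { TezosToolkit } = require("@taquito/taquito");

const Tezos = new TezosToolkit("https://edonet-tezos.giganode.io");

async function readStorage(contractAddress) {
  // fetch the contract abstraction from the node, then its current storage
  const contract = await Tezos.contract.at(contractAddress);
  const storage = await contract.storage();
  console.log(storage);
}

readStorage("KT1..."); // replace with your deployed contract address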
# Conclusion
The first step in developing a Dapp is to deploy the smart contracts. Truffle takes LIGO code, compiles it into Michelson code and deploys it onto any public or private network.
Each migration needs an initial storage that is compliant with the storage type of the Michelson code.
Thanks to its configuration and easily readable and versioned migration files, Truffle is an essential tool throughout the development and deployment of a Dapp.
# Secure INSERTs with MySQLi
Is this code well protected, and if not, could you tell me how it might be exploited and how to secure it?
if(isset($_POST['vrsta_predmeta']) AND !empty($_POST['vrsta_predmeta']) AND
   isset($_POST['res_text']) AND isset($_POST['glavni_dug']) AND isset($_POST['res']) AND
   isset($_POST['zaklj']) AND isset($_POST['povjerilac']) AND isset($_POST['duznik']) AND
   isset($_POST['predmet_zaveden'])){
    $racunob = trim($_POST['rac']);
    $obrazlozenje = trim($_POST['obr']);
    $ob_text = trim($_POST['res_ob']);
    $res_text = trim($_POST['res_text']);
    $vrsta_pre = trim($_POST['vrsta_predmeta']);
    $izvrsenje = trim(strtolower($_POST['res']));
    $obrazac = trim($_POST['zaklj']);
    $povjerilac = $_POST['povjerilac'];
    $duznik = $_POST['duznik'];
    $datum = trim($_POST['predmet_zaveden']);

    foreach($povjerilac as $key){
        $lica = $db->prepare("INSERT INTO p_lica(povjerilac, doc_br, dokument_vlasnik) VALUES('$key', '$dok_broj', '$ses_val')");
    }
    foreach($duznik as $key1){
        $lica1 = $db->prepare("INSERT INTO d_lica(duznik, doc_br, dokument_vlasnik) VALUES('$key1', '$dok_broj', '$ses_val')");
    }
    $insert_dok = $db->prepare("INSERT INTO document_tbl(dokument_vlasnik, dokument_broj, vrsta_dokumenta, zakljucak, resenje_izvrsenja, datum, resenje_text, obrazlozenje, obtext, racunob) VALUES('$ses_val', '$dok_broj', '$vrsta_pre', '$obrazac', '$izvrsenje', '$datum', '$res_text', '$obrazlozenje', '$ob_text', '$racunob')");

    if($lica->execute() AND $insert_dok->execute() AND $lica1->execute()){
        $lica->close();
        $lica1->close();
        $insert_dok->close();
        echo '<script>new Messi(\'Dokument uspjesno dodat.\', {title: \'Obavjestenje\', titleClass: \'success\', buttons: [{id: 0, label: \'Close\', val: \'X\'}]});</script>';
        header('location:login.php');
    }else{
        echo '<script>new Messi(\'Dokument uspjesno dodat.\', {title: \'Obavjestenje\', titleClass: \'anim warning\', buttons: [{id: 0, label: \'Close\', val: \'X\'}]});</script>';
    }
}
## 2 Answers
No, this code is not secure, as you are using prepared statements wrong. See here for correct usage of mysqli prepared statements (you should not ever have user-supplied input in the prepare statement, it is only for SQL syntax; add the user input via bind later on).
For example, your code might look like this:
$sql = $db->prepare('INSERT INTO p_lica (povjerilac, doc_br, dokument_vlasnik) VALUES(?, ?, ?)'); // note that there is no user input in prepare
$sql->bind_param('iss', $key, $dok_broj, $ses_val); // s for string, i for integer. use d for double, b for blob

foreach($povjerilac as $key) { // set values for $key, which is bound to the first ? parameter
    $sql->execute(); // insert new row with current key
}
$sql->close();
Misc
• formatting: your code is very badly formatted. If this is just a copy-paste error it's ok, but otherwise, you should definitely fix it.
• variable names: the standard is to use english names, if you can.
• your header call seems wrong. From the documentation: Remember that header() must be called before any actual output is sent. You should also use a complete URL.
• Ok, thanks for the response. If I don't use bind_param, how should the code look then? – Vladimir Oct 27 '14 at 15:12
• @Vladimir: Like a blank sheet of paper: if you're not going to use prepared statements, then don't INSERT user data. Use prepared statements or don't query. Full stop. – Elias Van Ootegem Oct 27 '14 at 15:23
• @Vladimir why wouldn't you use bind_param? You should. I added an example. – tim Oct 27 '14 at 15:25
• I thought it was secure even without using bind_param. Should I use bind_param for all variables, or can I use it just for a few? Also, should I use bind_param with select and update too? – Vladimir Oct 27 '14 at 15:28
• @Vladimir like I said in my answer, you shouldn't ever have user input in prepare. With that in mind, you don't have all that many other options. And you should use bind for all variables, and also for select and update. – tim Oct 27 '14 at 15:32
Use PDO to secure your SQL; it's built into default PHP and works with 12 database types. It also makes it a lot easier when it comes to making prepared statements.
PDO:
$params = array(
    ':username'   => 'test',
    ':email'      => $mail,
    ':last_login' => time() - 3600,
);

$stmt = $pdo->prepare('
    SELECT *
    FROM users
    WHERE username = :username
    AND email = :email
    AND last_login > :last_login');
$stmt->execute($params);
MySQLi:
$username = 'test';
$lastLogin = time() - 3600;

$query = $mysqli->prepare('
    SELECT *
    FROM users
    WHERE username = ?
    AND email = ?
    AND last_login > ?');
$query->bind_param('sss', $username, $mail, $lastLogin);
$query->execute();
As you can see, a big bonus is that you can use :name instead of ?.
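Applied to one of the question's INSERTs, a sketch with PDO could look like the following (this assumes $db is a PDO connection rather than MySQLi; with MySQLi you would use ? placeholders and bind_param as shown above):
$stmt = $db->prepare(
    'INSERT INTO p_lica (povjerilac, doc_br, dokument_vlasnik)
     VALUES (:povjerilac, :doc_br, :vlasnik)'
);
foreach ($povjerilac as $key) {
    // each execute() inserts one row; user input never touches the SQL string
    $stmt->execute([
        ':povjerilac' => $key,
        ':doc_br'     => $dok_broj,
        ':vlasnik'    => $ses_val,
    ]);
}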
# Can you rearrange vectors in a set? And another misc question.
1. Jan 28, 2016
Suppose you have a set of vectors v1 v2 v3, etc.
However large they are, suppose they span some area, which I think is typically represented by
Span {v1, v2, v3}
But I mean, if you're given these vectors, is there anything wrong with rearranging them? Because there's a theorem- that
"an indexed set S= {v1, v2... vp} of more than one vectors is linearly dependent if at least one vector is in a linear combination of the others."
So if S is linearly dependent, any vector in the set is a combination of the preceding vectors?
Or did I read that wrong, and it just means a certain vector, possibly more than one is a lin comb of some other vectors?
However, the theorem I'm reading seems to really emphasize that there's something special about "preceding vectors". So if you have any set, is interchanging vectors allowed?
I feel like there is nothing wrong with this. Is there some situation when this is allowed and another when it isn't, maybe?
(I've just started linear algebra for a few weeks so I don't know any complex scenarios)
But it seems that this theorem suggests that there's something important to the permutation of these vectors.
2. Jan 28, 2016
### andrewkirk
Reordering the vectors in a spanning set has no effect. There's nothing wrong with it.
When we talk about a vector space basis, we may wish to imply an ordering, because without an ordering, we cannot speak unambiguously of the representation of a vector in that basis, which we often wish to do. If we take that definition of 'basis' then, for every set of linearly independent, spanning vectors in an n-dimensional vector space, there are n! different normalized bases, corresponding to the number of ways the vectors could be reordered.
My guess is that the reference to 'preceding' is just about the method by which one tests linear independence. One way to do that is to label the vectors as v1, v2, ... , vn. Then test that v2 is independent of v1, Next test that v3 is independent of v1 and v2, and so on. But that ordering is just a convenience used in performing the test, not an intrinsic requirement of the set.
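For example, take v1 = (1,0), v2 = (0,1), v3 = (1,1). Testing in that order: v2 is not a multiple of v1, but v3 = v1 + v2, so the set is linearly dependent. Relabelling the three vectors only changes which step detects the dependence, not the conclusion.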
3. Jan 28, 2016
Thanks.
4. Jan 28, 2016
But for instance, does this mean if you try to solve a matrix of [v1 v2 v3] and a matrix with just rearranged vectors like [v3 v1 v2]......... it's the same??
5. Jan 28, 2016
### andrewkirk
By 'solve a matrix' do you mean calculate its (multiplicative) inverse? If so then, no, the answer is not the same.
6. Jan 28, 2016
Ummmm I'm not sure.
Does it make a difference how you solve it?
For instance I've only learned about Ax=b, using the matrix A as a function. And also solving span{v1 v2 v3}=0, to test for interdependence.
I don't know about what the inverse is.
But maybe I meant: if you switch the positions of vectors in a set, isn't that equivalent to swapping columns in a matrix? In that case, is a matrix [v1 v2 v3] equivalent in every respect to [v3 v1 v2]?
7. Jan 28, 2016
### andrewkirk
It's equivalent in the sense that
$$[v3\ v1\ v2] = [v1\ v2\ v3] \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array} \right)$$
[Or something like that. I often get my rows and cols muddled up in matrix mults]
Equation Ax=b will have a completely different solution from A*x=b where A* is A with shuffled columns.
However it will have the same solution as A*x=b*, where b* is b with the same shuffle applied to its components as was applied to the columns of A to get A*.
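For instance, if x = (x1, x2, x3) solves [v1 v2 v3]x = b, then x* = (x3, x1, x2) solves [v3 v1 v2]x* = b: each vi is still multiplied by the same coefficient, the coefficients just sit in different slots of the solution vector.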
8. Jan 29, 2016
# Possible number of DFAs, NFAs, DPDAs, NPDAs, NDTMs and DTMs for various input parameters
I came across a problem asking for the possible number of DFAs for a given number of states and alphabet. I started wondering whether we can find the possible number of different automata for a given number of states, input alphabet, stack alphabet, etc.
Given,
$$Q$$ is set of states
$$Σ$$ is input alphabet
$$Γ$$ is stack alphabet for PDAs and tape alphabet for TMs
$$L$$ means move head to left in TM
$$R$$ means move head to right in TM
$$ϵ$$ is empty symbol
I came up with following table:
The first $$Q$$ in each cell of the "Possible number of machines" column is the number of possible start states. The last $$2^Q$$ is the number of possible combinations of final states. The remaining middle part is the number of transition combinations. Also, I have used the symbols directly to denote the number of elements in each set; for example, $$Q$$ is the set of states, but I also use $$Q$$ to denote the number of states.
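For instance, following that cell structure, the DFA row would contain $$Q \cdot Q^{Q \cdot Σ} \cdot 2^{Q},$$ i.e. $$Q$$ choices of start state, one of $$Q$$ next states for each of the $$Q \cdot Σ$$ (state, input symbol) pairs, and $$2^{Q}$$ choices for the set of final states.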
Am I correct with these counts?
PS: this is a combinatorial problem in the context of automata theory.
# Thread: Trigonometric Equations - Different Methods/Different Results
1. ## Trigonometric Equations - Different Methods/Different Results
This is the equation:
1 + Sin(theta) = 2 Cos squared(theta)
The method I used goes like this:
1 + Sin(theta) = 2[1 - Sin squared(theta)] uses Pythagorean Identities
(1 + Sin(theta)) / [(1 + Sin(theta)) (1 - Sin(theta))] = 2 divide by [1 - Sin squared(theta)]
1 / (1 - Sin(theta)) = 2
1 - Sin(theta) = 1/2
-Sin(theta) = -1/2
Sin(theta) = 1/2
theta = PI/6 + 2k(PI) or theta = 5PI/6 + 2k(PI)
restrict the solution to 0 < theta < 2PI
therefore the solution is {PI/6, 5PI/6}
HOWEVER you get a MORE complete solution if you work via this method:
1 + Sin(theta) = 2 Cos squared(theta) original equation
1 + Sin(theta) = 2[1 - Sin squared(theta)] uses Pythagorean Identities
1 + Sin(theta) = 2 - 2 Sin squared(theta) multiplies out the right side
2 Sin squared(theta) + Sin(theta) - 1 = 0 rearranges the equation
(2 Sin(theta) - 1) (Sin(theta) + 1) = 0 factors left side
2 Sin(theta) - 1 = 0 or Sin(theta) + 1 = 0
Sin(theta) = 1/2 or Sin(theta) = -1
theta = {PI/6, 5PI/6} or theta = 3PI/2
Clearly the second method yields a more complete answer, and I verified it with a TI-89. Yet the first method is algebraically sound, as far as I can tell. Any thoughts on this?
2. Because you cannot divide by,
$1-\sin^2 \theta$!!!!
Because it might be zero!
Then you divided by zero!
(Oh no!)
Once you divide by zero you get strange results.
That was your fault.
-----
$1+\sin x=2(1-\sin^2 x)$
Thus,
$(1+\sin x)-2(1+\sin x)(1-\sin x)=0$
Factor,
$(1+\sin x)[1-2(1-\sin x)]=0$
Thus,
$(1+\sin x)(2\sin x-1)=0$
Now set each factor equal to zero.
3. Originally Posted by spiritualfields
This is the equation:
1 + Sin(theta) = 2 Cos squared(theta)
The method I used goes like this:
1 + Sin(theta) = 2[1 - Sin squared(theta)] uses Pythagorean Identities
1 + Sin(theta)
------------------------------ = 2 divide by [1 - Sin squared(theta)]
(1 + Sin(theta)) (1 - Sin(theta))
This is only valid when both $1 + \sin(\theta) \ne 0$ and $1 - \sin(\theta) \ne 0$, so these cases need to be checked separately.
RonL
4. Thank you, The Perfect Hacker and Captain Black. Interestingly, with this particular equation, the value that did not show up when I divided by (1 + Sin(theta)) (1 - Sin(theta)) was 3PI/2, which in fact is the value that creates the divide by zero situation for this equation ... (1 + Sin(3PI/2)).
Ed
## Geometry: Common Core (15th Edition)
$x = 3$ $y = 4$
Parallelograms have opposite sides that are congruent. In order for $RSTV$ to be a parallelogram, we must have $RV$ and $TS$ equal to one another and $VT$ and $SR$ equal to one another.
Let's set $RV$ and $TS$ equal to one another first:
$RV = TS$
Let's plug in what we are given in the diagram:
$2x + 3 = y + 5$
Now, let's set $VT$ and $SR$ equal to one another:
$VT = SR$
Plug in what we are given:
$5x = 4y - 1$
We have two equations and two variables. We can use the elimination method by setting up the system of equations to solve for one variable:
$2x + 3 = y + 5$
$5x = 4y - 1$
Let's get all the variables on one side and the constants on the other:
$2x - y = 2$
$5x - 4y = -1$
We have to modify one of the equations because we need one of the variables in both equations to differ only in sign. Let's multiply the first equation by $-4$:
$-8x + 4y = -8$
$5x - 4y = -1$
Now, we can add the two equations together:
$-3x = -9$
Divide each side by $-3$ to solve for $x$:
$x = 3$
Now that we have the value for $x$, we can plug this value for $x$ into one of the original equations to find $y$:
$5x = 4y - 1$
Substitute $3$ for $x$:
$5(3) = 4y - 1$
Multiply to simplify:
$15 = 4y - 1$
Add $1$ to both sides of the equation to solve for $y$:
$4y = 16$
Divide each side by $4$ to solve for $y$:
$y = 4$
# example of a non-amenable l.c. group such that $C_r^*(G)$ satisfies the UCT
Are there any known examples of non-amenable locally compact (or, more restrictively, non-amenable discrete) groups $$G$$ for which the reduced group $$C^*$$-algebra $$C_r^*(G)$$ satisfies the universal coefficient theorem (UCT)? In this case, $$C_r^*(G)$$ is non-nuclear. For example, $$G=\mathbb{F}_2$$, the free non-abelian group on 2 generators, is known to be non-amenable. However, $$C^*(\mathbb{F}_2)$$ is exact. Still, I don't know whether or not this $$C^*$$-algebra satisfies the UCT.
Both $$C^\ast(\mathbb F_2)$$ and $$C^\ast_r(\mathbb F_2)$$ satisfy the UCT. This is the special case of the following:
$$\mathbf{Theorem}$$. If $$G$$ and $$H$$ are countable, discrete, amenable groups, then $$C^\ast(G\ast H)$$ and $$C^\ast_r(G \ast H)$$ are $$KK$$-equivalent and satisfy the UCT.
$$\mathbf{Proof}$$. By Theorem 2.4 (c) in Cuntz' paper "$$K$$-theoretic amenability for discrete groups" it follows that $$G\ast H$$ is $$K$$-amenable and thus $$C^\ast(G\ast H)$$ and $$C^\ast_r(G\ast H)$$ are $$KK$$-equivalent. In the beginning of Section 3 of the same paper, Cuntz shows/remarks that $$C^\ast(G\ast H)$$ is $$KK$$-equivalent to the pull-back $$$$C^\ast(G) \oplus_{\mathbb C} C^\ast(H) = \{ (x,y) \in C^\ast(G) \oplus C^\ast(H) : t_G(x) = t_H(y)\}$$$$ via the trivial representations $$t_G$$ and $$t_H$$, so it suffices to show that this $$C^\ast$$-algebra satisfies the UCT. It fits into a short exact sequence $$$$0 \to I(G) \to C^\ast(G) \oplus_{\mathbb C} C^\ast(H) \to C^\ast(H) \to 0$$$$ where $$I(G) = \mathrm{ker} \, t_G$$ is the augmentation ideal. As $$C^\ast(H)$$ is nuclear the sequence is semi-split, so $$C^\ast(G) \oplus_{\mathbb C} C^\ast(H)$$ satisfies the UCT provided that $$C^\ast(H)$$ and $$I(G)$$ satisfy the UCT, by the 2-out-of-3-property for satisfying the UCT. $$C^\ast(H)$$ and $$C^\ast(G)$$ satisfy the UCT by Tu's theorem. As $$I(G)$$ fits into the split short exact sequence $$$$0 \to I(G) \to C^\ast(G) \to \mathbb C \to 0,$$$$ and as $$C^\ast(G)$$ and $$\mathbb C$$ satisfy the UCT, so does $$I(G)$$ which completes the proof. QED
Note that for the free group $$\mathbb F_2 = \mathbb Z \ast \mathbb Z$$ one does not need to use Tu's deep theorem to obtain UCT.
To my knowledge, the only known group $$C^\ast$$-algebras which do not satisfy the UCT are $$C^\ast_r(G)$$ when $$G$$ is a countable, discrete group with property (T) and the Akemann-Ostrand property; see Skandalis' "Une Notion de Nuclearite en K-Theorie". By also assuming that the group is residually finite, one can show that $$C^\ast(G)$$ does not satisfy the UCT.
PmWiki
PmWiki includes a script called upload.php that allows users to upload files to the wiki server using a web browser. Uploaded files (also called attachments) can then be easily accessed using markup within wiki pages. This page describes how to install and configure the upload feature.
PmWiki takes a somewhat paranoid, but justifiable, stance when it comes to the uploads feature. Thus, the default settings for uploads tend to restrict the feature as much as possible:
• The upload function is disabled by default
• Even if you enable it, the function is password locked by default
• Even if you remove the password, you're restricted to uploading files with certain names, extensions, and sizes
• The characters that may appear in upload filenames are (default) alphanumerics, hyphen, underscore, dot, and space (see also here).
• The maximum upload size is small (50K by default)
This way the potential damage is limited until/unless the wiki administrator explicitly relaxes the restrictions.
Keep in mind that letting users (anonymously!) upload files to your web server does entail some amount of risk. The upload.php script has been designed to reduce the hazards, but wiki administrators should be aware that the potential for vulnerabilities exist, and that misconfiguration of the upload utility could lead to unwanted consequences.
By default, authorized users are able to overwrite files that have already been uploaded, without the possibility of restoring the previous version of the file. If you want to disallow users from being able to overwrite files that have already been uploaded, add the following line to config.php:
$EnableUploadOverwrite = 0;
Alternatively, an administrator can keep older versions of uploads. An administrator can also configure PmWiki so the password mechanism controls access to uploaded files.
## Basic installation
The upload.php script is automatically included from stdconfig.php if the $EnableUpload variable is true in config.php. In addition, config.php can set the $UploadDir and $UploadUrlFmt variables to specify the local directory where uploaded files should be stored, and the URL that can be used to access that directory. By default, $UploadDir and $UploadUrlFmt assume that uploads will be stored in a directory called uploads/ within the current directory (usually the one containing pmwiki.php). In addition, config.php should also set a default upload password (see PasswordsAdmin).
Thus, a basic config.php for uploads might look like:
<?php if (!defined('PmWiki')) exit();
$EnableUpload = 1;$UploadPermAdd = 0;
$DefaultPasswords['upload'] = crypt('secret');
If you have edit passwords and wish to allow all users with edit rights to upload, instead of $DefaultPasswords['upload'] you can set $HandleAuth['upload'] = 'edit'; in config.php.
Important: do NOT create the uploads directory yet! See the next paragraph.
You may also need to explicitly set which filesystem directory will hold uploads and provide a URL that corresponds to that directory, like:
$UploadDir = "/home/foobar/public_html/uploads";
$UploadUrlFmt = "http://example.com/~foobar/uploads";
Note: In most installations, you don't need to define or change these variables; usually PmWiki can detect them (and if you do change them, uploads may simply not work).
### Upload directory configuration
Uploads can be configured site-wide, by-group (the default), or by-page by changing $UploadPrefixFmt in config.php. This determines whether all uploads go in one directory for the site, an individual directory for each group, or an individual directory for each page. The default is to organize uploads by group.
It is recommended that the $UploadPrefixFmt variable defined in config.php is the same for all pages in the wiki, and not different in group or page local configuration files. Otherwise you will be unable to link to attachments in other wikigroups.
#### Single upload directory
For site-wide uploads, use:
$UploadPrefixFmt = '';
To organize uploads by page, use:
$UploadPrefixFmt = '/$Group/$Name';
You may prefer uploads attached per-page rather than per-group or per-site if you plan to have many files attached to individual pages. This setting simplifies the management of picture galleries, for example. (In a page, you can always link to attachments to other pages.)
### The upload directory
For the upload feature to work properly, the directory given by $UploadDir must be writable by the web server process, and it usually must be in a location that is accessible to the web somewhere (e.g., in a subdirectory of public_html). Executing PmWiki with uploads enabled will prompt you with the set of steps required to create the uploads directory on your server (it differs from one server to the next). Note that you are likely to be required to explicitly create writable group- or page-specific subdirectories as well!
Once the upload feature is enabled, users can access the upload form by adding "?action=upload" to the end of a normal PmWiki URL. The user will be prompted for an upload password similar to the way other pages ask for passwords (see Passwords and PasswordsAdmin for information about setting passwords on pages, groups, and the entire site).
Another way to access the upload form is to insert the markup "Attach:filename.ext" into an existing page, where filename.ext is the name of a new file to be uploaded. When the page is displayed, a '?-link' will be added to the end of the markup to take the author to the upload page. (See Uploads for syntax variations.)
By default, PmWiki will organize the uploaded files into separate subdirectories for each group. This can be changed by modifying the $UploadPrefixFmt variable. See Cookbook:UploadGroups for details.
## Versioning Uploaded Files
PmWiki does not manage versioning of uploaded files by default. However, by setting
$EnableUploadVersions = 1;
an administrator can have older versions of uploads preserved in the uploads directory along with the most recent version.
Uploads can be enabled only for specific groups or pages by using a group customization. Simply set $EnableUpload=1; for those groups or pages where uploading is to be enabled; alternately, set $EnableUpload=1; in the config.php file and then set $EnableUpload=0; in the per-group or per-page customization files where uploads are to be disabled.
### Restricting total upload size for a group or the whole wiki
Uploads can be restricted to an overall size limit for groups. In the group configuration file (i.e., local/Group.php), add the line:
$UploadPrefixQuota = 1000000;  # limit group uploads to 1000KB (1MB)
This will limit the total size of uploads for that group to 1000KB --any upload that pushes the total over the limit will be rejected with an error message. This value defaults to zero (unlimited).
$UploadDirQuota = 10000000;  # limit total uploads to 10000KB (10MB)
This will limit the total size of uploads for the whole wiki to 10000KB -- any upload that pushes the total over the limit will be rejected with an error message. This value defaults to zero (unlimited).
### Restricting uploaded file types and sizes
The upload script performs a number of verifications on an uploaded file before storing it in the upload directory. The basic verifications are described below.
• filenames: the name for the uploaded file can contain only letters, digits, underscores, hyphens, spaces, and periods, and the name must begin and end with a letter or digit.
• file extension: only files with approved extensions such as ".gif", ".jpeg", ".doc", etc. are allowed to be uploaded to the web server. This is vitally important for server security, since the web server might attempt to execute or specially process files with extensions like ".php", ".cgi", etc.
• file size: by default all uploads are limited to 50K bytes, as specified by the $UploadMaxSize variable. Thus, to limit all uploads to 100KB, simply specify a new value for $UploadMaxSize in config.php:
$UploadMaxSize = 100000;
However, the default maximum file size can also be specified for each type of file uploaded. Thus, an administrator can restrict ".gif" and ".jpeg" files to 20K, ".doc" files to 200K, and all others to the size given by $UploadMaxSize. The $UploadExtSize array is used to determine which file extensions are valid and the maximum upload size (in bytes) for each file type. For example:
$UploadExtSize['gif'] = 20000;  # limit .gif files to 20KB
Setting an entry to zero disables file uploads of that type altogether:
$UploadExtSize['zip'] = 0;  # disallow .zip files
$UploadExtSize[''] = 0;  # disallow files with no extension
You can limit which types of files are uploadable by disabling all defaults and specifying only desired types. Setting the variable $UploadMaxSize to zero will disable all default file types. Individual file types may then be enabled by setting their maximum size with the variable $UploadExtSize.
# turns off all upload extensions
$UploadMaxSize = 0;
$aSize = 100000; // 100 KB file size limitation
$UploadExtSize['jpg'] = $aSize;
$UploadExtSize['gif'] = $aSize;
$UploadExtSize['png'] = $aSize;
### Note: Files with multiple extensions
Some installations with the Apache server will try to execute a file whose name contains ".php", ".pl" or ".cgi" even if it isn't the last part of the filename. For example, a file named "test.php.txt" may be executed. To disallow such files from being uploaded, add a line like this to config.php:
$UploadBlacklist = array('.php', '.pl', '.cgi');
To add a new extension to the list of allowed upload types, add a line like the following to a local customization file:
$UploadExts['ext'] = 'content-type';
where ext is the extension to be added, and content-type is the "MIME type", or content-type (which you may find here or on the lower part of this page), to be used for files with that extension. For example, to add the 'dxf' extension with a Content-Type of 'image/x-dxf', place the line:
$UploadExts['dxf'] = 'image/x-dxf';
Each entry in $UploadExts needs to be the extension and the mime-type associated with that extension, thus: $UploadExts = array(
'gif' => 'image/gif',
'jpeg' => 'image/jpeg',
'jpg' => 'image/jpeg',
'png' => 'image/png',
'xxx' => 'yyyy/zzz'
);
## Other file size limits
There are other factors involved that affect upload file sizes. In Apache 2.0, there is a LimitRequestBody directive that controls the maximum size of anything that is posted (including file uploads). Apache has this defaulted to unlimited size. However, some Linux distributions (e.g., Red Hat Linux) limit postings to 512K so this may need to be changed or increased. (Normally these settings are in an httpd.conf configuration file or in a file in /etc/httpd/conf.d.)
Problem noted on Red Hat 8.0/9.0 with Apache 2.0.x, the error "Requested content-length of 670955 is larger than the configured limit of 524288" was occurring under Apache and a "Page not found" would appear in the browser. Trying the above settings made no change with PHP, but on Red Hat 8.0/9.0 there is an additional PHP config file, /etc/httpd/conf.d/php.conf, and increasing the number on the line "LimitRequestBody 524288" solves the issue.
PHP itself has two limits on file uploads (usually located in /etc/php.ini). The first is the upload_max_filesize parameter, which is set to 2MB by default. The second is post_max_size, which is set to 6MB by default.
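For reference, both limits are ordinary php.ini directives; raising them might look like this (the values are only an example, and post_max_size should stay at least as large as upload_max_filesize):
upload_max_filesize = 8M
post_max_size = 10M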
With these variables in place -- PmWiki's maximum file size, Apache's request-size limit, and the PHP file size parameters -- the maximum uploaded file size will be the smallest of the three.
Setting a read password for pages (and groups) will prevent an attached file from being seen or accessed through the page, but to prevent direct access to the file location (the uploads/ directory) one can do the following:
• In local/config.php set $EnableDirectDownload = 0;
• If you use per-group upload directories (the PmWiki default, see $UploadPrefixFmt), add to config.php:
$EnableUploadGroupAuth = 1;
• Deny public access to the uploads/ directory by moving it out of the html/ or public_html/ directory tree, or through a .htaccess file.
## Other notes
• If uploads don't seem to work, make sure that your PHP installation allows uploads. The php.ini file (usually /etc/php.ini or /usr/local/lib/php.ini) should have:
file_uploads = On
• Another source of error in the php.ini file is an undefined upload_tmp_dir. Just set this variable to your temp directory, e.g.:
upload_tmp_dir = /tmp
Note that if you change these values, httpd must generally be restarted. Another way to check if uploads are allowed by the server is to set $EnableDiag to 1 in config.php, and set ?action=phpinfo on a URL. The "file_uploads" variable must have a value of 1 (if it says "no value", that means it's off).
Here's an example of what to add to your local/config.php file to disable uploading of .zip files, or of files with no extension:
$UploadExtSize['zip'] = 0; # Disallow uploading .zip files$UploadExtSize[''] = 0; # Disallow files with no extension
How do I attach uploads to individual pages or the entire site, instead of organizing them by wiki group?
Use the $UploadPrefixFmt variable (see also the Cookbook:UploadGroups recipe).
$UploadPrefixFmt = '/$FullName';     # per-page, in Group.Name directories
$UploadPrefixFmt = '/$Group/$Name';  # per-page, in Group directories with Name subdirectories
$UploadPrefixFmt = '';               # site-wide
For $UploadDirQuota - can you provide some units and numbers? Is the specification in bytes or bits? What is the number for 100K? 1 Meg? 1 Gig? 1 Terabyte?
Units are in bytes.
$UploadDirQuota = 100*1024; # limit uploads to 100KiB $UploadDirQuota = 1000*1024; # limit uploads to 1000KiB
$UploadDirQuota = 1024*1024; # limit uploads to 1MiB $UploadDirQuota = 25*1024*1024; # limit uploads to 25MiB
$UploadDirQuota = 2*1024*1024*1024;  # limit uploads to 2GiB
Is there a way to allow file names with Unicode or additional characters?
Yes, see $UploadNameChars
Where is the list of attachments stored?
It is generated on the fly by the (:attachlist:) markup.
The Fricke involution $w_N$ is the involution of modular curve $X_{0}(N)$ given by $z\mapsto \frac{-1}{Nz}$.
The Fricke involution is represented by the matrix $W_N=\begin{pmatrix}0&-1\\N&0\end{pmatrix}$; this matrix normalizes the group $\Gamma_0(N)$ and therefore induces an (involutive) operator on the space of cusp forms $S_k(\Gamma_0(N))$ of weight $k$ with trivial character.
The Fricke involution is the product of all the Atkin-Lehner involutions $W_Q$ for $Q \parallel N$. As a consequence, forms which are eigenforms for the Hecke operators are also eigenforms for $W_N$. Note: the Fricke involution also acts on some spaces with non-trivial character but in those cases it does not commute with all Hecke operators.
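In terms of the weight-$k$ slash action, and with the $\det^{k/2}$ normalization (one common convention), the induced operator can be written as $(f\mid_k W_N)(z) = N^{-k/2}\,z^{-k}\,f\!\left(\frac{-1}{Nz}\right)$ for $f\in S_k(\Gamma_0(N))$.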
# Bounding the modified Bessel function of the first kind
I'm looking for an upper bound for the modified Bessel function of the first kind of a positive real argument. It seems that it satisfies the inequality: $$I_{n}(x)\leqslant \frac{x^{n}}{2^{n}n!}e^{x}$$ But I'm not able to prove this.
I can prove it for $0 \leq x \leq 4$ from the power series. Would that be useful to you? – Antonio Vargas Jun 9 '13 at 23:22
How did you get this bound? – Mhenni Benghorbal Jun 10 '13 at 0:20
It was proved by Yudell L. Luke in 1972 that
$$1 < \Gamma(\nu+1)\left(\frac{2}{x}\right)^\nu I_\nu(x) < \cosh x$$
for $x > 0$ and $\nu > -1/2$. This implies your inequality since
$$\cosh x - e^x = -\sinh x < 0$$
for $x > 0$ and hence
$$\cosh x < e^x$$
for $x > 0$.
Yudell L. Luke, Inequalities for generalized hypergeometric functions, Journal of Approximation Theory, Volume 5, Issue 1, January 1972, pp. 41–65.
(Link to the article on ScienceDirect)
# Cauchy sequence
This article/section deals with mathematical concepts appropriate for a student in mid to late high school.
The reader should be familiar with the material in the Limit (mathematics) page.
A Cauchy sequence (pronounced CO-she) is an infinite sequence that converges in a particular way. This type of convergence has a far-reaching significance in mathematics. Cauchy sequences are named after the French mathematician Augustin Cauchy (1789-1857).
There is an extremely profound aspect of convergent sequences. A sequence of numbers in some set might converge to a number not in that set. The famous example of this is that a sequence of rationals might converge, but not to a rational number. For example, the sequence
1.4
1.41
1.414
1.4142
1.41421
consists only of rational numbers, but it converges to $\sqrt{2}\,$, which is not a rational number. (See real number for an outline of the proof of this.)
The sequence given above was created by a computer, and it could be argued that we haven't really exhibited the sequence. But we can put such a sequence on a firm theoretical footing by using the Newton-Raphson iteration. This would give us
$A_0 = 1\,$
$A_{n+1} = \frac{1}{2}(A_n + 2/A_n)\,$
so that
$A_1= 3/2 = 1.5\,$
$A_2= 17/12 = 1.4166666...\,$
...
These aren't the same as the sequence given previously, but they are all rational numbers, and they converge to $\sqrt{2}\,$.
So if we lived in a world in which we knew about rational numbers but had never heard of the real numbers (the ancient Greeks sort of had this problem) we wouldn't know what to do about this. Recall that, for a sequence $(a_n)\,$ to converge to a number A, that is
$\lim_{n\to \infty}a_n = A\,$
we would need to use the definition of a limit—we would need a number A such that, for every ε > 0, there is an integer M such that, whenever $n > M, |a_n-A| < \varepsilon\,$.
There is no such rational number A.
But there is clearly a sense in which $(a_n)\,$ converge. The definition of Cauchy convergence is this:
A sequence $(a_n)\,$ converges in the sense of Cauchy (or is a Cauchy sequence) if, for every ε > 0, there is an integer M such that any two sequence elements that are both beyond M are within ε of each other.
Whenever n > M and m > M, $|a_n-a_m| < \varepsilon\,$.
Note that there is no reference to the mysterious number A—the convergence is defined purely in terms of the sequence elements being close to each other. The example sequence given above can be shown to be a Cauchy sequence.
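For example, any two terms $a_n\,$ and $a_m\,$ of that sequence with $n > M\,$ and $m > M\,$ agree in at least their first M decimal places, so $|a_n-a_m| < 10^{-M}\,$; given any ε > 0, it suffices to choose M large enough that $10^{-M} < \varepsilon\,$.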
## Construction of the Real Numbers
What we did above effectively defined $\sqrt{2}\,$ in terms of the rationals, by saying
"The square root of 2 is whatever the Cauchy sequence given above converges to."
even though that isn't a "number" according to our limited (rationals-only) understanding of what a number is.
The real numbers can be defined this way, by saying that a real number is defined to be a Cauchy sequence of rational numbers.
There are many details that we won't work out here; among them are:
• There are different Cauchy sequences that converge to the same thing; we gave two sequences above that converged to $\sqrt{2}\,$. So a real number is actually an "equivalence class" of Cauchy sequences, under a carefully defined equivalence. This is a bit tricky.
• We have to show how to add, subtract, multiply, and divide Cauchy sequences. This is a bit tricky.
• We have to give the Cauchy sequences corresponding to rational numbers. This is easy—5/12 becomes (5/12, 5/12, 5/12, ...).
Once we have done that, the payoff is enormous. We have defined an extension to the rationals that is metrically complete—that extension of the rationals is the real numbers. Metrically complete means that every Cauchy sequence made from the set converges to an element which is itself in the set. The reals are the metric completion of the rationals.
The use of Cauchy sequences is one of the two famous ways of defining the real numbers, that is, completing the rationals. The other method is Dedekind cuts.
# Configuring libcork
#include <libcork/config.h>
Several libcork features have different implementations on different platforms. Since we want libcork to be easily embeddable into projects with a wide range of build systems, we try to autodetect which implementations to use, using only the C preprocessor and the predefined macros that are available on the current system.
This module provides a layer of indirection, with all of the preprocessor-based autodetection in one place. This module’s task is to define a collection of libcork-specific configuration macros, which all other libcork modules will use to select which implementation to use.
This design also lets you skip the autodetection, and provide values for the configuration macros directly. This is especially useful if you’re embedding libcork into another project, and already have a configure step in your build system that performs platform detection. See CORK_CONFIG_SKIP_AUTODETECT for details.
Note
The autodetection logic is almost certainly incomplete. If you need to port libcork to another platform, this is where an important chunk of edits will take place. Patches are welcome!
## Configuration macros
This section lists all of the macros that are defined by libcork’s autodetection logic. Other libcork modules will use the values of these macros to choose among the possible implementations.
CORK_CONFIG_VERSION_MAJOR
CORK_CONFIG_VERSION_MINOR
CORK_CONFIG_VERSION_PATCH
The libcork library version, with each part of the version number separated out into separate macros.
CORK_CONFIG_VERSION_STRING
The libcork library version, encoded as a single string.
CORK_CONFIG_REVISION
The git SHA-1 commit identifier of the libcork version that you’re using.
CORK_CONFIG_ARCH_X86
CORK_CONFIG_ARCH_X64
CORK_CONFIG_ARCH_PPC
Exactly one of these macros should be defined to 1 to indicate the architecture of the current platform. All of the other macros should be defined to 0 or left undefined. The macros correspond to the following architectures:
| Macro suffix | Architecture |
| --- | --- |
| X86 | 32-bit Intel (386 or greater) |
| X64 | 64-bit Intel/AMD (AMD64/EM64T, not IA-64) |
| PPC | 32-bit PowerPC |
CORK_CONFIG_HAVE_GCC_ASM
Whether the GCC inline assembler syntax is available. (This doesn’t imply that the compiler is specifically GCC.) Should be defined to 0 or 1.
CORK_CONFIG_HAVE_GCC_ATTRIBUTES
Whether the GCC-style syntax for compiler attributes is available. (This doesn’t imply that the compiler is specifically GCC.) Should be defined to 0 or 1.
CORK_CONFIG_HAVE_GCC_ATOMICS
Whether GCC-style atomic intrinsics are available. (This doesn’t imply that the compiler is specifically GCC.) Should be defined to 0 or 1.
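To illustrate how a feature macro like this is typically consumed, here is a hypothetical caller (a sketch, not taken from libcork's own sources) that selects a GCC atomic intrinsic when it was detected and otherwise falls back to plain code:

```c
/* Hypothetical consumer code, for illustration only; not libcork's own
 * implementation.  The fallback branch is NOT thread-safe; it merely shows
 * how the configuration macro steers implementation selection. */
#include <libcork/config.h>

static int
counter_increment(volatile int *counter)
{
#if CORK_CONFIG_HAVE_GCC_ATOMICS
    return __sync_add_and_fetch(counter, 1);   /* GCC atomic intrinsic */
#else
    return ++(*counter);                       /* single-threaded fallback */
#endif
}
```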
CORK_CONFIG_HAVE_GCC_INT128
Whether the GCC-style 128-bit integer types (__int128 and unsigned __int128) are available. (This doesn’t imply that the compiler is specifically GCC.) Should be defined to 0 or 1.
CORK_CONFIG_HAVE_GCC_MODE_ATTRIBUTE
Whether GCC-style machine modes are available. (This doesn’t imply that the compiler is specifically GCC.) Should be defined to 0 or 1.
CORK_CONFIG_HAVE_GCC_STATEMENT_EXPRS
Whether GCC-style statement expressions are available. (This doesn’t imply that the compiler is specifically GCC.) Should be defined to 0 or 1.
CORK_CONFIG_HAVE_REALLOCF
Whether this platform defines a reallocf function in stdlib.h. reallocf is a BSD extension to the standard realloc function that frees the existing pointer if a reallocation fails. If this function exists, we can use it to implement cork_realloc().
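For context, a reallocf-style function frees its argument when the reallocation fails, so the caller cannot leak the original block on the error path. A minimal sketch of that behaviour in portable C (an illustration of the semantics only, not libcork's actual cork_realloc() implementation):

```c
/* Sketch of reallocf-like semantics for platforms where
 * CORK_CONFIG_HAVE_REALLOCF is 0.  Illustrative only.
 * (A production version would need to treat new_size == 0 specially.) */
#include <stdlib.h>

static void *
realloc_or_free(void *ptr, size_t new_size)
{
    void *result = realloc(ptr, new_size);
    if (result == NULL) {
        free(ptr);   /* reallocation failed: release the original block */
    }
    return result;
}
```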
CORK_CONFIG_IS_BIG_ENDIAN
CORK_CONFIG_IS_LITTLE_ENDIAN
Whether the current system is big-endian or little-endian. Exactly one of these macros should be defined to 1; the other should be defined to 0.
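As a hypothetical example of how the endianness macros might be consumed (illustration only; libcork's real byte-swapping helpers may look different), a caller could convert a 32-bit value to big-endian byte order like this:

```c
/* Hypothetical consumer code: produce a big-endian (network order) 32-bit
 * value using the detected endianness.  Illustration only. */
#include <libcork/config.h>
#include <stdint.h>

static inline uint32_t
to_big_endian_32(uint32_t v)
{
#if CORK_CONFIG_IS_BIG_ENDIAN
    return v;                                  /* already big-endian */
#else
    return ((v & 0x000000ffu) << 24) |
           ((v & 0x0000ff00u) <<  8) |
           ((v & 0x00ff0000u) >>  8) |
           ((v & 0xff000000u) >> 24);
#endif
}
```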
## Skipping autodetection
CORK_CONFIG_SKIP_AUTODETECT
If you want to skip libcork’s autodetection logic, then you are responsible for providing the appropriate values for all of the macros defined in Configuration macros. To do this, have your build system define this macro, with a value of 1. This will override the default value of 0 provided in the libcork/config/config.h header file.
Then, create (or have your build system create) a libcork/config/custom.h header file. You can place this file anywhere in your header search path. We will load that file instead of libcork’s autodetection logic. Place the appropriate definitions for each of the configuration macros into this file. If needed, you can generate this file as part of the configure step of your build system; the only requirement is that it’s available once you start compiling the libcork source files. |
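A minimal sketch of what such a libcork/config/custom.h might contain, assuming a little-endian x86-64 target with a GCC-compatible toolchain. Every value below is a placeholder that your build system must fill in for the real target; in particular, the version and revision strings are stand-ins, not actual libcork version numbers:

```c
/* libcork/config/custom.h -- hand-written configuration (sketch only).
 * All values are placeholders for an assumed x86-64, little-endian,
 * GCC-compatible platform; adjust them for your actual target. */
#ifndef LIBCORK_CONFIG_CUSTOM_H
#define LIBCORK_CONFIG_CUSTOM_H

#define CORK_CONFIG_VERSION_MAJOR            0          /* placeholder */
#define CORK_CONFIG_VERSION_MINOR            0          /* placeholder */
#define CORK_CONFIG_VERSION_PATCH            0          /* placeholder */
#define CORK_CONFIG_VERSION_STRING           "0.0.0"    /* placeholder */
#define CORK_CONFIG_REVISION                 "unknown"  /* placeholder */

#define CORK_CONFIG_ARCH_X86                 0
#define CORK_CONFIG_ARCH_X64                 1
#define CORK_CONFIG_ARCH_PPC                 0

#define CORK_CONFIG_IS_BIG_ENDIAN            0
#define CORK_CONFIG_IS_LITTLE_ENDIAN         1

#define CORK_CONFIG_HAVE_GCC_ASM             1
#define CORK_CONFIG_HAVE_GCC_ATTRIBUTES      1
#define CORK_CONFIG_HAVE_GCC_ATOMICS         1
#define CORK_CONFIG_HAVE_GCC_INT128          1
#define CORK_CONFIG_HAVE_GCC_MODE_ATTRIBUTE  1
#define CORK_CONFIG_HAVE_GCC_STATEMENT_EXPRS 1
#define CORK_CONFIG_HAVE_REALLOCF            0

#endif /* LIBCORK_CONFIG_CUSTOM_H */
```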
Question: coef in dmrcate
2.2 years ago by
yoursbassanio0 wrote:
Hi All
What should the coef be in cpg.annotate if my interest is "Condition"? In bumphunting the coef is 2, as it refers to the column number of the design matrix. But in the dmrcate manual I saw it described as an index. So is it that my coef=1?
designMatrix <- model.matrix(~Condition+T1+T2+T3+T4)
designMatrix
(Intercept) Condition T1 T2 T3 T4
1 1 0 8.90 0.87 0.5 1.40
Looking forward
MVinu
modified 2.2 years ago by James W. MacDonald50k • written 2.2 years ago by yoursbassanio0
2.2 years ago by
James W. MacDonald50k wrote:
I am not sure what you mean by 'I saw it to be index', but you can be sure that the dmrcate manual doesn't indicate that you should use the intercept column. The coef argument is how you specify which coefficient you are interested in testing, so in your case it would be coef=2.
Thanks James for the answer. In the dmrcate manual, under cpg.annotate, it is described as follows: "coef: The column index in design corresponding to the phenotype comparison."
# sodium and alcohol reaction
This page describes the reaction between alcohols and metallic sodium, and introduces the properties of the alkoxides that are formed. We will look at the reaction between sodium and ethanol as being typical, but you could substitute any other alcohol and the reaction would be essentially the same.

If a small piece of sodium is dropped into ethanol, it reacts steadily to give off bubbles of hydrogen gas and leaves a colourless solution of sodium ethoxide, CH3CH2ONa. Although this may at first appear to be something new and complicated, it is essentially the same reaction as the familiar one between sodium and water, only gentler:

$2CH_3CH_2OH_{(l)} + 2Na_{(s)} \rightarrow 2CH_3CH_2O^-_{(aq)} + 2Na^+_{(aq)} + H_{2(g)}$

$2H_2O_{(l)} + 2Na_{(s)} \rightarrow 2OH^-_{(aq)} + 2Na^+_{(aq)} + H_{2(g)}$

Both reactions give off hydrogen gas, but the reaction with ethanol is slower than the reaction with water. The speed also depends on the length of the carbon chain to which the -OH group is attached: methanol is very reactive towards sodium, while ethanol reacts more slowly (though still briskly). Because the reaction is relatively gentle, ethanol is used to dissolve and dispose of small quantities of waste sodium; the resulting solution can be washed away without problems, provided you remember that sodium ethoxide is strongly alkaline (see below).

If the solution is evaporated carefully to dryness, the sodium ethoxide is left as a white solid. Sodium ethoxide is known as an alkoxide. It is just like sodium hydroxide, except that the hydrogen has been replaced by an ethyl group: sodium hydroxide contains OH- ions, whereas sodium ethoxide contains CH3CH2O- ions. (The ethoxide formula is written with the oxygen on the right, unlike the hydroxide ion, simply as a matter of clarity.) More generally, an alkoxide ion can be written RO-, where R is an alkyl group such as methyl, ethyl or propyl; alcohols react with alkali metals such as sodium to form a salt (a sodium alkoxide) and hydrogen gas.

If you add water to sodium ethoxide, it dissolves to give a colourless solution with a high pH, typically pH 14. The solution is strongly alkaline because ethoxide ions are Brønsted-Lowry bases and remove hydrogen ions from water molecules to produce hydroxide ions:

$CH_3CH_2O^- + H_2O \rightarrow CH_3CH_2OH + OH^-$

Like hydroxide ions, ethoxide ions are good nucleophiles and bases. If you have looked at the chemistry of halogenoalkanes, you may be aware that there is a competition between substitution and elimination when they react with hydroxide ions; in the substitution reaction, the hydroxide ion replaces the halogen atom. The same competition occurs in their reactions with ethoxide ions, and once again we will take the ethoxide ions in sodium ethoxide as typical. The substitution reaction between an alkoxide ion and a halogenoalkane (the Williamson ether synthesis) is a good method of making ethers in the lab, for example:

$CH_3CH_2CH_2O^- + CH_3CH_2Br \rightarrow CH_3CH_2CH_2OCH_2CH_3 + Br^-$

This particular product is 1-ethoxypropane (ethyl propyl ether). Another substitution reaction involving the -OH group is the isotopic exchange that occurs on mixing an alcohol with deuterium oxide (heavy water).

Finally, a note on oxidation. Simple primary and secondary alcohols in the gaseous state lose hydrogen when exposed to a hot copper surface; this catalytic dehydrogenation produces aldehydes (from primary alcohols) and ketones (from secondary alcohols). Since the carbon atom bonded to the oxygen is oxidised, such alcohol-to-carbonyl conversions are generally referred to as oxidation reactions. Acidified potassium dichromate(VI) solution can also be used: the oxidising agent removes the hydrogen from the -OH group and a hydrogen from the carbon atom attached to it, so primary alcohols give aldehydes and then carboxylic acids, secondary alcohols give ketones, and tertiary alcohols resist oxidation.
# Liability (financial accounting)
In financial accounting, a liability is defined as the future sacrifices of economic benefits that the entity is obliged to make to other entities as a result of past transactions or other past events,[1] the settlement of which may result in the transfer or use of assets, provision of services or other yielding of economic benefits in the future.
A liability is defined by the following characteristics:
• Any type of borrowing from persons or banks, for improving a business or personal income, that is payable over a short or long period of time;
• A duty or responsibility to others that entails settlement by future transfer or use of assets, provision of services, or other transaction yielding an economic benefit, at a specified or determinable date, on occurrence of a specified event, or on demand;
• A duty or responsibility that obligates the entity to another, leaving it little or no discretion to avoid settlement; and,
• A transaction or event obligating the entity that has already occurred
Liabilities in financial accounting need not be legally enforceable; but can be based on equitable obligations or constructive obligations. An equitable obligation is a duty based on ethical or moral considerations. A constructive obligation is an obligation that is implied by a set of circumstances in a particular situation, as opposed to a contractually based obligation.
The accounting equation relates assets, liabilities, and owner's equity:
$\text{Assets} = \text{Liabilities} + \text{Owner's Equity}$
The accounting equation is the mathematical structure of the balance sheet.
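For example, with purely hypothetical figures: a firm holding assets of 100,000 that are financed by liabilities of 60,000 must report owner's equity of 40,000, since

$\text{Owner's Equity} = \text{Assets} - \text{Liabilities} = 100{,}000 - 60{,}000 = 40{,}000$.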
Probably the most accepted accounting definition of liability is the one used by the International Accounting Standards Board (IASB). The following is a quotation from IFRS Framework:
A liability is a present obligation of the enterprise arising from past events, the settlement of which is expected to result in an outflow from the enterprise of resources embodying economic benefits
— F.49(b)
Regulations as to the recognition of liabilities are different all over the world, but are roughly similar to those of the IASB.
Examples of types of liabilities include: money owing on a loan, money owing on a mortgage, or an IOU.
Figure: Liabilities of sectors of the US economy, 1945-2017, based on flow of funds statistics of the Federal Reserve System.
Liabilities are debts and obligations of the business; they represent creditors' claims on business assets.
## Classification
Liabilities are reported on a balance sheet and are usually divided into two categories:
• Current liabilities: liabilities that are reasonably expected to be settled within one year (or within the firm's operating cycle, if longer); and
• Long-term (non-current) liabilities: liabilities that are reasonably expected not to be settled within one year.
Liabilities of uncertain value or timing are called provisions.
## Example
When a company deposits cash with a bank, the bank records a liability on its balance sheet, representing the obligation to repay the depositor, usually on demand. Simultaneously, in accordance with the double-entry principle, the bank records the cash, itself, as an asset. The company, on the other hand, upon depositing the cash with the bank, records a decrease in its cash and a corresponding increase in its bank deposits (an asset).
## Debits and Credits
A debit either increases an asset or decreases a liability; a credit either decreases an asset or increases a liability. According to the principle of double-entry, every financial transaction corresponds to both a debit and a credit.
### Example
When cash is deposited in a bank, the bank is said to "debit" its cash account, on the asset side, and "credit" its deposits account, on the liabilities side. In this case, the bank is debiting an asset and crediting a liability, which means that both increase.
When cash is withdrawn from a bank, the opposite happens: the bank "credits" its cash account and "debits" its deposits account. In this case, the bank is crediting an asset and debiting a liability, which means that both decrease.
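A worked illustration with a hypothetical amount: if a customer deposits 500 in cash, the bank records a debit of 500 to Cash (an asset) and a credit of 500 to Deposits (a liability), so both an asset and a liability increase by 500 and the accounting equation stays in balance.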
## References
1. ^ "Definition and Recognition of the Elements of Financial Statements" (PDF). Australian Accounting Standards Board. Retrieved 31 March 2015.
Accounts payable
Accounts payable (AP) is money owed by a business to its suppliers shown as a liability on a company's balance sheet. It is distinct from notes payable liabilities, which are debts created by formal legal instrument documents.
Accrual
Accrual (accumulation) of something is, in finance, the adding together of interest or different investments over a period of time. It holds specific meanings in accounting, where it can refer to accounts on a balance sheet that represent liabilities and non-cash-based assets used in accrual-based accounting. These types of accounts include, among others, accounts payable, accounts receivable, goodwill, deferred tax liability and future interest expense.
Accrued interest
In finance, accrued interest is the interest on a bond or loan that has accumulated since the principal investment, or since the previous coupon payment if there has been one already.
For a financial instrument such as a bond, interest is calculated and paid in set intervals (for instance annually or semi-annually). Ownership of bonds/loans can be transferred between different investors not just when coupons are paid, but at any time in-between coupons. Accrued interest addresses the problem regarding the ownership of the next coupon if the bond is sold in the period between coupons: Only the current owner can receive the coupon payment, but the investor who sold the bond must be compensated for the period of time for which he or she owned the bond. In other words, the previous owner must be paid the interest that accrued before the sale.
Accrued liabilities
Accrued liabilities are liabilities that reflect expenses that have not yet been paid or logged under accounts payable during an accounting period; in other words, a company's obligation to pay for goods and services that have been provided for which invoices have not yet been received. Examples would include accrued wages payable, accrued sales tax payable, and accrued rent payable.
There are two general types of Accrued Liabilities:
Routine and recurring
Infrequent or non-routine
Routine and recurring Accrued Liabilities are types of transactions that occur as a normal, daily part of the business cycle. Infrequent or non-routine Accrued Liabilities are transactions that do not occur as a daily part of the business cycle, but do happen from time to time.
Asset/liability modeling
Asset/liability modelling is the process used to manage the business and financial objectives of a financial institution or an individual through an assessment of the portfolio assets and liabilities in an integrated manner. The process is characterized by an on-going review, modification, and revision of asset and liability management strategies so that sensitivity to interest rate changes are confined within acceptable tolerance levels. There are different models used and some use different elements, according to specific needs and contexts. For instance, an individual or an organization may keep parts of the ALM process and outsource the modeling function or adapt the model according to the requirements and capabilities of relevant institutions such as banks, which often have their in-house modeling process. For pensioners, asset/liability modeling is all about determining the best allocation for specific situations. There is a vast array of models available today for practical asset and liability modeling and these have been the subject of several research and studies.
Asset and liability management
Asset and liability management (often abbreviated ALM) is the practice of managing financial risks that arise due to mismatches between the assets and liabilities as part of an investment strategy in financial accounting.
ALM sits between risk management and strategic planning. It is focused on a long-term perspective rather than mitigating immediate risks and is a process of maximising assets to meet complex liabilities that may increase profitability.
ALM includes the allocation and management of assets, equity, interest rate and credit risk management including risk overlays, and the calibration of company-wide tools within these risk frameworks for optimisation and management in the local regulatory and capital environment.
Often an ALM approach passively matches assets against liabilities (fully hedged) and leaves surplus to be actively managed.
Asset–liability mismatch
In finance, an asset–liability mismatch occurs when the financial terms of an institution's assets and liabilities do not correspond. Several types of mismatches are possible.
For example, a bank that chose to borrow entirely in US dollars and lend in Russian rubles would have a significant currency mismatch: if the value of the ruble were to fall dramatically, the bank would lose money. In extreme cases, such movements in the value of the assets and liabilities could lead to bankruptcy, liquidity problems and wealth transfer.
A bank could also have substantial long-term assets (such as fixed-rate mortgages) funded by short-term liabilities, such as deposits. If short-term interest rates rise, the short-term liabilities re-price at maturity, while the yield on the longer-term, fixed-rate assets remains unchanged. Income from the longer-term assets remains unchanged, while the cost of the newly re-priced liabilities funding these assets increases. This is sometimes called a maturity mismatch, which can be measured by the duration gap.
An interest rate mismatch occurs when a bank borrows at one interest rate but lends at another. For example, a bank might borrow money by issuing floating interest rate bonds, but lend money with fixed-rate mortgages. If interest rates rise, the bank must increase the interest it pays to its bondholders, even though the interest it earns on its mortgages has not increased.
Mismatches are handled by asset liability management.
Asset–liability mismatches are important to insurance companies and various pension plans, which may have long-term liabilities (promises to pay the insured or pension plan participants) that must be backed by assets. Choosing assets that are appropriately matched to their financial obligations is therefore an important part of their long-term strategy.
Few companies or financial institutions have perfect matches between their assets and liabilities. In particular, the mismatch between the maturities of banks' deposits and loans makes banks susceptible to bank runs. On the other hand, 'controlled' mismatch, such as between short-term deposits and somewhat longer-term, higher-interest loans to customers is central to many financial institutions' business model.
Asset–liability mismatches can be controlled, mitigated or hedged.
Collateral (finance)
In lending agreements, collateral is a borrower's pledge of specific property to a lender, to secure repayment of a loan. The collateral serves as a lender's protection against a borrower's default and so can be used to offset the loan if the borrower fails to pay the principal and interest satisfactorily under the terms of the lending agreement.
The protection that collateral provides generally allows lenders to offer a lower interest rate on loans that have collateral. The reduction in interest rate can be up to several percentage points, depending on the type and value of the collateral. For example, the interest rate (APR) on an unsecured loan is often much higher than on a secured loan or logbook loan, as the risk for the lender is then increased.
If a borrower defaults on a loan (due to insolvency or another event), that borrower loses the property pledged as collateral, with the lender then becoming the owner of the property. In a typical mortgage loan transaction, for instance, the real estate being acquired with the help of the loan serves as collateral. If the buyer fails to repay the loan according to the mortgage agreement, the lender can use the legal process of foreclosure to obtain ownership of the real estate. A pawnbroker is a common example of a business that may accept a wide range of items as collateral.
The type of the collateral may be restricted based on the type of the loan (as is the case with auto loans and mortgages); it also can be flexible, such as in the case of collateral-based personal loans.
Contingent liability
Contingent liabilities are liabilities that may be incurred by an entity depending on the outcome of an uncertain future event, such as the outcome of a pending lawsuit. These liabilities are recorded in a company's accounts and shown on the balance sheet only when the loss is both probable and reasonably estimable, as a 'contingency' or 'worst case' financial outcome. A footnote to the balance sheet may describe the nature and extent of the contingent liabilities. The likelihood of loss is described as probable, reasonably possible, or remote. The ability to estimate a loss is described as known, reasonably estimable, or not reasonably estimable. The loss may or may not occur.
Contingent liabilities shall be classified as:
a) claims against the company not acknowledged as debts;
b) guarantees;
c) other money for which the company is contingently liable.
Current liability
In accounting, current liabilities are often understood as all liabilities of the business that are to be settled in cash within the fiscal year or the operating cycle of a given firm, whichever period is longer.
A more complete definition is that current liabilities are obligations that will be settled by current assets or by the creation of new current liabilities. Accounts payable are due within 30 days, and are paid within 30 days, but do often run past 30 days or 60 days in some situations. The laws regarding late payment and claims for unpaid accounts payable are related to the issue of accounts payable. An operating cycle for a firm is the average time that is required to go from cash to cash in producing revenues. For example, accounts payable for goods, services or supplies that were purchased for use in the operation of the business and payable within a normal period would be current liabilities. Amounts listed on a balance sheet as accounts payable represent all bills payable to vendors of a company, whether the bills are less than 31 days old or more than 30 days old. Therefore, late payments are not disclosed on the balance sheet for accounts payable. There may be footnotes in audited financial statements regarding the age of accounts payable, but this is not common accounting practice. Lawsuits regarding accounts payable are required to be shown on audited financial statements, but this is not necessarily common accounting practice.
Bonds, mortgages and loans that are payable over a term exceeding one year would be fixed liabilities or long-term liabilities. However, the payments due on the long-term loans in the current fiscal year could be considered current liabilities if the amounts were material. Amounts due to lenders/ bankers are never shown as accounts payable/ trade accounts payable, but will show up on the balance sheet of a company under the major heading of current liabilities, and often under the sub-heading of other current liabilities, instead of accounts payable, which are due to vendors. Other current liabilities are due for payment according to the terms of the loan agreements, but when lender liabilities are shown as current vs. long term, they are due within the current fiscal year or earlier. Therefore, late payments from a previous fiscal year will carry over into the same position on the balance sheet as current liabilities which are not late in payment. There may be footnotes in audited financial statements regarding past due payments to lenders, but this is not common practice. Lawsuits regarding loans payable are required to be shown on audited financial statements, but this is not necessarily common accounting practice.
The proper classification of liabilities provides useful information to investors and other users of the financial statements. It may be regarded as essential for allowing outsiders to consider a true picture of an organization's fiscal health.
One application is in the current ratio, defined as the firm's current assets divided by its current liabilities. A ratio higher than one means that current assets, if they can all be converted to cash, are more than sufficient to pay off current obligations. All other things equal, higher values of this ratio imply that a firm is more easily able to meet its obligations in the coming year. The difference between current assets and current liability is referred to as trade working capital.
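As a worked illustration with hypothetical figures: a firm with current assets of 150,000 and current liabilities of 100,000 has

$\text{Current ratio} = \dfrac{\text{Current assets}}{\text{Current liabilities}} = \dfrac{150{,}000}{100{,}000} = 1.5$,

and trade working capital of $150{,}000 - 100{,}000 = 50{,}000$.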
Domestic liability dollarization
Domestic liability dollarization (DLD) refers to the denomination of banking system deposits and lending in a currency other than that of the country in which they are held. DLD does not refer exclusively to denomination in US dollars, as DLD encompasses accounts denominated in internationally traded "hard" currencies such as the British pound sterling, the Swiss franc, the Japanese yen, and the Euro (and some of its predecessors, particularly the Deutschmark).
Fixed liability
Fixed liabilities are debts, such as bonds, mortgages or loans, that are payable over a term exceeding one year. They are better known as non-current liabilities or long-term liabilities. Debts or liabilities due within one year are known as current liabilities.
IAS 37
International Accounting Standard 37: Provisions, Contingent Liabilities and Contingent Assets, or IAS 37, is an international financial reporting standard adopted by the International Accounting Standards Board (IASB). It sets out the accounting and disclosure requirements for provisions, contingent liabilities and contingent assets, with several exceptions, establishing the important principle that a provision is to be recognized only when the entity has a liability. IAS 37 was originally issued by the International Accounting Standards Committee in 1998, superseding IAS 10: Contingencies and Events Occurring after the Balance Sheet Date, and was adopted by the IASB in 2001. It was seen as an "important development" in accounting as it regulated the use of provisions, minimising their abuse such as in the case of big baths.
Liabilities subject to compromise
Liabilities subject to compromise refers to the debtors' liabilities, in the US, incurred before the start of Chapter 11 bankruptcy cases. This amount represents the debtors' estimate of known or potential pre-petition claims to be resolved in connection with the Chapter 11 cases. Such claims remain subject to future adjustments. Virtually all of the corporation's pre-petition debt is in default due to the filing and is included in Liabilities subject to compromise. Payment terms for liabilities subject to compromise are established as part of a plan of reorganization.
Liability-driven investment strategy
Liability-driven investment policies and asset management decisions are those largely determined by the sum of current and future liabilities attached to the investor, be it a household or an institution. As it purports to associate constantly both sides of the balance sheet in the investment process, it has been called a "holistic" investment methodology.
In essence, the liability-driven investment strategy (LDI) is an investment strategy of a company or individual based on the cash flows needed to fund future liabilities. It is sometimes referred to as a "dedicated portfolio" strategy. It differs from a “benchmark-driven” strategy, which is based on achieving better returns than an external index such as the S&P 500 or a combination of indices that invest in the same types of asset classes. LDI is designed for situations where future liabilities can be predicted with some degree of accuracy. For individuals, the classic example would be the stream of withdrawals from a retirement portfolio that a retiree will make to pay living expenses from the date of retirement to the date of death. For companies, the classic example would be a pension fund that must make future payouts to pensioners over their expected lifetimes (see below).
Long-term liabilities
Long-term liabilities, or non-current liabilities, are liabilities that are due beyond a year or the normal operation period of the company. The normal operation period is the amount of time it takes for a company to turn inventory into cash. On a classified balance sheet, liabilities are separated between current and long-term liabilities to help users assess the company's financial standing in short-term and long-term periods. Long-term liabilities give users more information about the long-term prosperity of the company, while current liabilities inform the user of debt that the company owes in the current period. On a balance sheet, accounts are listed in order of liquidity, so long-term liabilities come after current liabilities. In addition, the specific long-term liability accounts are listed on the balance sheet in order of liquidity. Therefore, an account due within eighteen months would be listed before an account due within twenty-four months. Examples of long-term liabilities are bonds payable, long-term loans, capital leases, pension liabilities, post-retirement healthcare liabilities, deferred compensation, deferred revenues, deferred income taxes, and derivative liabilities.
Provision (accounting)
In financial accounting, a provision is an account which records a present liability of an entity. The recording of the liability in the entity's balance sheet is matched to an appropriate expense account in the entity's income statement. The preceding is correct in IFRS. In U.S. GAAP, a provision is an expense. Thus, "Provision for Income Taxes" is an expense in U.S. GAAP but a liability in IFRS.
Sometimes in IFRS, but not in GAAP, the term reserve is used instead of provision. Such a use is, however, inconsistent with the terminology suggested by International Accounting Standards Board. The term "reserve" can be a confusing accounting term. In accounting, a reserve is always an account with a credit balance in the entity's Equity on the Balance Sheet, while to non-professionals it has the connotation of a pool of cash set aside to meet a future liability (a debit balance).
# Clarification on Recrystallization concept
Any help you could provide me would be greatly appreciated and I thank you for reading (even if you don't comment)
I am in an organic laboratory and, as a 3rd-year chemistry major, these concepts usually are pretty basic, but recrystallization is kind of leaving me scratching my head. When you take your compound with a small impurity in it, dissolve it in a solvent and heat it, I am just confused on this part:
What I understand is that the solubility constant changes with temperature, and in the case of organic compounds solubility almost always increases with increasing temperature. So, if the compound is put in a solvent at room temperature and is heated until it is fully dissolved (solubility constant increased), then it is cooled, the solubility constant decreases, and consequently solute starts precipitating (crystal formation). My only issue is: what is it that makes the impurity go away? Thinking through it here, I know I'm so close to the Ah-hah moment but as of right now I'm mentally blocked, I suppose.
• Wouldn't the impurities remain in the solution as the "pure" material precipitates? And you can separate via filtration. – 86BCP2432T Oct 6 '15 at 7:09
• I don't understand that part. If they existed as solids initially, how are we supposed to know that the impurity doesn't precipitate first? – joshuakatz Oct 6 '15 at 7:22
• Sorry, that was only one type of instance of how the impurity "goes away". Since you're playing with the solubility properties, then the impurity gets separated anyhow. Does this link help at all? wiredchemist.com/chemistry/instructional/laboratory-tutorials/… – 86BCP2432T Oct 6 '15 at 7:28
• Supposedly the impurity is present in minor amount, so it will not reach its own solubility limit and thus will not start precipitating. – Ivan Neretin Oct 6 '15 at 7:46
• On an unrelated note: Welcome to chemistry.SE! Feel free to take a tour of the site. Consult the help center for any questions on how it works! – Jan Oct 6 '15 at 13:03
You note that you are pretty close to the answer and you are. Let’s assume for a minute (I’m pretty sure the assumption is false, but it’ll get us a good way) that the solubility of the impurity is independent of the amount of desired product dissolved and vice-versa. Let’s also assume we have a mixture of $95~\%$ desired product and $5~\%$ side product which you want to purify by recrystallisation. The desired product has a solubility of $1\,\mathrm{\frac{mol}{l}}$ at $4\,\mathrm{^\circ C}$ and $10\,\mathrm{\frac{mol}{l}}$ at $80\,\mathrm{^\circ C}$; and the side product has the same solubility. And finally, let’s assume that you have $100\,\mathrm{mmol}$ total product (side and desired).
You then add $9.5\,\mathrm{ml}$ of your recrystallisation solvent and heat to $80\,\mathrm{^\circ C}$. That would be enough to dissolve both your $95\,\mathrm{mmol}$ desired product and your $5\,\mathrm{mmol}$ side product. You observe a clear (hopefully boiling because best solubility) solution.*
You then let your solution cool down and put it in the fridge overnight. Due to the decreased temperature, the solubility of both desired product and side product drop. The same volume of solution now dissolves only $9.5\,\mathrm{mmol}$ of both. Since the desired product is supersaturated ($95 > 9.5$), $85.5\,\mathrm{mmol}$ of the desired product will precipitate or crystallise. But the side product is still not supersaturated. We have $5\,\mathrm{mmol}$ of the side product dissolved, but the solvent could dissolve $9.5\,\mathrm{mmol}$. So all of the side product will remain in solution (ideally).
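To collect the arithmetic of this idealised example in one place: at $4\,\mathrm{^\circ C}$ the $9.5\,\mathrm{ml}$ of solvent can hold

$1\,\mathrm{\frac{mol}{l}} \times 0.0095\,\mathrm{l} = 9.5\,\mathrm{mmol}$

of each component, so $95 - 9.5 = 85.5\,\mathrm{mmol}$ of the desired product crystallises, while the $5\,\mathrm{mmol}$ of side product ($5 < 9.5$) stays dissolved.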
The next step is usually filtration and washing. The filtrate will now contain the saturated product solution, which is undersaturated with respect to the side product. The side product didn’t disappear, it merely remained in solution. Wash your pure product, record an NMR spectrum and rejoice upon its purity.
Of course, as I stated in the beginning, the assumption that the solubilities are independent is wrong. It also tends to be wrong to assume similar solubilities of desired product and impurity. One would attempt to select a solvent so that the product is not very soluble at low temperatures, but the impurity ideally is. However, the basic principle remains valid: the desired product will form a supersaturated solution and precipitate/crystallise, while whichever impurity you have should still be soluble enough not to co-precipitate.
*: A note on how to do it practically: Usually, you would put your crude product into a flask into an oil bath with a refluxing condenser on top. You would then slowly, dropwise add solvent through the condenser until the solution is boiling and everything is dissolved. One wouldn’t go about saying ‘I should have $x\,\mathrm{mmol}$ so I should need $y\,\mathrm{ml}$ of solvent.’
First of all, I am not sure what you mean by "solubility constant". Perhaps you mean just "solubility", which is the weight of a material that you can dissolve in a given volume of a specific solvent (at a given temperature).
You are also correct in stating that solubility usually (but not always) increases with temperature. This depends on the choice of solute and solvent.
There are two effects here serving to purify your impure compound. (1) As Ivan has explained above, a small amount of impurity (having a similar solubility) would remain in solution (being below its saturation concentration) while the main ingredient becomes saturated in solution as the solution cools and the solubility decreases, and therefore crystallizes out. (But--this would not work well if the impurity were relatively insoluble in the solvent.) (2) The formation of a regular crystal lattice, which is usually energetically more favorable than an irregular or amorphous solid, tends to favor the formation of crystals composed of a single chemical substance, rather than a mixture--especially if one substance is in large excess. So, if you repeatedly recrystallize a substance (each time, taking the crystals from the previous step and recrystallizing them using fresh solvent) you can sometimes achieve a very high state of purity.
• en.wikipedia.org/wiki/Solubility_equilibrium – joshuakatz Oct 10 '15 at 7:55
• I'm not sure why you feel the need to school me, but you should check your soft skills bud. Your explanation added nothing and I'm not going to take solubility constant out of my vocabulary. Why wouldn't something be a constant just because it's temperature dependent? You seem like a real dope – joshuakatz Oct 10 '15 at 8:04
• I am sorry, Joshua -- I can see how you would have been offended by the tone of my answer, so I have edited it accordingly. I have occasionally been known to act as a real dope, and my wife can corroborate on this. Please look at my edited answer and decide if it has any value to you on its merits. – iad22agp Oct 10 '15 at 10:52 |
# Keynes: Probability Introduction Ch I
Keynes worked on the theory of probability and submitted a dissertation on that topic for a fellowship at King's College, Cambridge in March 1908. William Ernest Johnson and Alfred North Whitehead were appointed to assess the dissertation. He was not successful, but he revised the work taking the assessors' comments into account, as well as comments by Bertrand Russell. He resubmitted it and was awarded a fellowship in March 1909. Although he intended to publish his dissertation, he could not do so during World War I while working for the Treasury. After the end of the war Keynes prepared his dissertation for publication and it was published in 1921. We present a version of the introductory first chapter of the book on The meaning of probability.
See Keynes Intro Ch I for the Preface to the book.
See Keynes Intro Ch II for the second introductory chapter where Keynes looks at Probability in relation to the theory of knowledge.
## CHAPTER I
### THE MEANING OF PROBABILITY
J'ai dit plus d'une fois qu'il faudrait une nouvelle espèce de logique, qui traiterait des degrés de Probabilité. - LEIBNIZ. [I have said more than once that we need a new kind of logic, one that would treat of degrees of probability.]
1. Part of our knowledge we obtain direct; and part by argument. The Theory of Probability is concerned with that part which we obtain by argument, and it treats of the different degrees in which the results so obtained are conclusive or inconclusive.
In most branches of academic logic, such as the theory of the syllogism or the geometry of ideal space, all the arguments aim at demonstrative certainty. They claim to be conclusive. But many other arguments are rational and claim some weight without pretending to be certain. In Metaphysics, in Science, and in Conduct, most of the arguments, upon which we habitually base our rational beliefs, are admitted to be inconclusive in a greater or less degree. Thus for a philosophical treatment of these branches of knowledge, the study of probability is required.
The course which the history of thought has led Logic to follow has encouraged the view that doubtful arguments are not within its scope. But in the actual exercise of reason we do not wait on certainty, or deem it irrational to depend on a doubtful argument. If logic investigates the general principles of valid thought, the study of arguments, to which it is rational to attach some weight, is as much a part of it as the study of those which are demonstrative.
2. The terms certain and probable describe the various degrees of rational belief about a proposition which different amounts of knowledge authorise us to entertain. All propositions are true or false, but the knowledge we have of them depends on our circumstances; and while it is often convenient to speak of propositions as certain or probable, this expresses strictly a relationship in which they stand to a corpus of knowledge, actual or hypothetical, and not a characteristic of the propositions in themselves. A proposition is capable at the same time of varying degrees of this relationship, depending upon the knowledge to which it is related, so that it is without significance to call a proposition probable unless we specify the knowledge to which we are relating it.
To this extent, therefore, probability may be called subjective. But in the sense important to logic, probability is not subjective. It is not, that is to say, subject to human caprice. A proposition is not probable because we think it so. When once the facts are given which determine our knowledge, what is probable or improbable in these circumstances has been fixed objectively, and is independent of our opinion. The Theory of Probability is logical, therefore, because it is concerned with the degree of belief which it is rational to entertain in given conditions, and not merely with the actual beliefs of particular individuals, which may or may not be rational.
Given the body of direct knowledge which constitutes our ultimate premisses, this theory tells us what further rational beliefs, certain or probable, can be derived by valid argument from our direct knowledge. This involves purely logical relations between the propositions which embody our direct knowledge and the propositions about which we seek indirect knowledge. What particular propositions we select as the premisses of our argument naturally depends on subjective factors peculiar to ourselves; but the relations, in which other propositions stand to these, and which entitle us to probable beliefs, are objective and logical.
3. Let our premisses consist of any set of propositions $h$, and our conclusion consist of any set of propositions $a$, then, if a knowledge of $h$ justifies a rational belief in $a$ of degree $\alpha$, we say that there is a probability-relation of degree $\alpha$ between $a$ and $h$. [This will be written $a/h = \alpha$.]
In ordinary speech we often describe the conclusion as being doubtful, uncertain, or only probable. But, strictly, these terms ought to be applied, either to the degree of our rational belief in the conclusion, or to the relation or argument between two sets of propositions, knowledge of which would afford grounds for a corresponding degree of rational belief.
4. With the term "event," which has taken hitherto so important a place in the phraseology of the subject, I shall dispense altogether [except in those chapters where I am dealing chiefly with the work of others]. Writers on Probability have generally dealt with what they term the "happening" of "events." In the problems which they first studied this did not involve much departure from common usage. But these expressions are now used in a way which is vague and ambiguous; and it will be more than a verbal improvement to discuss the truth and the probability of propositions instead of the occurrence and the probability of events [The first writer I know of to notice this was Ancillon in Doutes sur les bases du calcul des probabilités (1794): "Dire qu'un fait passé, présent ou à venir est probable, c'est dire qu'une proposition est probable" ("To say that a past, present, or future fact is probable is to say that a proposition is probable"). The point was emphasised by Boole, Laws of Thought, pp. 7 and 167].
5. These general ideas are not likely to provoke much criticism. In the ordinary course of thought and argument, we are constantly assuming that knowledge of one statement, while not proving the truth of a second, yields nevertheless some ground for believing it. We assert that we ought on the evidence to prefer such and such a belief. We claim rational grounds for assertions which are not conclusively demonstrated. We allow, in fact, that statements may be unproved, without, for that reason, being unfounded. And it does not seem on reflection that the information we convey by these expressions is wholly subjective. When we argue that Darwin gives valid grounds for our accepting his theory of natural selection, we do not simply mean that we are psychologically inclined to agree with him; it is certain that we also intend to convey our belief that we are acting rationally in regarding his theory as probable. We believe that there is some real objective relation between Darwin's evidence and his conclusions, which is independent of the mere fact of our belief, and which is just as real and objective, though of a different degree, as that which would exist if the argument were as demonstrative as a syllogism. We are claiming, in fact, to cognise correctly a logical connection between one set of propositions which we call our evidence and which we suppose ourselves to know, and another set which we call our conclusions, and to which we attach more or less weight according to the grounds supplied by the first. It is this type of objective relation between sets of propositions - the type which we claim to be correctly perceiving when we make such assertions as these - to which the reader's attention must be directed.
6. It is not straining the use of words to speak of this as the relation of probability. It is true that mathematicians have employed the term in a narrower sense; for they have often confined it to the limited class of instances in which the relation is adapted to an algebraical treatment. But in common usage the word has never received this limitation.
Students of probability in the sense which is meant by the authors of typical treatises on Wahrscheinlichkeitsrechnung or Calcul des probabilités, will find that I do eventually reach topics with which they are familiar. But in making a serious attempt to deal with the fundamental difficulties with which all students of mathematical probabilities have met and which are notoriously unsolved, we must begin at the beginning (or almost at the beginning) and treat our subject widely. As soon as mathematical probability ceases to be the merest algebra or pretends to guide our decisions, it immediately meets with problems against which its own weapons are quite powerless. And even if we wish later on to use probability in a narrow sense, it will be well to know first what it means in the widest.
7. Between two sets of propositions, therefore, there exists a relation, in virtue of which, if we know the first, we can attach to the latter some degree of rational belief. This relation is the subject-matter of the logic of probability.
A great deal of confusion and error has arisen out of a failure to take due account of this relational aspect of probability. From the premisses "a implies b" and "a is true," we can conclude something about b - namely that b is true - which does not involve a. But, if a is so related to b, that a knowledge of it renders a probable belief in b rational, we cannot conclude anything whatever about b which has not reference to a; and it is not true that every set of self-consistent premisses which includes a has this same relation to b. It is as useless, therefore, to say "b is probable" as it would be to say "b is equal," or "b is greater than," and as unwarranted to conclude that, because a makes b probable, therefore a and c together make b probable, as to argue that because a is less than b, therefore a and c together are less than b.
Thus, when in ordinary speech we name some opinion as probable without further qualification, the phrase is generally elliptical. We mean that it is probable when certain considerations, implicitly or explicitly present to our minds at the moment, are taken into account. We use the word for the sake of shortness, just as we speak of a place as being three miles distant, when we mean three miles distant from where we are then situated, or from some starting-point to which we tacitly refer. No proposition is in itself either probable or improbable, just as no place can be intrinsically distant; and the probability of the same statement varies with the evidence presented, which is, as it were, its origin of reference. We may fix our attention on our own knowledge and, treating this as our origin, consider the probabilities of all other suppositions, - according to the usual practice which leads to the elliptical form of common speech; or we may, equally well, fix it on a proposed conclusion and consider what degree of probability this would derive from various sets of assumptions, which might constitute the corpus of knowledge of ourselves or others, or which are merely hypotheses.
Reflection will show that this account harmonises with familiar experience. There is nothing novel in the supposition that the probability of a theory turns upon the evidence by which it is supported; and it is common to assert that an opinion was probable on the evidence at first to hand, but on further information was untenable. As our knowledge or our hypothesis changes, our conclusions have new probabilities, not in themselves, but relatively to these new premisses. New logical relations have now become important, namely those between the conclusions which we are investigating and our new assumptions; but the old relations between the conclusions and the former assumptions still exist and are just as real as these new ones. It would be as absurd to deny that an opinion was probable, when at a later stage certain objections have come to light, as to deny, when we have reached our destination, that it was ever three miles distant; and the opinion still is probable in relation to the old hypotheses, just as the destination is still three miles distant from our starting-point.
8. A definition of probability is not possible, unless it contents us to define degrees of the probability-relation by reference to degrees of rational belief. We cannot analyse the probability-relation in terms of simpler ideas. As soon as we have passed from the logic of implication and the categories of truth and falsehood to the logic of probability and the categories of knowledge, ignorance, and rational belief, we are paying attention to a new logical relation in which, although it is logical, we were not previously interested, and which cannot be explained or defined in terms of our previous notions.
This opinion is, from the nature of the case, incapable of positive proof. The presumption in its favour must arise partly out of our failure to find a definition, and partly because the notion presents itself to the mind as something new and independent. If the statement that an opinion was probable on the evidence at first to hand, but became untenable on further information, is not solely concerned with psychological belief, I do not know how the element of logical doubt is to be defined, or how its substance is to be stated, in terms of the other indefinables of formal logic. The attempts at definition, which have been made hitherto, will be criticised in later chapters. I do not believe that any of them accurately represent that particular logical relation which we have in our minds when we speak of the probability of an argument.
In the great majority of cases the term "probable" seems to be used consistently by different persons to describe the same concept. Differences of opinion have not been due, I think, to a radical ambiguity of language. In any case a desire to reduce the indefinables of logic can easily be carried too far. Even if a definition is discoverable in the end, there is no harm in postponing it until our enquiry into the object of definition is far advanced. In the case of "probability" the object before the mind is so familiar that the danger of misdescribing its qualities through lack of a definition is less than if it were a highly abstract entity far removed from the normal channels of thought.
9. This chapter has served briefly to indicate, though not to define, the subject matter of the book. Its object has been to emphasise the existence of a logical relation between two sets of propositions in cases where it is not possible to argue demonstratively from one to the other. This is a contention of a most fundamental character. It is not entirely novel, but has seldom received due emphasis, is often overlooked, and sometimes denied. The view, that probability arises out of the existence of a specific relation between premiss and conclusion, depends for its acceptance upon a reflective judgment on the true character of the concept. It will be our object to discuss, under the title of Probability, the principal properties of this relation. First, however, we must digress in order to consider briefly what we mean by knowledge, rational belief, and argument.
# Category Theory & Programming
for Riviera Scala Clojure (note: this presentation uses Haskell)
by Yann Esposito
## Plan
• General overview
• Definitions
• Applications
## General Overview
Recent Math Field
1942-45, Samuel Eilenberg & Saunders Mac Lane
Certainly one of the more abstract branches of math
• New math foundation
a formalism for abstraction, able to package an entire theory
• Bridge between disciplines
Physics, Quantum Physics, Topology, Logic, Computer Science
## From a Programmer perspective
Category Theory is a new language/framework for Math
• Another way of thinking
• Extremely efficient for generalization
## Math Programming relation
Programming is doing Math
Strong relations between type theory and category theory.
Not convinced?
Certainly a vocabulary problem.
One of the goals of Category Theory is to create a homogeneous vocabulary between different disciplines.
## Vocabulary
Math vocabulary used in this presentation:
Category, Morphism, Associativity, Preorder, Functor, Endofunctor, Categorical property, Commutative diagram, Isomorph, Initial, Dual, Monoid, Natural transformation, Monad, Kleisli arrows, κατα-morphism, ...
## Programmer Translation
| Mathematician | Programmer |
| --- | --- |
| Morphism | Arrow |
| Monoid | String-like |
| Preorder | Acyclic graph |
| Isomorph | The same |
| Natural transformation | Rearrangement function |
| Funny Category | LOLCat |
## Plan
• General overview
• Definitions
• Category
• Intuition
• Examples
• Functor
• Examples
• Applications
## Category
A way of representing things and ways to go between things.
A Category $$\mathcal{C}$$ is defined by:
• Objects $$\ob{C}$$,
• Morphisms $$\hom{C}$$,
• a Composition law (∘)
• obeying some Properties.
## Category: Objects
$$\ob{\mathcal{C}}$$ is a collection
## Category: Morphisms
$$A$$ and $$B$$ objects of $$\C$$
$$\hom{A,B}$$ is a collection of morphisms
$$f:A→B$$ denotes the fact that $$f$$ belongs to $$\hom{A,B}$$
$$\hom{\C}$$ the collection of all morphisms of $$\C$$
## Category: Composition
Composition (∘): associates to each pair $$f:A→B$$, $$g:B→C$$ a morphism $$g∘f:A→C$$
## Category laws: neutral element
for each object $$X$$, there is an identity morphism $$\id_X:X→X$$,
such that for each $$f:A→B$$:
$$f∘\id_A = f = \id_B∘f$$
## Category laws: Associativity
Composition is associative: for all $$f:A→B$$, $$g:B→C$$, $$h:C→D$$, $$h∘(g∘f) = (h∘g)∘f$$
## Commutative diagrams
Two paths with the same source and destination are equal.
## Can this be a category?
$$\ob{\C},\hom{\C}$$ fixed, is there a valid ∘?
## Category $$\Set$$
• $$\ob{\Set}$$ are all the sets
• $$\hom{E,F}$$ are all functions from $$E$$ to $$F$$
• ∘ is functions composition
• $$\ob{\Set}$$ is a proper class ; not a set
• $$\hom{E,F}$$ is a set
• $$\Set$$ is then a locally small category
## Categories Everywhere?
• $$\Mon$$: (monoids, monoid morphisms,∘)
• $$\Vec$$: (Vectorial spaces, linear functions,∘)
• $$\Grp$$: (groups, group morphisms,∘)
• $$\Rng$$: (rings, ring morphisms,∘)
• Any deductive system T: (theorems, proofs, proof concatenation)
• $$\Hask$$: (Haskell types, functions, (.) )
• ...
## Smaller Examples
### Strings
• $$\ob{Str}$$ is a singleton
• $$\hom{Str}$$ each string
• ∘ is concatenation (++)
• "" ++ u = u = u ++ ""
• (u ++ v) ++ w = u ++ (v ++ w)
## Finite Example?
### Graph
• $$\ob{G}$$ are vertices
• $$\hom{G}$$ each path
• ∘ is path concatenation
• $$\ob{G}=\{X,Y,Z\}$$,
• $$\hom{G}=\{ε,α,β,γ,αβ,βγ,...\}$$
• $$αβ∘γ=αβγ$$
## Degenerated Categories: Monoids
Each Monoid $$(M,e,⊙): \ob{M}=\{∙\},\hom{M}=M,\circ = ⊙$$
Only one object.
Examples:
• (Integer,0,+), (Integer,1,*),
• (Strings,"",++), for each a, ([a],[],++)
## Degenerated Categories: Preorders $$(P,≤)$$
• $$\ob{P}={P}$$,
• $$\hom{x,y}=\{x≤y\} ⇔ x≤y$$,
• $$(y≤z) \circ (x≤y) = (x≤z)$$
At most one morphism between two objects.
## Degenerated Categories: Discrete Categories
### Any Set
Any set $$E: \ob{E}=E, \hom{x,y}=\{x\} ⇔ x=y$$
Only identities
## Choice
The same object can be seen as a category in many different ways.
You can choose what the objects, the morphisms, and the composition are.
ex: Str and discrete(Σ*)
## Categorical Properties
Any property which can be expressed in terms of categories, objects, morphisms, and composition.
• Dual: $$\D$$ is $$\C$$ with reversed morphisms.
• Initial: $$Z\in\ob{\C}$$ s.t. $$∀Y∈\ob{\C}, \#\hom{Z,Y}=1$$
Unique ("up to isormophism")
• Terminal: $$T\in\ob{\C}$$ s.t. $$T$$ is initial in the dual of $$\C$$
• Functor: structure preserving mapping between categories
• ...
## Isomorph
isomorphism: $$f:A→B$$ which can be "undone" i.e.
$$∃g:B→A$$, $$g∘f=id_A$$ & $$f∘g=id_B$$
in this case, $$A$$ & $$B$$ are isomorphic.
A≌B means A and B are essentially the same.
In Category Theory, = is in fact mostly ≌.
For example in commutative diagrams.
## Functor
A functor is a mapping between two categories. Let $$\C$$ and $$\D$$ be two categories. A functor $$\F$$ from $$\C$$ to $$\D$$:
• Associate objects: $$A\in\ob{\C}$$ to $$\F(A)\in\ob{\D}$$
• Associate morphisms: $$f:A\to B$$ to $$\F(f) : \F(A) \to \F(B)$$ such that
• $$\F(\id_X) = \id_{\F(X)}$$,
• $$\F(g∘f) = \F(g)∘\F(f)$$
## Endofunctors
An endofunctor for $$\C$$ is a functor $$F:\C→\C$$.
## Category of Categories
Categories and functors form a category: $$\Cat$$
• $$\ob{\Cat}$$ are categories
• $$\hom{\Cat}$$ are functors
• ∘ is functor composition
## Plan
• General overview
• Definitions
• Applications
• $$\Hask$$ category
• Functors
• Natural transformations
• κατα-morphisms
Category $$\Hask$$:
• $$\ob{\Hask} =$$ Haskell types
• $$\hom{\Hask} =$$ Haskell functions
• ∘ = (.) Haskell function composition
Forget glitches because of undefined.
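To make the correspondence concrete, here is a minimal runnable sketch (the functions f, g, and h are arbitrary examples, not taken from the presentation) checking the identity and associativity laws for (.) pointwise:

```haskell
-- A minimal sketch: composition in Hask is ordinary function composition (.),
-- and id plays the role of the identity morphisms.
f :: Int -> Int
f = (+ 1)

g :: Int -> String
g = show

h :: String -> Int
h = length

main :: IO ()
main = do
  print ((g . f) 41)                              -- prints "42"
  print ((g . id) 1 == g 1)                       -- True: id is a right identity
  print ((id . g) 1 == g 1)                       -- True: id is a left identity
  print (((h . g) . f) 10 == (h . (g . f)) 10)    -- True: composition is associative
```

Of course this only spot-checks the laws on particular inputs; in $$\Hask$$ they hold for all functions by the definitions of (.) and id.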
In Haskell some types can take type variable(s). Typically: [a].
Types have kinds; the kind is to a type what the type is to a function. Kinds are the types of types (so meta).
Int, Char :: *
[], Maybe :: * -> *
(,), (->) :: * -> * -> *
[Int], Maybe Char, Maybe [Int] :: *
Sometimes, the type determines a lot about the function:
fst :: (a,b) -> a -- Only one choice
snd :: (a,b) -> b -- Only one choice
f :: a -> [a] -- Many choices
-- Possibilities: f x=[], or [x], or [x,x] or [x,...,x]
? :: [a] -> [a] -- Many choices
-- can only rearrange: duplicate/remove/reorder elements
-- for example: the type of addOne isn't [a] -> [a]
addOne l = map (+1) l
-- The (+1) force 'a' to be a Num.
## Haskell Functor vs $$\Hask$$ Functor
A Haskell Functor is a type F :: * -> * which belongs to the type class Functor; it thus provides fmap :: (a -> b) -> (F a -> F b).
• F: $$\ob{\Hask}→\ob{\Hask}$$
• fmap: $$\hom{\Hask}→\hom{\Hask}$$
The couple (F,fmap) is a $$\Hask$$'s functor if for any x :: F a:
• fmap id x = x
• fmap (f.g) x= (fmap f . fmap g) x
data Maybe a = Just a | Nothing
instance Functor Maybe where
fmap :: (a -> b) -> (Maybe a -> Maybe b)
fmap f (Just a) = Just (f a)
fmap f Nothing = Nothing
fmap (+1) (Just 1) == Just 2
fmap (+1) Nothing == Nothing
fmap head (Just [1,2,3]) == Just 1
instance Functor ([]) where
fmap :: (a -> b) -> [a] -> [b]
fmap = map
fmap (+1) [1,2,3] == [2,3,4]
fmap (+1) [] == []
fmap head [[1,2,3],[4,5,6]] == [1,4]
## Haskell Functors for the programmer
Functor is a type class used for types that can be mapped over.
• Containers: [], Trees, Map, HashMap...
• "Feature Type":
• Maybe a: help to handle absence of a.
Ex: safeDiv x 0 ⇒ Nothing
• Either String a: help to handle errors
Ex: reportDiv x 0 ⇒ Left "Division by 0!"
fmap lets a normal function act inside a container. Ex: lists, trees...
• endofunctors ; $$F:\C→\C$$ here $$\C = \Hask$$,
• a couple (Object,Morphism) in $$\Hask$$.
## Functor as boxes
Haskell functors can be seen as boxes, each containing all Haskell types and functions; the picture is fractal (each box contains the whole of Hask again).
A simple basic example is the $$id_\Hask$$ functor. It simply cannot be expressed as a couple (F,fmap) where
• F::* -> *
• fmap :: (a -> b) -> (F a) -> (F b)
Another example:
• F(T)=Int
• F(f)=\_->0
## Also Functor inside $$\Hask$$
$$\mathtt{[a]}∈\ob{\Hask}$$ but is also a category. Idem for Int.
length is a Functor from the category [a] to the category Int:
• $$\ob{\mathtt{[a]}}=\{∙\}$$
• $$\hom{\mathtt{[a]}}=\mathtt{[a]}$$
• $$∘=\mathtt{(++)}$$
• $$\ob{\mathtt{Int}}=\{∙\}$$
• $$\hom{\mathtt{Int}}=\mathtt{Int}$$
• $$∘=\mathtt{(+)}$$
• id: length [] = 0
• comp: length (l ++ l') = (length l) + (length l')
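These two laws can be spot-checked directly; a minimal sketch using only the Prelude:

```haskell
-- A minimal check of the two laws stated above for length, viewed as a functor
-- from the one-object category ([a], ++) to the one-object category (Int, +).
main :: IO ()
main = do
  -- identity: the empty string (identity of ++) maps to 0 (identity of +)
  print (length "" == 0)
  -- composition: concatenation maps to addition
  let l  = "abc"
      l' = "defg"
  print (length (l ++ l') == length l + length l')
```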
## Category of Functors
If $$\C$$ is small ($$\hom{\C}$$ is a set). All functors from $$\C$$ to some category $$\D$$ form the category $$\mathrm{Func}(\C,\D)$$.
• $$\ob{\mathrm{Func}(\C,\D)}$$: Functors $$F:\C→\D$$
• $$\hom{\mathrm{Func}(\C,\D)}$$: natural transformations
• ∘: Functor composition
$$\mathrm{Func}(\C,\C)$$ is the category of endofunctors of $$\C$$.
## Natural Transformations
Let $$F$$ and $$G$$ be two functors from $$\C$$ to $$\D$$.
A natural transformation is a family η; $$η_X\in\hom{\D}$$ for each $$X\in\ob{\C}$$, s.t. for every $$f:X→Y$$ in $$\C$$: $$η_Y∘F(f) = G(f)∘η_X$$.
ex: between Haskell functors; F a -> G a
Rearrangement functions only.
## Natural Transformation Examples (1/4)
data List a = Nil | Cons a (List a)
toList :: [a] -> List a
toList [] = Nil
toList (x:xs) = Cons x (toList xs)
toList is a natural transformation. It is also a morphism from [] to List in the Category of $$\Hask$$ endofunctors.
## Natural Transformation Examples (2/4)
data List a = Nil | Cons a (List a)
toHList :: List a -> [a]
toHList Nil = []
toHList (Cons x xs) = x:toHList xs
toHList is a natural transformation. It is also a morphism from List to [] in the Category of $$\Hask$$ endofunctors.
## Natural Transformation Examples (3/4)
toMaybe :: [a] -> Maybe a
toMaybe [] = Nothing
toMaybe (x:xs) = Just x
toMaybe is a natural transformation. It is also a morphism from [] to Maybe in the Category of $$\Hask$$ endofunctors.
## Natural Transformation Examples (4/4)
mToList :: Maybe a -> [a]
mToList Nothing = []
mToList (Just x) = [x]
mToList is a natural transformation. It is also a morphism from Maybe to [] in the Category of $$\Hask$$ endofunctors.
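Naturality means that the transformation commutes with fmap. A minimal sketch, repeating the toMaybe and mToList definitions above and checking the naturality square on sample values:

```haskell
-- Naturality check: for a natural transformation eta, fmap f . eta == eta . fmap f.
-- toMaybe and mToList repeat the definitions given in the slides above.
toMaybe :: [a] -> Maybe a
toMaybe []     = Nothing
toMaybe (x:xs) = Just x

mToList :: Maybe a -> [a]
mToList Nothing  = []
mToList (Just x) = [x]

main :: IO ()
main = do
  let f = (+ 1) :: Int -> Int
  print (fmap f (toMaybe [1,2,3]) == toMaybe (fmap f [1,2,3]))    -- True
  print (fmap f (mToList (Just 1)) == mToList (fmap f (Just 1)))  -- True
```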
## Composition problem
The Problem; example with lists:
f x = [x] ⇒ f 1 = [1] ⇒ (f.f) 1 = [[1]] ✗
g x = [x+1] ⇒ g 1 = [2] ⇒ (g.g) 1 = ERROR [2]+1 ✗
h x = [x+1,x*3] ⇒ h 1 = [2,3] ⇒ (h.h) 1 = ERROR [2,3]+1 ✗
The same problem arises with most functions f :: a -> F a, for a functor F.
## Composition Fixable?
How to fix that? We want to construct an operator which is able to compose:
f :: a -> F b & g :: b -> F c.
More specifically we want to create an operator ◎ of type
◎ :: (b -> F c) -> (a -> F b) -> (a -> F c)
Note: if F = I, ◎ = (.).
## Fix Composition (1/2)
Goal, find: ◎ :: (b -> F c) -> (a -> F b) -> (a -> F c)
f :: a -> F b, g :: b -> F c:
• (g ◎ f) x ???
• First apply f to x: f x :: F b
• Then how to apply g properly to an element of type F b?
## Fix Composition (2/2)
Goal, find: ◎ :: (b -> F c) -> (a -> F b) -> (a -> F c)
f :: a -> F b, g :: b -> F c, f x :: F b:
• Use fmap :: (t -> u) -> (F t -> F u)!
• (fmap g) :: F b -> F (F c) ; (t=b, u=F c)
• (fmap g) (f x) :: F (F c) it almost WORKS!
• We lack an important component, join :: F (F c) -> F c
• (g ◎ f) x = join ((fmap g) (f x))
◎ is the Kleisli composition; in Haskell: <=< (in Control.Monad).
## Necessary laws
For ◎ to work like composition, we need join to hold the following properties:
• join (join (F (F (F a))))=join (F (join (F (F a))))
• abusing notations denoting join by ⊙; this is equivalent to
(F ⊙ F) ⊙ F = F ⊙ (F ⊙ F)
• There exists η :: a -> F a s.t.
η⊙F=F=F⊙η
## Kleisli composition
Now the composition works as expected. In Haskell ◎ is <=< in Control.Monad.
g <=< f = \x -> join ((fmap g) (f x))
f x = [x] ⇒ f 1 = [1] ⇒ (f <=< f) 1 = [1] ✓
g x = [x+1] ⇒ g 1 = [2] ⇒ (g <=< g) 1 = [3] ✓
h x = [x+1,x*3] ⇒ h 1 = [2,3] ⇒ (h <=< h) 1 = [3,6,4,9] ✓
A monad is a triplet (M,⊙,η) where
• $$M$$ an Endofunctor (to type a associate M a)
• $$⊙:M×M→M$$ a nat. trans. (i.e. ⊙::M (M a) → M a ; join)
• $$η:I→M$$ a nat. trans. ($$I$$ identity functor ; η::a → M a)
Satisfying
• $$M ⊙ (M ⊙ M) = (M ⊙ M) ⊙ M$$
• $$η ⊙ M = M = M ⊙ η$$
## Compare with Monoid
A Monoid is a triplet $$(E,∙,e)$$ s.t.
• $$E$$ a set
• $$∙:E×E→E$$
• $$e:1→E$$
Satisfying
• $$x∙(y∙z) = (x∙y)∙z, ∀x,y,z∈E$$
• $$e∙x = x = x∙e, ∀x∈E$$
A Monad is just a monoid in the category of endofunctors, what's the problem?
The real sentence was:
All told, a monad in X is just a monoid in the category of endofunctors of X, with product × replaced by composition of endofunctors and unit set by the identity endofunctor.
## Example: List
• [] :: * -> * an Endofunctor
• $$⊙:M×M→M$$ a nat. trans. (join :: M (M a) -> M a)
• $$η:I→M$$ a nat. trans.
-- In Haskell ⊙ is "join" in "Control.Monad"
join :: [[a]] -> [a]
join = concat
-- In Haskell the "return" function (unfortunate name)
η :: a -> [a]
η x = [x]
## Example: List (law verification)
Example: List is a functor (join is ⊙)
• $$M ⊙ (M ⊙ M) = (M ⊙ M) ⊙ M$$
• $$η ⊙ M = M = M ⊙ η$$
join [ join [[x,y,...,z]] ] = join [[x,y,...,z]]
= join (join [[[x,y,...,z]]])
join (η [x]) = [x] = join [η x]
Therefore ([],join,η) is a monad.
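The same verification can be run on concrete values; a minimal sketch, assuming join for lists is concat (as in Control.Monad) and writing the slides' η as eta:

```haskell
import Control.Monad (join)  -- join for lists is concat

-- eta is what the slides call η
eta :: a -> [a]
eta x = [x]

main :: IO ()
main = do
  let xsss = [[[1,2],[3]],[[4]]] :: [[[Int]]]
  -- associativity: join . join == join . fmap join
  print (join (join xsss) == join (fmap join xsss))         -- True
  let xs = [1,2,3] :: [Int]
  -- unit laws: join . eta == id == join . fmap eta
  print (join (eta xs) == xs && join (fmap eta xs) == xs)   -- True
```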
A LOT of monad tutorials on the net. Just one example: the State monad.
From DrawScene to State Screen DrawScene; still pure.
main = drawImage (width,height)

drawImage :: Screen -> DrawScene
drawImage screen = do
  drawPoint p screen
  drawCircle c screen
  drawRectangle r screen

drawPoint point screen = ...
drawCircle circle screen = ...
drawRectangle rectangle screen = ...
main = do
  put (Screen 1024 768)
  drawImage

drawImage :: State Screen DrawScene
drawImage = do
  drawPoint p
  drawCircle c
  drawRectangle r

drawPoint :: Point -> State Screen DrawScene
drawPoint p = do
  Screen width height <- get
  ...
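The two fragments above are schematic (Screen, DrawScene, p, c, and r are left undefined). As a point of comparison only, here is a minimal self-contained sketch of the same State-threading idea; it assumes the mtl package's Control.Monad.State and invents a tiny Screen type and drawing functions that merely return strings:

```haskell
-- A minimal sketch of the State-monad idea above (not the presentation's actual code).
import Control.Monad.State

data Screen = Screen { width :: Int, height :: Int } deriving Show

-- "Drawing" here just returns a description string; the Screen is threaded implicitly.
drawPoint :: (Int, Int) -> State Screen String
drawPoint (x, y) = do
  Screen w h <- get
  return ("point " ++ show (x, y) ++ " on " ++ show w ++ "x" ++ show h)

drawImage :: State Screen [String]
drawImage = do
  put (Screen 1024 768)       -- set up the screen once
  p1 <- drawPoint (1, 2)
  p2 <- drawPoint (3, 4)
  return [p1, p2]

main :: IO ()
main = print (evalState drawImage (Screen 0 0))
```

The point is the shape of the code: the Screen is no longer passed explicitly to every drawing function, yet everything stays pure.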
## κατα-morphism: fold generalization
acc is the type of the "accumulator":
fold :: (acc -> a -> acc) -> acc -> [a] -> acc
Idea: put the accumulated value inside the type.
-- Equivalent to fold (+1) 0 "cata"
(Cons 'c' (Cons 'a' (Cons 't' (Cons 'a' Nil))))
(Cons 'c' (Cons 'a' (Cons 't' (Cons 'a' 0))))
(Cons 'c' (Cons 'a' (Cons 't' 1)))
(Cons 'c' (Cons 'a' 2))
(Cons 'c' 3)
4
But where is all the information? The (+1) and the 0?
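For reference, the plain-list analogue of the reduction traced above is an ordinary right fold (standard Haskell, independent of the slides):

```haskell
-- Replace the empty list by 0 and each cons cell by (+1) applied to the
-- already-folded tail: this is exactly the reduction traced above.
len :: [a] -> Int
len = foldr (\_ n -> n + 1) 0

main :: IO ()
main = print (len "cata")   -- 4
```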
## κατα-morphism: Missing Information
Where is the missing information?
• Functor operator fmap
• Algebra representing the (+1) and also knowing about the 0.
First example: implement length on [Char].
## κατα-morphism: Type work
data StrF a = Cons Char a | Nil
-- Intuitively Str is the fixed point of StrF: StrF applied to itself indefinitely
-- generalize the construction of Str to other datatypes
-- Mu: type fixed point
-- Mu :: (* -> *) -> *
data Mu f = InF { outF :: f (Mu f) }
type Str = Mu StrF
-- Example
foo=InF { outF = Cons 'f'
(InF { outF = Cons 'o'
(InF { outF = Cons 'o'
(InF { outF = Nil })})})}
## κατα-morphism: missing information retrieved
type Algebra f a = f a -> a
instance Functor StrF where
  fmap f (Cons c x) = Cons c (f x)
  fmap _ Nil = Nil
cata :: Functor f => Algebra f a -> Mu f -> a
cata f = f . fmap (cata f) . outF
## κατα-morphism: Finally length
All needed information for making length.
instance Functor StrF where
  fmap f (Cons c x) = Cons c (f x)
  fmap _ Nil = Nil
length' :: Str -> Int
length' = cata phi where
  phi :: Algebra StrF Int -- StrF Int -> Int
  phi (Cons a b) = 1 + b
  phi Nil = 0
main = do
  let l = length' (stringToStr "Toto")  -- stringToStr converts a String to Str (definition elided)
  ...
## κατα-morphism: extension to Trees
Once you get the trick, it is easy to extend to most Functors.
type Tree = Mu TreeF
data TreeF x = Node Int [x]
instance Functor TreeF where
  fmap f (Node e xs) = Node e (fmap f xs)

depth = cata phi where
  phi :: Algebra TreeF Int -- TreeF Int -> Int
  phi (Node x sons) = 1 + foldr max 0 sons
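A short usage sketch, assuming the Mu, InF, and cata definitions from the previous slides; the leaf and node helpers and the example tree are made up for illustration:

```haskell
-- Build a tiny tree by hand and compute its depth with the cata-based depth above.
-- leaf and node are hypothetical helpers, not from the presentation.
leaf :: Int -> Tree
leaf n = InF (Node n [])

node :: Int -> [Tree] -> Tree
node n ts = InF (Node n ts)

main :: IO ()
main = print (depth (node 1 [leaf 2, node 3 [leaf 4]]))  -- 3
```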
## Conclusion
Category Theory oriented Programming:
• Focus on the type and operators
• Extreme generalisation
• Better modularity
• Better control through properties of types
No cats were harmed in the making of this presentation.
Reviews on Environmental Health, Volume 29, Issue 4
Understanding exposure from natural gas drilling puts current air standards to the test
David Brown / Beth Weinberger / Celia Lewis / Heather Bonaparte
Published Online: 2014-03-29 | DOI: https://doi.org/10.1515/reveh-2014-0002
Abstract
Case study descriptions of acute onset of respiratory, neurologic, dermal, vascular, abdominal, and gastrointestinal sequelae near natural gas facilities contrast with a subset of emissions research, which suggests that there is limited risk posed by unconventional natural gas development (UNGD). An inspection of the pathophysiological effects of acute toxic actions reveals that current environmental monitoring protocols are incompatible with the goal of protecting the health of those living and working near UNGD activities. The intensity, frequency, and duration of exposures to toxic materials in air and water determine the health risks to individuals within a population. Currently, human health risks near UNGD sites are derived from average population risks without adequate attention to the processes of toxicity to the body. The objective of this paper is to illustrate that current methods of collecting emissions data, as well as the analyses of these data, are not sufficient for accurately assessing risks to individuals or protecting the health of those near UNGD sites. Focusing on air pollution impacts, we examined data from public sources and from the published literature. We compared the methods commonly used to evaluate health safety near UNGD sites with the information that would be reasonably needed to determine plausible outcomes of actual exposures. Such outcomes must be based on the pathophysiological effects of the agents present and the susceptibility of residents near these sites. Our study has several findings. First, current protocols used for assessing compliance with ambient air standards do not adequately determine the intensity, frequency or durations of the actual human exposures to the mixtures of toxic materials released regularly at UNGD sites. Second, the typically used periodic 24-h average measures can underestimate actual exposures by an order of magnitude. Third, reference standards are set in a form that inaccurately determines health risk because they do not fully consider the potential synergistic combinations of toxic air emissions. Finally, air dispersion modeling shows that local weather conditions are strong determinates of individual exposures. Appropriate estimation of safety requires nested protocols that measure real time exposures. New protocols are needed to provide 1) continuous measures of a surrogate compound to show periods of extreme exposure; 2) a continuous screening model based on local weather conditions to warn of periodic high exposures; and 3) comprehensive detection of chemical mixtures using canisters or other devices that capture the major components of the mixtures.
Introduction
Recent and projected growth in the oil and gas production sector has underscored the need for EPA to gain a better understanding of emissions and potential risks from this industry sector. Harmful pollutants emitted from this industry include air toxics such as benzene, toluene, ethylbenzene, and xylene; criteria pollutants and ozone precursors such as NOx and VOCs; and greenhouse gases such as methane. These pollutants can result in serious health impacts such as cancer, respiratory disease, aggravation of respiratory illnesses, and premature death. However, EPA has limited directly-measured air emissions data on criteria and toxic air pollutants for several important oil and gas production processes. [These] limited data, coupled with poor quality and insufficient emission factors and incomplete NEI data, hamper EPA’s ability to assess air quality impacts from selected oil and gas production activities.
– US Environmental Protection Agency (EPA) Office of Inspector General (1)
The question we, and others, have asked is: do the levels of exposure to the mixture of unconventional natural gas development (UNGD) emissions constitute a potential human health hazard to those living very near UNGD activities (2–7)? The answer hinges on the emissions themselves, their synergistic effects, the methodology used for collecting and analyzing data, and the standards for gauging human health risk. More specifically, the answer hinges on whether the methodology used is designed to capture the important features of episodic and fluctuating emissions and exposures that characterize UNGD activity.
In this article, UNGD refers to the complete process of extracting, processing and transporting natural gas, including all associated infrastructures, such as flare stacks, flowback pits, compressors, and condensate tanks. Each stage of UNGD produces a different combination of emissions and the levels of release are also variable. Colburn et al. (8) collected air samples weekly for 1 year and reported that emissions were highest during the drilling phase of development. However, estimates provided by industry for the New York State Revised Draft SGEIS (9) suggest that VOC emissions may be greater during the production phase. In any case, emissions vary at each well pad because of several factors, including the type of gas being extracted, the mixture of fluids used, the quality of equipment, as well as the methods of extraction and processing. For example, flowback fluids may be trucked off a well pad or held in impoundments onsite, whereas in the finishing process, gases may be flared or vented.
Another variable in terms of human exposure risk is state setback regulations. Among the states that have them, each has different requirements for well setbacks from buildings and/or water sources. A survey on setback regulations for natural gas drilling reports that, for buildings, the setback distance can vary from 100 to 1000 feet, with an average of 308 feet (10). Water source setbacks can vary from 50 feet (Ohio) to as much as 2000 feet (Michigan). This same report finds “extensive regulatory heterogeneity among the states” for those with active gas production (10).
Toxicity of a chemical to the human body is determined by the concentration of the agent at the receptor where it acts. This concentration is determined by the intensity and duration of the exposure. All other physiological sequelae follow from the interaction between agent and receptor. Once a receptor is activated, a health event might be produced immediately or in as little as 1 to 2 h (11, 12). Alternatively, future exposures might compound the impact of the first one, eventually producing a health event. In some instances where there is a high concentration of an agent, a single significant exposure can cause injury or illness. Federal and state health standards for water and air, which are applied to UNGD emissions, ought to reflect and be evaluated in reference to these physiological realities; currently they do not. Thus, in order to understand and define the gap between air standards and the process by which UNGD exposures cause health effects, we examined the literature on UNGD emissions and exposures and then evaluated widely accepted health standards in light of environmental data we have collected.
Our interest in closing the gap between standards and the mechanisms of environmental health effects stems from the work we do in communities in southwest Pennsylvania, USA. Individuals in these communities have taught us a great deal about their health concerns and their unease with the air in and outside of their homes. There are similar issues with the potential for well water contamination from UNGD in the region. In this paper, we specifically address the risks posed by episodic, high concentration air exposures. Commonly used standards and benchmarks are particularly ill-equipped to consider this set of exposures.
Standards and monitoring protocols
The air standards and guidelines often used by the federal government, state governments, and by many independent researchers are those set by the National Ambient Air Quality Standards (NAAQS). These standards approach, but do not meet, the physiological criteria for how exposures cause damage at the individual level. This is not, however, a failure of the NAAQS. The standards have been designed to benchmark regional air quality, which refers to whether the overall pollution level in a region, over time, is within the ambient air target zone EPA deems safe. The standards are a tool for the regulatory system, which requires averaging of samples taken. How these data are collected, averaged, and interpreted varies by pollutant. It should also be noted that one of the criteria for determining standards is that the targeted level must be measurable, that is, if a chemical is not readily measurable at a given level, its use cannot be monitored, regulated or enforced. This criterion precludes standards being set to a very low level.
As seen in Table 1, the form (i.e., application) of the standard varies by compound. However, regardless of the substance, each standard relies on averages of exposures, sometimes over long periods of time. By seeking to provide overall regional air quality guidance, NAAQS and other air quality benchmarks have the following critical weaknesses when applied to individuals or very local areas: 1) current NAAQS do not address the interactions of the chemical agents in the air and then in the body; 2) long-term averages fail to capture the frequency or magnitude of very high readings; and 3) with periodic data collection, important spikes or episodic exposures (common in UNGD) can be missed. In those few cases where short-term or hourly ambient air levels are measured, the purpose is generally to avoid poisoning from catastrophic releases (13).
Table 1: National ambient air quality standards.
In addition, researchers use other guidelines for determining whether an exposure is within or beyond safe limits. Some researchers and regulatory agencies, for instance, use EPA’s Integrated Risk Information System (IRIS), a database of research on human health exposures. Guidance provided through IRIS covers hundreds of chemicals and their possible effects on humans. The database assists researchers with hazard identification and dose-response assessment as well as with oral reference doses (RfDs), inhalation reference concentrations (RfCs), and carcinogenicity assessments. The RfD or RfC reflects an estimate of the highest daily exposure across a lifetime, which is likely to be without significant risk of health effects. The science underlying these reference levels, however, does not necessarily apply to the risk circumstances brought about by UNGD. Furthermore, RfDs and RfCs have no direct regulatory application and no legal enforceability. Researchers have also evaluated the wisdom of looking at peak exposures as compared to averages over longer periods of time. Delfino et al. (14) posited that maxima of hourly data, not 24-h averages, better captured the risks to asthmatic children, stating, “it is expected that biologic responses may intensify with high peak excursions that overwhelm lung defense mechanisms”. Additionally, they suggest that “[o]ne-hour peaks may be more influenced by local point sources near the monitoring station that are not representative of regional exposures”.
Similarly, Darrow (15) writes that peak exposures can sometimes better capture relevant biological processes. This is the case for health effects that are triggered by short-term, high doses. They write, “Temporal metrics that reflect peak pollution levels (e.g., 1-h maximum) may be the most biologically relevant if the health effect is triggered by a high, short-term dose rather than a steady dose throughout the day. Peak concentrations … are frequently associated with episodic, local emission events, resulting in spatially heterogeneous concentrations”.
To give just one example, we know that 1 to 2 h of a diesel exhaust exposure can cause, for instance, reduced brachial artery diameter and exacerbation of exercise-induced ST-segment depression in people with pre-existing coronary artery disease; ischemic and thrombotic effects in men with coronary heart disease (16); and is associated with acute endothelial response and vasoconstriction of a conductance artery (17).
Given that episodic high exposures are not typically documented and analyzed by researchers and public agencies, health complaints in the area are not being correlated with industry emissions. However, examination of published air emission measurements in gas extraction and processing sites, along with collected health data from the Environmental Health Project (EHP) and others, show very real potential for harm from industry emissions (18). Reports of acute onset of respiratory, neurologic, dermal, vascular, abdominal, and gastrointestinal sequelae near natural gas facilities contrast with research, which suggests that there is limited risk posed by UNGD. By extension, we believe the contrast points to the inadequacy of using current federal standards.
For public agencies to protect human health, they need standards that are sensitive to and consistent with the known routes of exposure, the duration and frequency of exposures, the nature of chemical mixtures, tissue repair rates, plausible target organs, and the increased sensitivity of susceptible populations. Monitoring efforts must be complex enough to account for the actual mechanisms at work in the exposure-receptor relationship. They must also be sufficiently robust to measure fine-grained, hour-to-hour variability in air concentrations.
The objectives of this paper are to illustrate the shortcomings of the available data as well as the inadequacy of the standards by which they are evaluated. We present new protocols for air monitoring based on the observed health effects produced by exposures and on documented emissions patterns from the industry. The protocols are directed at the needs of the local residents who must be able to determine the safety and welfare of their families. The protocol reflects the following central requirements: 1) continuous measures of a surrogate compound to show periods of extreme exposure, 2) a continuous screening model based on local weather conditions to warn of periodic high exposures, and 3) comprehensive detection of chemical mixtures using canisters or other devices that capture the major components of the mixtures.
Documented emissions
Researchers have begun to document the content and quantities of emissions from UNGD sources, such as engine exhausts, condensate tanks, production equipment, well-drilling and completions, and transmission fugitives. Emissions identified have included four of the five NAAQs pollutants (excluding ozone) and a wide range of volatile organic compounds (VOCs) and other air toxics (19). Research conducted in the Fort Worth, Texas area documented the variation in emissions among locations and forms of UNGD activity. Point source research found a total of 2126 emission points in one 4-month UNGD field study. Pneumatic valve controllers were the most frequent emission sources at well pads and compressor stations. Emissions from storage tank vents proved to be one of the most significant polluters, accounting for 2076 tons of VOCs per year (20).
Among others, Earthworks has found air contaminants in areas, and in combinations, which one would not expect to find outside of industrial activity (21, 22). However, not every chemical in the 2012 Earthworks study was found at every site monitored. That said, there were notable consistencies across sites. For instance, 94% of the samples tested for 2-butanone detected it; 88% of those testing for acetone and 79% of those testing for chloromethane detected it. Moreover, 1,1,2-trichloro-1,2,2-trifluorethane, carbon tetrachloride, and trichlorofluoromethane were also frequently found. Specific emissions were not found uniformly across all locations, indicating that emissions themselves vary from site to site. In addition, there are different emissions recorded in the literature partly due to variations in researchers’ ability to capture and document those emissions.
Some studies around UNGD activities have found benzene, particulate matter (PM), formaldehyde, and other chemicals at levels in exceedance of state or federal limits. The Texas Commission on Environmental Quality, for instance, reports that at one source, 35 chemicals were detected above “appropriate short-term comparison values”. At some sites, multiple chemicals (carbon disulfide, ethane, isopentane, and 1,2-dibromoethane) exceeded short-term health-based comparison values. Benzene was also detected above the long-term health-based comparison value at 21 monitoring sites (3).
The federal government has not, as yet, gathered the quantity and quality of emissions data that are necessary to properly characterize the environmental conditions around UNGD sites. The Inspector General’s Office of the EPA confirms the inadequacy of data in reporting the following: EPA has 1) not developed default emission estimates for oil and gas nonpoint sources, 2) not ensured state submission of nonpoint sources oil and gas data as required by the EPA’s air emissions reporting requirement (AERR), and 3) some states’ failure to collect emissions data from smaller (i.e., nonpoint) oil and gas production facilities due to a lack of permitting requirements. The Inspector General’s Office concludes that, although resource intensive, developing a robust emissions inventory could cover these numerous small, unregulated sources (1).
Connections between emissions and health
Two important obstacles prohibit researchers from comprehensively assessing the health risks posed by UNGD activities. The first obstacle has to do with the incomplete list of chemicals used and air emissions generated by the industry. Companies and their sub-contractors are not mandated by the federal government to disclose the complete list of chemicals used in the hydrofracking process; nor are they required by state or local governments to provide a full accounting of the chemicals used at a given site. Second, there is a problem of assessing risk of known chemicals. Many of the chemicals that have been identified at UNGD sites or nearby do not have established comparison values by which to measure their potential health effects. Furthermore, chemicals are released into the air contemporaneously and there is little to no information on the toxicity of these mixtures. This is not a unique problem posed by UNGD. What is unusual is the proximity of emission sources to people’s homes and to places where they carry out their daily activities. To provide a sense of the urgency of addressing this issue, in a study of 290 households in Washington County Pennsylvania, collected as a convenience sample, we found that 707 unique, “active” wells or compressor stations were identified as located within three miles of all residences combined (Unpublished). It has been reported in the Wall Street Journal that as many as 15 million people live within one mile of a natural gas wellhead (http://stream.wsj.com/story/latest-headlines/SS-2-63399/SS-2-365197/).
Despite the limitations in data, some studies have been conducted on correlations between health risks and UNGD emissions. For instance, based on toxicity values for six carcinogenic contaminants in one Garfield County, Colorado study, researchers found low but increased risk of developing cancer in residents living near UNGD activity. Additionally, based on the presence of noncancer hazards, close proximity to UNGD activity was associated with low but increased risk of developing acute noncancer health effects; however, the authors report that insufficient data makes this finding inconclusive. Many air contaminants surrounding UNGD had no established toxicity levels so researchers could not identify and include those risks in their report (23).
Another Colorado study found that a noncancer chronic Hazard Index was greater for residents living ≤0.8 km from wells than it was for those more than 0.8 km out. Cumulative cancer risks were also greater for residents within 0.8 km of wells than for those living further out. Benzene and ethylbenzene were the primary contributors to cumulative cancer risk for residents living in close proximity to UNGD facilities (24).
An assessment of Pennsylvania birth outcomes, released as a working paper, compared birth outcomes for infants born to mothers living within 2.5 km of a permitted but not yet built gas well site and those within 2.5 km of an active gas well site. Results suggest that exposure to UNGD before birth increases the overall prevalence of low birth weight and the overall prevalence of small for gestational age; in addition, exposure reduces 5 min APGAR scores compared with births to mothers living near sites that have not yet been developed (25). In Colorado, a similar study found an increased prevalence of congenital heart defects, and possibly of neural tube defects in neonates for mothers residing within a 16 km radius of natural gas wells, based on density and proximity (26).
While not including all substances used or emitted from UNGD sites, the EPA’s IRIS provides data on known heath effects from exposure to toxic contaminants. The database contains information on more than 550 chemicals, including VOCs such as acrolein and formaldehyde, which are known to be emitted from UNGD sites. IRIS also provides information concerning acute toxicity.
Rationale
The Southwest Pennsylvania EHP examined whether UNGD emissions data collection, analysis, and comparison to standards reflect real-time exposures and their known pathophysiological mechanisms. EHP aimed to investigate the assumptions driving existing research and how such assumptions might mislead researchers in ways that undermine, even invalidate, their findings.
An initial appraisal of the literature led us to hypothesize that the application of federal standards to research on health impacts from industry air pollution failed to sufficiently address the periods of highest risk for people living near UNGD sites. We found a disconnection between the standards that do not address short-term exposure peaks, and how those actual exposures might put people at risk. In addition to examining existing research, we used data from real-time exposure measurement to shed light on the relationship between exposure measurement and the standards by which they are deemed safe or unsafe. These data came from monitoring efforts previously conducted by EHP in the homes of residents living near UNGD sites in Washington County, Pennsylvania. We measured PM because it poses well understood health risks, serves as a surrogate for other UNGD exposures, and is a synergist that intensifies the risks of other airborne toxins.
Materials and methods
We undertook analyses in three areas. First, we assessed the emerging literature on health risks posed by UNGD. Then, we analyzed EHP’s previously collected data on PM2.5 and PM0.5 micron levels in homes near UNGD activity as a proxy to assess real-time air pollution exposures. Finally, we created a simple weather screening model to capture the role of meteorological conditions on the dispersion of air emissions from industry sources. All three were aimed at understanding the relationship between actual human exposures and the standards by which they were deemed safe or unsafe.
Based on what we suggest as the necessary monitoring protocols for determining hazards to human health, we analyzed whether current methods of data collection, as revealed in published articles and reports, provide adequate measures. Our recommended protocols included the following: 1) continuous measures of a surrogate compound to show periods of extreme exposure, 2) a continuous screening model based on local weather conditions to warn of periodic high exposures, and 3) comprehensive detection of chemical mixtures using canisters or other devices.
Our examination of the aptness of federal ambient air standards began with a review of relevant standards and their rationales. We then reviewed the sampling methodologies and data analyses in a subset of emissions research on UNGD emissions and their associated health risks. For this review (Tables 2a–f) we selected six studies that focused on air contamination and health impacts of UNGD. The studies had a wide geographic range and were conducted by a variety of organization types. The studies were located in West Virginia, Colorado, Texas and Pennsylvania, and were conducted or commissioned by Schools of Public Health, a state Department of Public Health, independent consulting firms, and state Departments of Environmental Protection. Given that emissions factors and monitoring practices may have improved since the early years of UNGD, we selected studies published from 2010 to 2013 in peer-reviewed journals and from public access sites in different states. We paid particular attention to how researchers grappled with the problem of multiple exposures and how hazard indexes were effectively employed.
Tables 2a–f: Review of sampling methods and averaging times in six shale gas development air emissions studies. A glossary of abbreviations is in Appendix A.
To compare real-time fluctuations in air contamination to the results and conclusions found in the studies, we analyzed previously collected data on PM2.5 exposures in homes near UNGD sites. From June 2012 to August 2013, EHP placed Dylos™ air particle monitors (Dylos Corporation, Riverside, CA, USA) in 14 homes near UNGD sites. The data from these homes constitute an opportunity sample, because the homes were self-selected. The residents had approached EHP for assistance in determining whether their health might be affected by their proximity to UNGD sites. The Dylos™ monitor measures and records levels of PM2.5 and PM0.5 every minute for up to 24 h. The data are downloaded daily and readings can continue indefinitely. In the research presented here, indoor air was monitored between 44 and 353 consecutive hours in homes near drilling-related activities. PM is of interest not only because of its association with health risks, but also because it is a surrogate for other substances to which people may be exposed. The Dylos™ particle monitor measures counts of particles per meter cubed and is sensitive to humidity. EPA measures the mass of particles and sets a standard based on 30% humidity. Counts are not directly comparable to mass; therefore scaling factors are needed to compare the data.
Weather patterns and other atmospheric conditions have a well documented effect on the dispersion of air emissions (30). Based on the work of Frank Pasquill, D.Sc., EHP developed a targeted air pollutant dispersion screening model using the following: 1) estimates of UNGD source emissions documented in the literature and from 2012 Pennsylvania Department of Environmental Protection (PADEP) oil and gas inventory reports (31), 2) distance to a hypothetical residence, and 3) the impact of local (Pittsburgh) weather patterns. This resulted in a situationally relevant assessment of the dispersion of emissions in areas around UNGD activity (32).
Findings
In reviewing the selected studies on air emissions and health impacts from UNGD, we looked at the methods used to collect air samples and the averaging time used to analyze the sampling results. In studies a–d (Table 2), results were compared primarily to federal and state standards and guidelines to determine the impact of air emissions on human health. EHP found evidence of inadequate sampling protocols for capturing meaningful data. We also found inconsistencies between researchers’ interpretations of findings on exposures based on current standards and their potential impact on health.
Sampling and averaging methods
A typical method of air sample collection is the use of Summa canisters. These canisters collect air emissions over a 24-h period. Levels of pollutants are thus averaged over the 24-h period. Spikes in emissions within that period cannot be quantified.
The research in West Virginia and in Pennsylvania had (or will have in the case of one PA study) some shorter-term averaging. McCawley, in West Virginia, reported 1-min average samples for four criteria pollutants, 1-h averages for PM samples and 2-h averages for organic carbon and elemental carbon samples. These shorter-term results allowed McCawley to determine high levels of fluctuations in emissions. Unfortunately, there are few meaningful standards to which his results can be compared because current federal standards do not accurately address periods of short-term peak exposures.
In its 2010 Southwestern Pennsylvania study, the Pennsylvania DEP used 7-h sampling periods (six periods within a week at each of the five sites). The gas chromatography/mass spectrometry (GC/MS) instrument sampled 5 min/h for each 7-h period. The open-path sampler (OP-FTIR) reported the highest 2-min value of any detected compound per sampling period (reported as approx. 8 h). If the compound was detected at a high enough level during the sampling session to produce an average greater than the method detection limit (MDL), that average was also reported.
For the Health Consultation in Garfield Co. (2010), Summa canisters and 2,4-dinitrophenylhydrazine (DNPH)-coated cartridges were used for 24-h collection periods. McKenzie et al. collected 24-h samples with Summa canisters and sampled ambient air once every 6 days. The City of Fort Worth (2011) sampled once every 3 days with DNPH cartridges and Summa canisters for 24-h periods and screened for fugitive emissions.
The proposed PA DEP long-term study in Southwestern Pennsylvania will collect data for 1 year. Periodic sampling with 24-h canister samplers will be used for hazardous air pollutants (HAP), VOCs, and carbonyls. Methane and nonmethane compounds will be detected with Forward-looking infrared (FLIR) VOC imaging technology. Continuous or semi-continuous samplers will be used for ozone, NOx, CO, H2S, and PM2.5 for comparison to NAAQS. The review above illustrates the variety of measurement approaches and reference values used by researchers. In studies a to d, the authors refer to difficulties in assessing health risks for various reasons (Table 2). McCawley (2013) referred to the variability in exposures, the short-term duration of specific activities, and the long-term averaging period for NAAQS. In the Garfield County Study (2010) the researchers found that some of the necessary chronic inhalation toxicity values were not available and that complex mixtures could not be adequately assessed. Both McKenzie et al. (2012) and the City of Fort Worth (2011) found no appropriate method for assessing acute exposures. This will be addressed in the discussion section, but it is worth noting here that there is no relationship among the form of data collection, the standards applied, and the physiological effects of exposure to toxins.
The problem of risk assessment of mixtures (Hazard quotient/Hazard index)
To date, most studies on health risks associated with UNGD rely on 24-h canister samples to calculate a Hazard index (HI). Acute effects most often occur after a few minutes or an hour of exposure. In fact, the 24-h average exposures are not even predictive of the 24-h maximum exposure. The 24-h averages underestimate exposures by a factor of two to three (see Figure 1). The problem is further complicated by the interactions among multiple agents in the body that can produce greater than additive effects.
Figure 1
PM2.5 Measurements collected in House 7 from March 7, 2013 to March 14, 2013 (counts/0.01 cubic feet).
Dylos Readings for PM 2.5 from March 7, 2013 to March 14, 2013. a, am; p, pm.
An illustration of the problem using published data
For this example, we chose four of the chemicals used in the UNGD industry that were measured at one site, at multiple times, and reported to the PA DEP. They included acrolein, benzene, toluene, and chloromethane (28). When we attempted to evaluate the interaction using the Hazard quotient (HQ) and reported average, the effect of omitting the highest values became apparent.
The HQ for each chemical can be established by taking the chemical measurement and dividing it by the level at which no adverse effects are expected (referred to here as the standard and derived from standards or guidance values found in IRIS). The HQs are added together to form the HI. If the sum is ≤1.0 the mixture is not considered to produce a harmful interaction.
Example
$$\frac{\text{acrolein}}{\text{standard}}+\frac{\text{benzene}}{\text{standard}}+\frac{\text{toluene}}{\text{standard}}+\frac{\text{chloromethane}}{\text{standard}} \leq 1.0$$
Using a sample of averaged canister data from the PADEP Marcellus Shale Short-Term Air Sampling Report, the calculation is as follows (measurements in μg/m3) (28):
$$\frac{3.7}{6.9}+\frac{0.35}{28.8}+\frac{0.94}{3770}+\frac{1.40}{1030}=0.55$$
Measured chemical amounts are reported in Appendix A, p. 36. RfCs are found in Appendix E, p. 45.
The result is <1.0, which would lead to the conclusion that the mixture is not likely to produce pathophysiologic effects. However, this calculation is not an accurate way to measure acute toxicity. The standards used are relevant to acute exposures, but the measurements are 24-h average emissions. These averages underestimate the acute exposures by a factor of 2 to 3. The correct HI is much greater than can be determined using the conventional approach.
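To make the arithmetic explicit, here is a minimal Python sketch of the HQ/HI calculation above. The measurements and reference concentrations are the values quoted from the PADEP report; the 2–3× scaling applied at the end is purely illustrative of the paper's argument about 24-h averages, not a measured quantity.

```python
# Hazard Quotient / Hazard Index arithmetic for the four chemicals above.
# Units: ug/m^3 for both the 24-h average measurements and the reference levels.
measurements = {"acrolein": 3.7, "benzene": 0.35, "toluene": 0.94, "chloromethane": 1.40}
reference    = {"acrolein": 6.9, "benzene": 28.8, "toluene": 3770, "chloromethane": 1030}

hazard_quotients = {c: measurements[c] / reference[c] for c in measurements}
hazard_index = sum(hazard_quotients.values())
print(round(hazard_index, 2))        # 0.55 -> below the 1.0 screening threshold

# Illustrative only: if 24-h averages understate short-term peaks by ~2-3x,
# the same mixture can exceed the threshold during acute episodes.
print(round(hazard_index * 2.5, 2))  # 1.38 with an assumed mid-range factor of 2.5
```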
Evidence of short-term high values of air contaminants: particulate matter
EHP used Dylos™ air particle monitors to assess indoor air quality in homes near UNGD. The air monitor records real-time levels of PM2.5 and PM0.5 each minute for up to 24 h. The Dylos™ monitor records counts of particles 2.5 microns and larger and of particles 0.5 microns and larger. By contrast, EPA measures the mass of particles 2.5 microns and smaller to avoid having heavier particles distort the data. Given that the Dylos™ monitor counts particles, a few larger particles will not affect the data. In both cases relative humidity is a factor to control. The houses in which data were collected represent an opportunity sample near UNGD sites. In the data, we saw intervals with extremely high values. To understand the frequency of these high PM counts, we established a standard for “peak exposure” by taking the median reading for each house (Table 3) and from that finding the median for all houses. The original data came from 14 homes (a total of 2117 h).
Table 3
Number of hours monitored and the median number of PM2.5 counts per house (counts/0.01 cubic feet).
We found that the median value for all houses combined was 50. This median value was then multiplied by three to establish the criterion for a “peak” exposure. The minimum “peak exposure” value for this study was established at 150 counts of PM2.5. We then calculated the number of peaks at each house and the percent of hours with peak exposures. The particle monitor data in Table 4 show that peaks over 150 counts can occur over 30% of the time in a given house (33).
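As a rough illustration of how this peak criterion can be computed, here is a short Python sketch. The data structure and the definition of a “peak hour” (any minute-level reading above the threshold) are our assumptions for the sketch, not EHP's published code.

```python
import numpy as np

# house_counts: dict mapping a house id to a 1-D array of minute-level
# Dylos readings (counts/0.01 cubic feet).
def peak_summary(house_counts, multiplier=3):
    house_medians = {h: np.median(np.asarray(c)) for h, c in house_counts.items()}
    overall_median = np.median(list(house_medians.values()))   # 50 in the paper
    threshold = multiplier * overall_median                     # 150 in the paper
    pct_peak_hours = {}
    for h, c in house_counts.items():
        minutes = np.asarray(c)
        hours = minutes[: len(minutes) // 60 * 60].reshape(-1, 60)  # group minutes into hours
        pct_peak_hours[h] = 100 * np.mean(hours.max(axis=1) > threshold)
    return threshold, pct_peak_hours
```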
Table 4
Peak PM2.5 count values for each house, number of hours,% total hours, times of day, and maximum peak value (counts/0.01 cubic feet).
Attempts to capture these peaks with 24-h Summa canisters, through periodic or one-time spot sampling (under 24 h) or after a complaint has been filed, will most often miss times of peak exposure. Even with continuous monitoring such as ours, averaging of the peaks with the lower levels of PM obscures the most important feature of the data from a public health perspective because high level exposures can cause the most physiological harm (14). Only through continuous, real-time monitoring with short reporting periods, will peaks likely be captured.
Fluctuations in indoor PM levels are expected, regardless of outside activity, and can be the result of cooking, vacuuming, and children at play. The duration, magnitude, and timing of some of the peaks seen in this study, however, could not be readily explained by normal daily activity.
Research on indoor and outdoor PM levels near highways confirms the relationship between outside and indoor particle pollution. Fuller et al. found both indoor and outdoor particle levels to be the highest <100 m from the highway, whereas both indoor and outdoor levels were lowest in and around homes more than 1000 m from the highway (34). The researchers concluded that outdoor particle pollution was “the most important predictor of indoor [particle number concentration]” (34). Other significant predictors of indoor particle levels cited by the authors included temperature, weekday, time of day, wind speed, and wind direction.
Air pollution dispersion model estimates
The EHP model looks at the estimated impact of one emissions source, while in many cases a residence may have more than one source within a radius of two to three miles.
In order to estimate the effect of local weather conditions on ground level exposures, 2012 hourly weather data reported at the Pittsburgh International Airport (wind speed, wind direction, and cloud cover) were applied to the air screening model developed by EHP (32). A single VOC emission level of 300 g/min from a compressor station was used for the point source. The influence of local air movement and vertical dilution (mixing depth) on the levels of ambient air emissions one mile from a surface source in part explained periods of peak exposures.
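EHP's screening model itself is not reproduced in this paper, but the following sketch shows the kind of ground-level Gaussian plume estimate that Pasquill-style dispersion screening produces for a 300 g/min source at roughly one mile. The Briggs rural dispersion coefficients, the wind speeds, and the choice of stability classes are generic textbook assumptions used for illustration only, not EHP's actual model.

```python
from math import pi

Q = 300.0 / 60.0   # source strength: 300 g/min -> 5 g/s
x = 1609.0         # downwind distance: ~1 mile in metres

def sigmas(x, stability):
    # Assumed Briggs open-country dispersion coefficients for two classes.
    if stability == "D":   # neutral conditions
        return 0.08 * x * (1 + 0.0001 * x) ** -0.5, 0.06 * x * (1 + 0.0015 * x) ** -0.5
    if stability == "F":   # very stable night-time conditions
        return 0.04 * x * (1 + 0.0001 * x) ** -0.5, 0.016 * x * (1 + 0.0003 * x) ** -1
    raise ValueError("only classes D and F are sketched here")

def centerline_ug_m3(u, stability):
    # Ground-level source and receptor, plume centerline.
    sy, sz = sigmas(x, stability)
    return 1e6 * Q / (pi * u * sy * sz)

print(round(centerline_ug_m3(3.0, "D")))   # ~85 ug/m^3 with a 3 m/s wind
print(round(centerline_ug_m3(1.0, "F")))   # much higher under calm, stable conditions
```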
The modeled findings shown in Table 5 indicate that ambient VOC concentrations are underestimated when averages are used to evaluate the health risk associated with a source (as is recommended in the “Form” of the NAAQS air monitoring strategy). When the “midnight to midnight” 24-h periods were divided into 6-h intervals, the scale and frequency of this underestimation of exposure risk became apparent. About 10% of the intervals for downwind locations will produce exposures two to three times higher than the value estimated using the NAAQS form (Table 5). If VOC concentrations were averaged over a 1-h rather than a 6-h period, the discrepancy would be even greater.
Table 5
Effects of averaging the variability of exposures that occur in 6 h increments, for each month of the year.a
The projected effect on indoor air
A house with one air change per hour would experience 75% of the outdoor ambient air after 3 h and 98% after 6 h. Further, even if the ambient air is reduced to the unlikely level of zero, it would require 3 h for the indoor concentration to fall to 25% of the maximum. Thus, for a significant portion of each month, residents downwind from pollution sources experience levels of pollution inside their houses that are higher than the monthly averages. These are potentially significant exposures from a physiological standpoint. The uptake of outdoor pollutant into house air is determined by assuming that the house has one air change per hour with instantaneous mixing, such that at the end of 1 h, the concentration inside of the house will be 1/2 the outside concentration. After 2 h, the concentration will be 75% of the outside and each hour the indoor-outdoor difference is reduced by one half. The clearing of the pollutant follows the same assumption.
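The halving rule described above takes only a few lines to write down; this is a minimal sketch of that bookkeeping, not a ventilation model.

```python
# One air change per hour with instantaneous mixing: each hour the
# indoor-outdoor difference is halved. Values are fractions of the
# (assumed constant) outdoor concentration.
def indoor_fraction(hours, indoor0=0.0, outdoor=1.0):
    indoor, series = indoor0, []
    for _ in range(hours):
        indoor += 0.5 * (outdoor - indoor)
        series.append(indoor)
    return series

print(indoor_fraction(6))                             # 0.5, 0.75, 0.875, ... ~0.98 after 6 h
print(indoor_fraction(3, indoor0=1.0, outdoor=0.0))   # clearance: 0.5, 0.25, 0.125
```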
Discussion
When evaluating acute responses to air toxics, it is important to understand the physiological and cellular responses to short-term exposures because inhalation or ingestion of a toxic agent can cause effects within minutes (35). The health sequelae seen near UNGD sites include respiratory, neurologic, and dermal responses as well as vascular bleeding, abdominal pain, nausea, and vomiting. Given the pathophysiologies of these acute toxic responses, it is apparent that current monitoring protocols at UNGD sites are inadequate to ensure safety.
When air emission levels are highly variable, the following typically collected measurements are not relevant to individual health impacts: periodic collection of 24-h samples, tons released per year, and hourly averages per day, per week, or per year. Instead, real-time measures of patterns of exposures are needed, and these must include peak levels, durations, and components of mixtures. The NAAQS compliance monitoring criteria (Table 1) do not provide sufficient information to assess human health risks from acute episodes of exposures. A relevant example of appropriate, real-time monitoring at industrial sites that abut residential areas is The Benzene and other Toxics Exposure (BEETEX) Study developed by the Houston Area Research Center (HARC) (36). The purpose of the study was to identify exposure to air toxics in nearby residential areas and to attribute air toxics to specific sources. The methodology for monitoring and data analysis is in development for the 2014 study, with the goal of identifying “cost-effective, state-of-the-art neighborhood scale monitoring networks …. the improvement of emissions inventories, the conduct of epidemiological studies for air toxics, and ultimately the enforcement of regulations” (36).
Others have demonstrated the mismatch between typical environmental compliance monitoring on the one hand, and health risk evaluation on the other. The Minnesota Department of Health (MDH), in particular, has addressed this problem with respect to ground water utilized for drinking. MDH has revised its Health Risk Limits (HRL) protocol as part of a concerted effort to provide conservative, health protective guidelines that respond to sensitive and highly exposed populations. The Minnesota HRLs respond to the relationship between the timing and duration of exposure as well as the potential adverse effects. The HRLs are intended to be protective for a range of adverse effects for a given duration of exposure. In addition, MDH’s revised risk limits address the problem of multiple exposures – whether exposure from several pathways or from multiple chemicals – by using an exposure decision tree in conjunction with site-specific information. In the revised rules, MDH includes methods that risk managers can use to sum up the risks from multiple chemicals that share a common health endpoint in order to assess the combined health risk at the site being evaluated. MDH typically utilizes this approach, but if specific data about a mixture are available, other more targeted approaches are likely to be preferable (37).
The form of current standards
The central problem identified in this paper is that at sites where it appears that health effects are produced by UNGD, toxic emissions are often not being measured or not detected at levels deemed dangerous. Our concern is that this may be an artifact of the sampling methodologies and analyses currently in use. An example of how appropriate monitoring and sampling can reveal otherwise hard to capture variations can be found in a study of woodsmoke emissions in the Adirondack region of New York State (38). This rural region has a very limited air quality monitoring network, yet residents can experience multi-day and/or sub-daily pollution loading that can be intense. Given that monitoring sites are so widely spaced and local hourly impacts cannot be captured, these populated areas face significant public health pollution threats that the regulatory system does not respond to or understand. However, when researchers used the appropriate equipment and methods, they quickly discovered serious air quality problems. In this example, a model that identified likely “hotspots” using geographic and demographic data was employed. Then mobile monitoring equipment and procedures as well as stationary monitoring sites were used to collect real-time data.
When we examine the research summarized in Tables 2a–f, we find a common deficiency in the data collection, that is, the inability of commonly used methods to capture episodic or significant variability. Specifically, as we have already noted, many sampling methods fail to characterize and quantify peaks in emissions and potential exposures. Looking at Tables 3–5 as well as Figures 1 and 2, it becomes apparent that exposures do, in fact, become quite high relative to median or mean values. These spikes are inconsistent with the characterization of low to negligible risk.
Figure 2
(A, B) Demonstration of the variability in dilution of 300 g/min VOC emissions from a source one mile away, in 6-h increments^a, modeled using Pittsburgh International Airport weather data.
^a Calculations are based on July 2012 weather data from the Pittsburgh International Airport. The 6-h increments for the graphs above are broken down as follows: night: 12 midnight to 6:00 am; morning: 6:00 am to 12 noon; afternoon: 12 noon to 6:00 pm; evening: 6:00 pm to 12 midnight.
Currently, compliance with NAAQS and state standards is the cornerstone of safety regulation of UNGD. These standards are designed to monitor compliance over a region, but not over individual sites. A review of the form of the application of the NAAQS illustrates the problem. The forms of the six criteria pollutant standards are similar to other air monitoring guidelines. Compliance with each is based on average findings typically collected at 3-day intervals. Performance criteria are based on the number of times the standard is exceeded each year.
These standards have been developed to reliably determine when a source is repeatedly out of compliance with permitted emissions. The regulatory assumption is that the variations in ambient air levels are negligible. The findings in this report show that the variability of emissions in UNGD is extreme and that assessing this variability is critical to understanding health responses. Of the six studies evaluated here, only McCawley (27) measured in “real time” and reported finding high levels of fluctuation in emissions. McKenzie et al. reported health risks from short, subchronic but high exposures. However, they found that there were no appropriate measurements for assessing effects from acute exposures. In contrast, the Pennsylvania DEP report found nothing above NAAQS or other levels of concern. It should be noted that even if real-time equipment is deployed, given the high variability of emissions based on the stage of UNGD, care must be taken to use the appropriate equipment at the appropriate time to ensure accurate and meaningful data collection.
HI and PM – synergistic response
Underlying current standards is the assumption that each toxic agent in air emission mixtures acts independently when it is inhaled or ingested into the body. The ratios of the average ambient air level to the standards are summed in an HI (EPA.gov/airtoxics). At UNGD sites, this assumption is negated by the fact that PM is generally present at all sites; and it has been demonstrated that PM increases the amount of absorbed toxin by increasing transport into the deep lung. The surface area of the particle is what drives this effect, producing a greater than additive, synergistic response (39).
EHP continuously measured particulate matter at 14 houses near UNGD in southwestern Pennsylvania. The monitoring periods ranged from 44 to 353 h. EHP found a range of 1 to 57 h (0.5% to 34.3%) with peak values over 150 cts/0.01 cubic feet. The findings in the literature and in EHP’s PM monitoring indicated that episodes of high values were typical in gas fields. In the EHP data, peak values occurred at varying times of day and night. Figure 1 illustrates these results.
Meteorological impacts
Local weather conditions affect the dispersion of air pollutants from industrial sources (31). Figure 2 shows modeled estimated exposures from a source of VOCs at 6-h intervals for 30 days. The chart reflects only the effects of weather conditions and illustrates that weather conditions alone can cause extreme variation in exposures at ground level. The 6-h exposures vary from 25 to over 200 μg/m3. As expected, the monthly average 6-h exposure ranged from 43 to 89 μg/m3, and the 90th percentile ranged from 123 to 206 μg/m3. Both Figures 1 and 2 help make the argument that continuous measures, in conjunction with weather data, are needed to identify periods of extreme exposure.
Mixtures
The variety of point source types and the combinations of chemical gases present at UNGD sites complicate the assessment of health risk. The Commonwealth of Pennsylvania requires that certain permitted facilities report yearly emissions of 13 compounds to their oil and gas inventories. In 2012, there were 214 reporting sites in Washington County, PA. These included 196 well pads, 14 compressor stations, two gas processing plants, a booster station, and an interconnecting station. These installations are connected by pipelines that are under pressure and are vented as necessary. Table 6 shows a portion of the PA DEP emissions inventory data from the 214 reporting sites in Washington County (40).
Table 6
Seven most prevalent chemicals emitted in 2012 across all reported sites in Washington County, PA (total, median and maximum by weight/tons per year).a
Examining the discrepancy between the median and maximum values, it is easy to see that sites vary significantly in their emissions. The data show concurrent releases of multiple compounds (Table 6). Several of these have known interactions in the body, for example VOCs and particulates. The interactions with inhalable particulates, found at 110 of the 214 sites, are of concern because the doses increase synergistically when PM combines with air toxins. Thus, the commonly used HI is insufficient to evaluate the health impact of the mixtures because it uses average exposures and reference doses based on a single exposure to an agent. In this case, HI is also insufficient because the duration of the typical averaging time used to evaluate exposure is longer than the duration of concern. These findings show that the current protocols used to evaluate safety are not sufficient and that a change is needed.
Conclusion
Several factors should be included in any measurement strategy. First, based on the analysis presented in this paper, it is clear that the use of current standards is not appropriate for good pathophysiological evaluation, and consequently for good public health protection. The currently used methods of data collection also cannot provide the necessary data for determining an exposure’s composition, intensity, duration, or frequency.
In sum, our findings indicate the presence of peak emissions occurring near UNGD, which may lead to extreme exposures among people in close proximity to these sites. Furthermore, these exposures can be exacerbated by local weather conditions and by the presence of particulate matter. Exposures are highly variable and can be difficult to monitor. Moreover, current monitoring efforts and health standards do not adequately track these events, though health reports from persons living near these sites are consistent with episodic exposure (EHP, Earthworks). The risk of developing chronic diseases due to exposures, especially by vulnerable populations, has yet to be determined. Revisions to health standards are necessary to protect public health in regions of UNGD. Toxicity values must be developed for shorter exposure durations for residents in situations other than emergencies. Research is also needed to evaluate the health effects of short, repeated, higher than background exposures (Table 7).
In order to overcome limitations of sampling methodologies commonly used to gauge risks, we suggest that a combination of strategies be adopted because no single sampling method can accurately capture all of the essential data. Finally, realistic reference values that focus on the potential pathophysiologic effects caused by exposures are needed. In the re-examination of reference values for water pollutants, Minnesota’s Department of Health provides a good example to emulate.
Table 7
Assessment of sampling methods for determining pathophysiological impacts from air pollution.
In order to properly evaluate and respond to the public health problem posed by UNGD activities, we suggest a new strategy for collecting air data and interpreting findings. The following three components ought to be at the center of this new strategy:
• continuous measures of a surrogate compound to show periods of extreme exposure;
• a continuous screening model based on local weather conditions to warn of periodic high exposures; and
• comprehensive detection of chemical mixtures using canisters or other devices that capture the major components of the mixtures.
Acknowledgments
We wish to thank Norman Anderson MPH; Sandra Baird D.Sc.; Phillip Johnson MES, MPH, PhD; Michael Kelly PhD; and Tyler Rubright for their assistance in the development of this article.
Appendix A Glossary of abbreviations
AEGL
acute exposure guideline level
AERR
air emissions reporting requirement
AMCV
air monitoring comparison values
ATSDR
Agency for Toxic Substances and Disease Registry
BEETEX
benzene and other toxics exposure study
BTEX
benzene, toluene, ethylbenzene, and xylene
DEP
Department of Environmental Protection
DNPH
2,4-dinitrophenylhydrazine
DOH
Department of Health
EC
elemental carbon
EHP
Environmental Health Project
EPA
U.S. Environmental Protection Agency
ERPG
Emergency Response Planning Guidelines
ESL
effects screening levels
FLIR
forward-looking infrared camera
GC/MS
gas chromatography-mass spectrometry
HAP
hazardous air pollutant
HARC
Houston Area Research Center
HI
hazard index
HQ
hazard quotient
HRL
health risk limits
IRIS
EPA integrated risk information system
IUR
Inhalation unit risk
LCL
lowest comparison level
UNGD
unconventional natural gas development
MDH
Minnesota Department of Health
MDL
method detection limit
NAAQS
National Ambient Air Quality Standards
NATA values
National-Scale Air Toxics Assessment
NEI
National Emissions Inventory
OC
organic carbon
OP-FTR
open-path Fourier transform infrared spectrometer
PA
Pennsylvania
PID
photo-ionization detector
PNC
particle number concentration
PM
particulate matter
REL
reference exposure level
RfC
reference concentrations
SNMOC
speciated non-methane organic compounds
TCEQ
Texas Commission on Environmental Quality
TEOM
tapered element oscillating microbalance, a particulate monitor
VOC
volatile organic compounds
References
• 1.
The United States Environmental Protection Agency, Office of the Inspector General. EPA needs to improve air emissions data for the oil and natural gas production sector. February 20, 2013. Report No. 13-P-016.
• 2.
Fryzek J, Pastula S, Jiang X, Garabrant DH. Childhood cancer incidence in Pennsylvania counties in relation to living in counties with hydraulic fracturing sites. J Occ Env Med 2013;55:796–801.
• 3.
Ethridge, S. Shannon Ethridge to Mark R. Vickery. Texas Commission on Environmental Quality. Interoffice memorandum. Available at: http://www.tceq.state.tx.us/assets/public/implementation/barnett_shale/2010.01.27-healthEffects-BarnettShale.pdf.
• 4.
Mitka M. Rigorous evidence slim for determining health risks from natural gas fracking. J Am Med Assoc 2012;307:2135–6.
• 5.
Groat CG, Grimshaw TW. Fact-based regulation for environmental protection in shale gas development. The Energy Institute, University of Texas at Austin. 2012. Finding effects or associations: Hill E. Working paper: Unconventional gas development and infant health: evidence from Pennsylvania. The Charles H. Dyson School of Applied Economics and Management, Cornell University. July 2012.
• 6.
Bamberger M, Oswald RE. Impacts of gas drilling on human and animal health. New Solutions 2012;22:51–77.
• 7.
Witter R, Stinson K, Sackett H, Putter S, Kinney G, Teitelbaum D, et al. Potential exposure-related human health effects of oil and gas development: a literature review (2003–2008). Denver, CO: University of Colorado Denver, Colorado School of Public Health, 2008.
• 8.
Colborn T, Kwiatkowski C, Schultz K, Bachran M. Natural gas operations from a public health perspective. Hum Ecol Risk Assess 2011;17:1039–56.
• 9.
Revised Draft SGEIS on the Oil, Gas and Solution Mining Regulatory Program (September 2011). New York State DEC. Chapter 6, 6–106, Table 6.7.
• 10.
Richardson N, Gottlieb M, Krupnick A, Wiseman H. The state of the state shale gas regulation. Resources for the future. RFF Report. June 2013, p. 87.
• 11.
Brook RD, Rajagopalan S, Pope CA, Brook JR, Bhatnagar A, et al. Particulate matter air pollution and cardiovascular disease: an update to the scientific statement from the American Heart Association. Circulation 2010;121:2331–78.
• 12.
Wellenius GA, Burger MR, Coull BA, Schwartz J, Sus HH, et al. Ambient air pollution and the risk of acute ischemic stroke. Arch Intern Med 2012;172:229–34.
• 13.
Bev-Lorraine T, Dreisbach RH, editors. Dreisbach’s handbook of poisoning: prevention, diagnosis and treatment. 13th edition. New York: Taylor and Francis, 2001.
• 14.
Delfino R, Zeiger RS, Seltzer JM, Street DH, McLaren CE. Association of asthma symptoms with peak particulate air pollution and effect modification by anti-inflammatory medication use. Environ Health Perspect 2002;110:A607–17.
• 15.
Darrow LA, Klein M, Sarnat JA, Mulholland JA, Strickland MJ, Sarnat SE, et al. The use of alternative pollutant metrics in time-series studies of ambient air pollution and respiratory emergency department visits. J Expo Sci Environ Epidemiol 2011;21:10–9.
• 16.
Mills NL, Tornqvist H, Gonzalez MC, Vinc E, Robinson SD, Soderberg S, et al. Ischemic and thrombotic effects of dilute diesel-exhaust inhalation in men with coronary heart disease. N Engl J Med 2007;357:1075–82.
• 17.
Paretz A, Sullivan JH, Leotta DF, Trenga CA, Sands FN, Allen J, et al. Diesel exhaust inhalation elicits acute vasoconstriction in vivo. Environ Health Perspect 2008;18:837–942.
• 18.
Southwest Pennsylvania Environmental Health Project. EHP’s Latest Findings Regarding Health Data. http://www.environmentalhealthproject.org/wp-content/uploads/2013/09/6.13.13-general.pdf. See also, Earthworks. Subra W. Results of Health survey of current and former DISH/Clark, Texas Residents. http://www.earthworksaction.org/library/detail/health_survey_results_of_current_and_former_dish_clark_texas_residents/#.UsG_EihCR0M.
• 19.
Armendariz A. Emissions from natural gas production in the Barnett Shale area and opportunities for cost-effective improvements. Austin, TX: Environmental Defense Fund. Version 1.1 Available at: http://www.edf.org/sites/default/files/9235_Barnetat_Shale_Report.pdf.
• 20.
Eastern Research Group, Inc. and Sage Environmental Consulting, LP. City of Fort Worth natural gas air quality study: final report. 2011. Available at: http://www.edf.org/sites/default/files/9235_Barnett_Shale_Report.pdf. July 13, 2011.
• 21.
Steinzor N, Subra W, Sumi, L. Gas patch roulette: how shale gas development risks public health in Pennsylvania. Available at: http://www.earthworksaction.org/library/detail/gas_patch_roulette_full_report#.Uc3MAm11CVo.
• 22.
Steinzor N, Subra W, Sumi L. Investigating links between shale gas development and health impacts through a community survey project in Pennsylvania. New Solutions 2013;23:55–84.
• 23.
Colorado Department of Public Health and Environment. Public health implications of ambient air exposures as measured in rural and urban oil and gas development areas – an Analysis of 2008 Air Sampling Data, Garfield County, Colorado, 2010.
• 24.
McKenzie LM, Witter RZ, Newman LS, Adgate JL. Human health risk assessment of air emissions from development of unconventional natural gas resources. Sci Total Environ 2012;424:79–87.
• 25.
Hill E. Working paper. Unconventional gas development and infant health: evidence from Pennsylvania. The Charles H. Dyson School of Applied Economics and Management, Cornell University. July 2012.
• 26.
McKenzie LM, Guo R, Witter RZ, Savitz DA, Newman LS, Adgate JL. Birth outcomes and maternal residential proximity to natural gas development in rural Colorado. Environ Health Perspect 2014; DOI:10.1289/ehp.1306722.
• 27.
McCawley M. Air, noise, and light monitoring results for assessing environmental impacts of horizontal gas well drilling operations. Prepared for the Department of Environmental Protection, Division of Air Quality, May 3, 2013.
• 28.
Southwestern Pennsylvania Marcellus Shale Short-Term Ambient Air Sampling Report. Pennsylvania Department of Environmental Protection. November 2010.
• 29.
Technical Support Document for Long-Term Ambient Air Monitoring Project Near Permanent Marcellus Shale Gas Facilities Protocol. Pennsylvania Department of Environmental Protection. August 2013.
• 30.
Pasquill F. Atmospheric diffusion: the dispersion of windborne material from industrial and other sources. London: D. Van Nostrand Company, Ltd., 1962.
• 31.
Pennsylvania DEP Inventory of gas activities. Available at: http://www.portal.state.pa.us/portal/server.pt/community/oil_and_gas_reports/20297.
• 32.
How’s the weather?: Natural gas drilling, air pollution and the weather air exposure model. Southwest Pennsylvania Environmental Health Project. November 2013. Available at: http://www.environmentalhealthproject.org/wp-content/uploads/2013/11/Hows-the-Weather_-Long-Home-Air-Guide.compressor-example-11.05.13-.pdf.
• 33.
See www.ehhi.org/reports/woodsmoke/woodsmoke_report_ehhi_1010.pdf, Appendix B for examples of, “normal” levels of PM inside homes.
• 34.
Fuller CH, Brugge D, Williams PL, Mittleman MA, Lane K, Durant JL, et al. Indoor and outdoor measurements of particle number concentration in near-highway homes. J Expo Sci Environ Epidemiol 2013;23:506–12.
• 35.
Giles LV, Barn P, Kunzil N, Romieu I, Mittleman M, et al. From good intentions to proven interventions: effectiveness of actions to reduce the health impacts of air pollution. Environ Health Perspect 2011;119:29–36.
• 36.
The Benzene and Other Toxics Exposure (BEETEX) Study is a field study of exposure to and source attribution of air toxics. Houston Advanced Research Center. http://maps.harc.edu/beetex/About.aspx.
• 37.
Health risk limits for groundwater. State of Minnesota Department of Health. July 11, 2008.
• 38.
Allen GA, Miller PJ, Rector LJ, Brauer M, Su JG. Characterization of valley winter woodsmoke concentrations in Northern NY using highly time-resolved measurements. Aerosol and Air Quality Research 2011;11:519–30.
• 39.
Amdur MO. The response of guinea pigs to inhalation of formaldehyde and formic acid alone and with a sodium chloride aerosol. Int J Air Pollut 1960;3:201–20.
• 40.
“Emission Inventory.” Pennsylvania Department of Environmental Protection. Available at: http://www.dep.state.pa.us/dep/deputate/airwaste/aq/emission/emission_inventory.htm.
Corresponding author: David Brown, Southwest Pennsylvania Environmental Health Project, 4198 Washington Road, Suite 5, McMurray, PA 15317, USA, E-mail: ;
Accepted: 2014-02-14
Published Online: 2014-03-29
Published in Print: 2014-12-06
Citation Information: Reviews on Environmental Health, Volume 29, Issue 4, Pages 277–292, ISSN (Online) 2191-0308, ISSN (Print) 0048-7554,
# Notes (OFVB 10/31): New Kinds of Data
see data.ml
Yes, now we learn about the option type which is the equivalent of Haskell’s Maybe, and is a great way to replace impure exceptions with pure types.
Interesting that (*) is parsed as a comment, and ( * ) is parsed as the multiplication function.
# Questions
## 5
So it looks like in OCaml there’s no special syntactic distinction between tuples and product types. Any constructor of a product is written like it’s a constructor of a tuple. I kind of like this, because, yes, product types and tuples are isomorphic. On the other hand, it’s a little syntactically messy compared to Haskell.
## 7
Interesting that this way of doing safeEval returns None rather than Some None. Exceptions just throw away their context, but I wonder if there’s a way to handle them while preserving the context. |
# What is vinegar?
Vinegar is a dilute aqueous solution of acetic acid, $H_3C-CO_2H$.
And its empirical formula, the simplest whole number ratio representing constituent atoms in a species, is therefore $CH_2O$. With me? Typically we get concentrations of 5–20% $\text{w/w}$.
# Do only black holes emit gravitational waves?
A friend and I are hobby physicists. We don't really understand that much but at least we try to :) We tried to understand what the recently discovered gravitational waves at LIGO are, how they are created and how they have been measured. If I remember correctly, the information we found was that only large/massive objects, for example colliding black holes or neutron stars, emit these. What about smaller objects, e.g. a basketball hitting the ground or an asteroid hitting the earth? Do they also emit gravitational waves? And if not, at which threshold of mass does this happen?
daniel
Gravitational waves (GW) are emitted by all systems which have an 'accelerating quadrupole moment' --- which means that the systems have to be undergoing some sort of acceleration (i.e. a constant velocity is not enough), and they have to be asymmetric. The perfect example is a binary system, but something like an asymmetric supernova is also expected to emit GW.
The total mass of the system doesn't matter [1] in determining whether GW are produced or not. It does determine how strong the GW are. The more massive the system and the more compact they are, the stronger the GW, and the more likely they are to be detectable---of course, how often an event happens nearby is also very important. The examples you give, black holes (BH) and neutron stars (NS), are some of the best sources because they are the most compact objects in the universe.
Another aspect to consider is the detection method. LIGO for example is only sensitive to GW in a certain frequency range (kilohertz-ish), and roughly stellar-mass systems (like binaries of NS and stellar-mass BH) emit at those frequencies. Something like supermassive BH binaries, in wide-separation orbits, emit GW at frequencies of (often) nanohertz --- which are expected to be detected by an entirely different type of method: by Pulsar Timing Arrays.
There is a proposed mission called the Laser Interferometer Space Antenna (LISA) which would detect objects at frequencies intermediate between Pulsar Timing Arrays and ground-based interferometers (like LIGO), and which would detect tremendous numbers of White-Dwarf binaries.
[1] General Relativity (GR), the theory which describes gravity and gravitational waves, has a property called "scale invariance". This means that no matter how massive things are, all of the properties of the system look the same if you scale by the mass. For example, if I run a GR simulation of a 10 solar-mass BH, the results would be identical to that of a 10 million solar-mass BH --- except one million times smaller in length-scales (for example the radius of the event horizon). This means that no matter the total mass of the binary, GW are still produced. It's also very convenient for running simulations... one simulation can apply to many situations!
Gravitational waves are emitted by all masses with accelerating gravitational quadrupole moments, but rarely with sufficient power to be detectable. I will restrict my answer to dealing with merging binary systems, but similar considerations apply to other scenarios just on dimensional arguments.
The power emitted by gravitational waves from a pair of orbiting masses is given by $$P = 1.7\times 10^{54} \frac{M_1^2M_2^2(M_1+M_2)}{R^5}\ \ {\rm W},$$ where $M_1, M_2$ are the masses of the two components in solar masses and $R$ is the separation of the two masses in kilometres. The frequency of the gravitational waves produced occurs at twice the orbital frequency.
To put that in perspective, the larger of the two recent LIGO detections turned 3 solar masses into gravitational wave energy in $\sim 0.2$ s, emitting an average power of $\sim 3 \times 10^{48}$ W. This arose from a pair of 30 solar mass black holes, separated by a few times their Schwarzschild radii (say $4 \times 2 GM/c^2 =$ 360 km). Plugging these numbers into the formula above suggests $P \sim 10^{49}$ W, similar to the estimate based on the mass discrepancy between the black holes before and after they merged. This event was just detectable by LIGO.
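As a quick check on the quoted numbers, the formula can be evaluated directly; the masses and separation used below are the ones assumed in this answer.

```python
# Masses in solar masses, separation in km, power in watts.
def gw_power(m1, m2, r_km):
    return 1.7e54 * m1**2 * m2**2 * (m1 + m2) / r_km**5

print(f"{gw_power(30, 30, 360):.1e} W")   # ~1.4e49 W, consistent with the ~10^49 W estimate
```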
All orbiting pairs of masses give off gravitational waves in this manner. But their masses and orbital separations do not result in significant (detectable) energy losses through gravitational wave emission due to the steep dependencies on mass and separation.
Black holes were once massive stars. In fact they were even more massive stars, since a black hole progenitor loses mass during its lifetime. The reason that black hole binary systems are favourable for gravitational wave detection is that they can get close together before they merge. i.e. There are plenty of stars out there with huge component masses but they cannot be brought close enough together to produce detectable gravitational waves without them first merging. The radius of a typical "normal" star is 5 orders of magnitude larger than the Schwarzschild radius for a black hole of similar mass. Looking at the formula, this means the gravitational waves produced by such a system would be 25 orders of magnitude smaller than if similar mass black holes were merging.
Neutron stars represent an intermediate case. Whilst their radii and hence closest possible orbits are only $\sim 3$ times larger than for black holes, their masses are limited to around $\leq 2 M_{\odot}$. So compared with the 30 solar mass black holes mentioned above this means that the emitted power from a pair of merging $1.5 M_{\odot}$ neutron stars would be down by $\sim 2$ orders of magnitude and so they would only be detectable if they were closer to the Earth by factors of 10.
The crucial parameter here is $(M/R)^5$. If we work in a natural set of units then because the Schwarzschild radius is proportional to mass we can say that if all black holes have $M/R \simeq 1$ (forgetting about spin for a moment), then for a neutron star, $M/R \sim 0.4$ and the power emitted is only $(M/R)^5 \sim 0.01$ that of a pair of equivalent mass black holes. For a normal star like the Sun $M/R \sim 4\times 10^{-6}$ and so the power falls by a factor $\sim 10^{-27}$.
Interestingly, considerations like these suggest that binary black holes of any mass should produce roughly the same power in gravitational waves as they reach the point of merger. However, the waves are produced at very different mass-dependent frequencies. A rule of thumb is that the peak frequency will occur at $\sim \sqrt{G\rho}$, where $\rho \sim 3(M_1+M_2)/4\pi R^3$ is the average density. For a pair of 30 solar mass black holes separated by their Schwarzschild radii $\sqrt{G\rho} \simeq 500$ Hz.
This is bang in the centre of the most sensitive part of the frequency spectrum for the LIGO detectors. Less massive black hole binaries will produce higher frequencies ($\propto M^{-1}$); supermassive black hole mergers or binaries with components with lower $M/R$ will produce gravitational waves at way below the frequencies to which LIGO is sensitive, but for which space-borne interferometers are currently being designed.
As the black holes have a mass that is three times the solar mass, they act as a gravitating mass, and hence every body having mass exerts a gravitational force that is equal to $GM_1M_2$ divided by the square of the distance between them.
• That bit of gravitation theory - Newton's law of universal gravitation - is not relevant to the question. It doesn't predict gravitational waves. General relativity is a more recent and complex theory that better predicts gravity's effects, including black holes and gravitational waves. Newton's law is still good for many less extreme situations, such as strength of gravity on Earth. – Neil Slater Nov 18 '16 at 7:58
• I failed to see how this is answering the question. – Shing Nov 18 '16 at 11:18
Which of the following is ecologically most relevant abiotic factor?
(a) Temperature
(b) Water
(c) Light
(d) Soil
The most ecologically relevant abiotic factor is temperature.
Hence (a) is the correct answer. |
Functions - Domain and Range; Composition
Sol 1 f is defined for all values of x (since f is a polynomial), so the domain of f is . Since the graph of f is a parabola which opens downward with vertex at , the set of y-coordinates for the points on the graph of f consists of all y-values with ; so the range of f is the interval .
Sol 2 f is defined where or , so the domain of f is the interval . Since , for any x in the domain of f; so the range of f is contained in . If , (since ), so the range of f is actually equal to .
Sol 3 f is defined for or , so the domain of f is given by . To find the range of f, we must determine for which y-values the equation has a solution for x. Multiplying both sides of this equation by gives or , so . Therefore the equation has a solution for x iff or , so the range of f is given by .
Sol 4 f is defined wherever (so the square root is defined) and (so the fraction is defined). Solving the inequality or gives , so the domain of f is .
Sol 5 , while .
Sol 6 We can let and , for example.
Sol 7 Since , and therefore and .
Sol 8 is defined where , so gives . Taking the nonnegative square root of both sides gives or . Therefore is the domain of .
Sol 9 is defined where
, so factoring gives the inequality
.
Marking 0,5,3, and -3 on a number line and using the facts that all factors have odd exponents and that ,
we get the following sign chart for :
Therefore the domain of is given by |
## magento2 – How to specify the path of the image in relation to the module in the HTML Knockout view?
I am creating a Magento 2 module, using 2.2.6, that tries to show some information in the payment process, and I am stuck trying to add an image with a path relative to the module.
I am NOT using PHTML in this case, and I am trying not to have to.
FYI: I have read the question How to specify the path of the skin image in the HTML template of Knockout?
And this is to get images related to the theme path, not the module.
## nt.number theory – Does this self-similar sequence have the ratio $(\sqrt2 + 1)^2$?
This is inspired by a math.SE question, where an infinite sequence of pairwise distinct natural numbers $a_1 = 1, a_2, a_3, \dots$ has been defined as follows:
$a_n$ is the smallest number such that $s_n := \sqrt{a_n + \sqrt{a_{n-1} + \sqrt{\cdots + \sqrt{a_1}}}}$ is an integer.
It turns out that this sequence, which by the way is not yet in the OEIS, is in fact a permutation of $\mathbb N$. In addition, the images show that both $(a_n)$ and $(s_n)$ exhibit an interesting self-similarity, with two alternating structures and a ratio that converges rapidly towards, as it seems, $3 + 2\sqrt2 = (\sqrt2 + 1)^2 \approx 5.828427$ (see this other answer). Below I have shown the first $632$ entries in a way that makes it easier to see which numbers generate what is perceived in the images as lines.
``````1,
3, 2,
7, 6,
13, 5,
22, 4,
33, 10, 12, 21, 11,
32, 19, 20,
31, 30,
43, 9, 45, 18, 44, 29,
58, 8, 60, 17, 59, 28,
75, 16, 76, 27,
94, 15, 95, 26,
115, 14, 116, 25,
138, 24,
163, 23,
190, 35, 42, 57, 41, 74, 40, 93, 39, 114, 38, 137, 37, 162, 36,
189, 50, 56, 73, 55, 92, 54, 113, 53, 136, 52, 161, 51,
188, 67, 72, 91, 71, 112, 70, 135, 69, 160, 68,
187, 86, 90, 111, 89, 134, 88, 159, 87,
186, 107, 110, 133, 109, 158, 108,
185, 130, 132, 157, 131,
184, 155, 156,
183, 182,
211, 34, 218, 49, 217, 66, 216, 85, 215, 106, 214, 129, 213, 154, 212, 181,
242, 48, 248, 65, 247, 84, 246, 105, 245, 128, 244, 153, 243, 180,
275, 47, 281, 64, 280, 83, 279, 104, 278, 127, 277, 152, 276, 179,
310, 46, 316, 63, 315, 82, 314, 103, 313, 126, 312, 151, 311, 178,
347, 62, 352, 81, 351, 102, 350, 125, 349, 150, 348, 177,
386, 61, 391, 80, 390, 101, 389, 124, 388, 149, 387, 176,
427, 79, 431, 100, 430, 123, 429, 148, 428, 175,
470, 78, 474, 99, 473, 122, 472, 147, 471, 174,
515, 77, 519, 98, 518, 121, 517, 146, 516, 173,
562, 97, 565, 120, 564, 145, 563, 172,
611, 96, 614, 119, 613, 144, 612, 171,
662, 118, 664, 143, 663, 170,
715, 117, 717, 142, 716, 169,
770, 141, 771, 168,
827, 140, 828, 167,
886, 139, 887, 166,
947, 165,
1010, 164,
1075, 192, 210, 241, 209, 274, 208, 309, 207, 346, 206, 385, 205, 426, 204, 469, 203, 514, 202, 561, 201, 610, 200, 661, 199, 714, 198, 769, 197, 826, 196, 885, 195, 946, 194, 1009, 193,
1074, 223, 240, 273, 239, 308, 238, 345, 237, 384, 236, 425, 235, 468, 234, 513, 233, 560, 232, 609, 231, 660, 230, 713, 229, 768, 228, 825, 227, 884, 226, 945, 225, 1008, 224,
1073, 256, 272, 307, 271, 344, 270, 383, 269, 424, 268, 467, 267, 512, 266, 559, 265, 608, 264, 659, 263, 712, 262, 767, 261, 824, 260, 883, 259, 944, 258, 1007, 257,
1072, 291, 306, 343, 305, 382, 304, 423, 303, 466, 302, 511, 301, 558, 300, 607, 299, 658, 298, 711, 297, 766, 296, 823, 295, 882, 294, 943, 293, 1006, 292,
1071, 328, 342, 381, 341, 422, 340, 465, 339, 510, 338, 557, 337, 606, 336, 657, 335, 710, 334, 765, 333, 822, 332, 881, 331, 942, 330, 1005, 329,
1070, 367, 380, 421, 379, 464, 378, 509, 377, 556, 376, 605, 375, 656, 374, 709, 373, 764, 372, 821, 371, 880, 370, 941, 369, 1004, 368,
1069, 408, 420, 463, 419, 508, 418, 555, 417, 604, 416, 655, 415, 708, 463, 413, 820, 412, 879, 411, 940, 410, 1003, 409,
1068, 451, 462, 507, 461, 554, 460, 603, 459, 654, 458, 707, 457, 762, 456, 819, 455, 878, 454, 939, 453, 1002, 452,
1067, 496, 506, 553, 505, 602, 504, 653, 503, 706, 502, 761, 501, 818, 500, 877, 499, 938, 498, 1001, 497,
1066, 543, 552, 601, 551, 652, 550, 705, 549, 760, 548, 817, 547, 876, 546, 937, 545, 1000, 544,
1065, 592, 600, 651, 599, 704, 598, 759, 597, 816, 596, 875, 595, 936, 594, 999, 593,
1064, 643, 650, 703, 649, 758, 648, 815, 647, 874, 646, 935, 645, 998, 644,
1063, 696, 702, 757, 701, 814, 700, 873, 699, 934, 698, 997, 697,
1062, 751, 756, 813, 755, 872, 754, 933, 753, 996, 752,
1061, 808, 812, 871, 811, 932, 810, 995, 809,
1060, 867, 870, 931, 869, 994, 868,
1059, 928, 930, 993, 929,
1058, 991, 992,
1057, 1056,
1123, 191, ....
``````
Once the data is sorted like this, the patterns seem quite predictable. However, every other block (for example, the penultimate one, which starts with $211 = a_{113}$) has “paragraphs” of lengths $2$ or $3$, except possibly the first. Now one can see by construction that, from one block to the next, the 2–3 pattern of a block is generated by the previous one in a way similar to the “rabbit sequence”, also known as the “Fibonacci word” (https://oeis.org/A005614), by the (essential) rules $3 \to 22,\ 2 \to 323$, plus boundary conditions that are much harder to predict…
So this partly explains self-similarity. But:
How can one prove that the asymptotic ratio is $3 + 2\sqrt2$?
Each block consists of a group of monotone “horizontal” subsequences and a group of monotone “vertical” subsequences. For every other block, they come in roughly “L-shaped” pairs. The numbers of “L shapes” per block are clearly distinguishable, e.g. $8$ of them in the range $n = 49, \dots, 112$ (starting after $a_{48} = 190$) and $19$ for $n = 270, \dots, 630$ (starting after $a_{269} = 1075$, the beginning of the last block). Those numbers $(c_j) = 3, 8, 19, 46, \dots$ seem to form the Fibonacci-like sequence https://oeis.org/A078343 with $$c_j = \frac14 \Bigl[(3\sqrt{2} - 2)(1 + \sqrt{2})^j - (3\sqrt{2} + 2)(1 - \sqrt{2})^j\Bigr],$$ which is another indication in favor of the conjectured ratio, but I am not sure whether the recursion $c_j = 2c_{j-1} + c_{j-2}$ can be shown by induction.
You can also take a look at https://codegolf.stackexchange.com/a/145234/14614, which shows the differences $a_{n+1} - a_n$, and at the image of the inverse map $a_n \mapsto n$ linked in one of the comments. (Note that the isolated point at $a_n = 191$ corresponds to $n = 632$, which is just where my table above stops.)
Both show a lot of beauty, but they also show that self-similarity is somewhat less strict than for the fractal sequences mentioned here.
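For anyone who wants to reproduce the listing, here is a minimal Python sketch of the defining rule (the smallest unused natural number making the nested radical an integer); it is only an illustration, not the code used to produce the data above.

```python
from math import isqrt

def generate(count):
    used, a, s = {1}, [1], 1          # a_1 = 1, s_1 = 1
    while len(a) < count:
        k = 1
        while k in used or isqrt(k + s) ** 2 != k + s:
            k += 1                    # smallest unused k with k + s_{n-1} a perfect square
        used.add(k)
        s = isqrt(k + s)              # s_n = sqrt(a_n + s_{n-1})
        a.append(k)
    return a

print(generate(10))   # [1, 3, 2, 7, 6, 13, 5, 22, 4, 33]
```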
## [ Politics ] Open question: Why do liberals vehemently defend science in relation to climate, but ignore science with respect to gender?
## rt.representation theory – $m$-cycles in $S_n$ modulo an equivalence relation
Let $A$ be the set of all $m$-cycles in $S_n$. Define an equivalence relation $\sim$ on $A$ by declaring $\sigma_1$ related to $\sigma_2$ if $\sigma_1$ is a power of $\sigma_2$ or vice versa. Then the number of equivalence classes is given by $$\frac{n!}{m (n-m)!\, \phi(m)},$$ where $\phi(m)$ is Euler's totient function.
The related OEIS sequence is here:
I want to know more about the combinatorial meaning of these numbers.
I want to know: do these numbers count any familiar object in the representation theory of symmetric groups? I suspect that these numbers are related to the irreducible representation corresponding to the partition $(m, 1, 1, \dots)$.
This is a vague question; if someone can suggest some references, that would be very useful.
Thank you.
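As a sanity check on the stated count, here is a brute-force Python sketch for small $n$ and $m$: it enumerates the $m$-cycles, groups them by the power relation, and compares the number of classes with $n!/(m\,(n-m)!\,\phi(m))$.

```python
from itertools import permutations
from math import factorial, gcd

def cycle_type(p):
    seen, lengths = [False] * len(p), []
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            lengths.append(length)
    return sorted(lengths)

def power(p, k):
    q = tuple(range(len(p)))
    for _ in range(k):
        q = tuple(p[i] for i in q)    # compose p with q
    return q

def phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def count_classes(n, m):
    target = sorted([m] + [1] * (n - m))
    m_cycles = {p for p in permutations(range(n)) if cycle_type(p) == target}
    classes, assigned = 0, set()
    for p in m_cycles:
        if p not in assigned:
            classes += 1
            assigned.update(power(p, k) for k in range(1, m) if gcd(k, m) == 1)
    return classes

n, m = 5, 4
print(count_classes(n, m), factorial(n) // (m * factorial(n - m) * phi(m)))   # both print 15
```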
## Import multiple EXCEL files in relation to each other (Vlookup) in MS SQL Server
Hello to all the experts and readers.
I have several Excel files that are related to each other.
I want to use their data and develop a web application.
First, I need to import them to the SQL server (I think SQL Server is more compatible with Excel files)
But some of them have many records and also relationships. (Vlookup)
How can I import them with their relationships in SQL Server?
Is there any way?
Note that I use SQL Server 2016 and that the Excel file format is 2007-2010, but I also open the Excel file in Microsoft Office Excel 2016.
Thank you.
## General topology – Equivalence relation and topological space.
Let X be a topological space. We define an equivalence relation on X by declaring x ~ x′ if f(x) = f(x′) for every Hausdorff space Y and every continuous function f: X → Y.
(a) Show that ~ is in fact an equivalence relation.
(b) Show that for each continuous function f: X → Y with Y a Hausdorff space, there is a unique
continuous function f′: (X/~) → Y such that f = f′ ∘ p (where p: X → X/~ is the projection).
(c) Show that the quotient space X/~ is a Hausdorff space.
## Theory of elementary sets: How do I prove that this is an equivalence relation?
I need to prove that the following is an equivalence relation. However, I have no idea how to do it. I get stuck on transitivity and symmetry.
Let $m \in \mathbb{N}^+$. Prove that the relation $R$, defined by $$R = \left\{(a, b) \in \mathbb{N} \times \mathbb{N} \mid m \text{ divides } b - a \right\},$$ is an equivalence relation.
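One standard route, sketched here with $k$ and $\ell$ denoting the integers witnessing divisibility (an outline, not a polished write-up):
$$\text{Reflexivity: } a - a = 0 = 0 \cdot m, \text{ so } m \text{ divides } a - a \text{ and } (a, a) \in R.$$
$$\text{Symmetry: if } b - a = km, \text{ then } a - b = (-k)m, \text{ so } m \text{ divides } a - b \text{ and } (b, a) \in R.$$
$$\text{Transitivity: if } b - a = km \text{ and } c - b = \ell m, \text{ then } c - a = (k + \ell)m, \text{ so } (a, c) \in R.$$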
## What is the recurrence relation for the binary search tree?
Write the recurrence relation for the binary search tree; solve it using the iteration method and give the final answer in asymptotic form.
## mysql – Consult two tables without foreign key and relation
I have two tables, `tb_pedido` and `tb_pagamento`; I put an example below with some fictitious data. I need a report that shows the data from `tb_pedido` and `tb_pagamento`, all in a single table as in the third example. I thought about doing a LEFT JOIN but could not get it to work; I just need to add the columns `data_pagamento` and `value_payment` to finish the report.
``````TB_PEDIDO
COD_EMPRESA | COD_FORNECEDOR | DATA_EMISSAO | value_order |
1 | 1 | 11/01/2018 | 1000 |
2 | 2 | 11/02/2018 | 2000 |
TB_PAGAMENTO
COD_EMPRESA | COD_FORNECEDOR | DATA_ENTRADA | DATA_PAGAMENTO | VALUE_PAGAMENT |
1 | 1 | 11/26/2018 | 11/27/2018 | 1000 |
2 | 2 | 11/26/2018 | 11/28/2018 | 2000 |
DESIRED REPORT: TB_PEDIDO joined with TB_PAGAMENTO
COD_EMPRESA | COD_FORNECEDOR | DATA_EMISSAO | value_order | DATA_PAGMENTO | VALUE_PAGAMENT |
1 | 1 | 11/01/2018 | 1000 | 11/27/2018 | 1000 |
2 | 2 | 11/02/2018 | 2000 | 11/28/2018 | 2000 |
``````
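A minimal sketch of the join described above, assuming the two tables are matched on COD_EMPRESA and COD_FORNECEDOR (sqlite3 is used here only to keep the example self-contained; the same LEFT JOIN works in MySQL, and the column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tb_pedido    (cod_empresa INT, cod_fornecedor INT,
                               data_emissao TEXT, valor_pedido REAL);
    CREATE TABLE tb_pagamento (cod_empresa INT, cod_fornecedor INT,
                               data_entrada TEXT, data_pagamento TEXT,
                               valor_pagamento REAL);
    INSERT INTO tb_pedido    VALUES (1, 1, '2018-11-01', 1000), (2, 2, '2018-11-02', 2000);
    INSERT INTO tb_pagamento VALUES (1, 1, '2018-11-26', '2018-11-27', 1000),
                                    (2, 2, '2018-11-26', '2018-11-28', 2000);
""")

# LEFT JOIN keeps every order row even when no payment row matches it.
report = conn.execute("""
    SELECT p.cod_empresa, p.cod_fornecedor, p.data_emissao, p.valor_pedido,
           g.data_pagamento, g.valor_pagamento
    FROM tb_pedido AS p
    LEFT JOIN tb_pagamento AS g
      ON  g.cod_empresa    = p.cod_empresa
      AND g.cod_fornecedor = p.cod_fornecedor
""").fetchall()

for row in report:
    print(row)
```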
## Nvidia graphics card: What does "this port is only for data transfer" mean in relation to USB-C?
The technical specifications of the Asus ZenBook Flip 14 UX461UN and its manual both say this about the USB-C 3.1 port:
This port is only for data transfer.
General question: what does that statement mean? What will the port not do?
Specific question: this laptop comes with a dedicated graphics card capable of 4K output, and I am looking to get a kind of pseudo-docking station through USB-C which would provide 4K video/audio and USB for HID devices. Is this possible, or is the dedicated graphics card not wired to the USB-C port in that way? |
# Suppose you choose marble from a bag containing 2 red marbles, 5 white marbles, and 3 blue marbles. You return the first marble to the bag and choose again. How do you find P(red then blue)?
##### 1 Answer
$\frac{3}{50}$
#### Explanation:
There are 10 marbles. The probability of drawing a red is $\frac{2}{10} = \frac{1}{5}$, white is $\frac{5}{10} = \frac{1}{2}$ and blue is $\frac{3}{10}$.
I can then say the probability of drawing a red, replacing it, and then a blue is:
$P \left(\text{drawing red then blue, with replacement}\right) = \frac{1}{5} \times \frac{3}{10} = \frac{3}{50}$ |
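A quick simulation sketch to sanity-check the product rule (my addition rather than part of the answer; the trial count is arbitrary):

```python
import random

marbles = ["red"] * 2 + ["white"] * 5 + ["blue"] * 3
trials = 200_000

# With replacement, the two draws are independent choices from the full bag.
hits = sum(
    1
    for _ in range(trials)
    if random.choice(marbles) == "red" and random.choice(marbles) == "blue"
)

print(hits / trials)   # should hover around 3/50 = 0.06
```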
Creates a panel for all info boxes so they do not overlap
## Usage
info_panel(..., position = "top right")
## Arguments
...
calls with info elements
position
character giving the position of the panel. Default "top right".
## Value
a div which wraps all your info boxes and displays them in the chosen corner (position) of your Shiny app. |
# COEFFICIENT OF VARIATION
Name:
COEFFICIENT OF VARIATION (LET)
Type:
Let Subcommand
Purpose:
Compute the coefficient of variation of a variable.
Description:
The sample coefficient of variation (CV) is defined as the ratio of the standard deviation to the mean:
$$\mbox{cv} = \frac{s}{\bar{x}}$$
where s is the sample standard deviation and $$\bar{x}$$ is the sample mean.
That is, it shows the variability, as defined by the standard deviation, relative to the mean.
The coefficient of variation should typically only be used for data measured on a ratio scale. That is, the data should be continuous and have a meaningful zero. Measurement data in the physical sciences and engineering are often on a ratio scale. As an example, temperatures measured on a Kelvin scale are on a ratio scale, while temperatures measured on a Celsius or Fahrenheit scale are on interval scales rather than ratio scales. Given a set of temperature measurements, the coefficient of variation on the Celsius scale will be different from the coefficient of variation on the Fahrenheit scale.
The coefficient of variation is sometimes preferred to the standard deviation because the value of the coefficient of variation is independent of the unit of measurement scale (as long as it is a ratio scale). When comparing variability between data sets with different measurement scales or very different mean values, the coefficient of variation can be a useful alternative or complement to the standard deviation.
However, the coefficient of variation should not be used for data that are not on a ratio scale. Also, if the mean value is near zero, the coefficient of variation is sensitive to small changes in the mean. Also, the coefficient of variation cannot be used to compute confidence intervals for the mean.
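As a minimal illustration of the definition above (plain Python rather than Dataplot syntax, with made-up ratio-scale data):

```python
import statistics

y = [19.8, 20.1, 20.4, 19.9, 20.2, 20.0]   # made-up ratio-scale measurements

cv = statistics.stdev(y) / statistics.mean(y)   # sample standard deviation / sample mean
print(cv)
```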
Syntax 1:
LET <par> = COEFFICIENT OF VARIATION <y>
<SUBSET/EXCEPT/FOR qualification>
where <y> is a response variable;
<par> is a parameter where the coefficient of variation value is saved;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.
Syntax 2:
LET <par> = UNBIASED COEFFICIENT OF VARIATION <y>
<SUBSET/EXCEPT/FOR qualification>
where <y> is a response variable;
<par> is a parameter where the unbiased coefficient of variation value is saved;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.
For normally distributed data, an unbiased estimate of the coefficient of variation is
$$\mbox{cv*} = (1 + \frac{1}{4n}) \mbox{cv}$$
where n is the sample size and cv is $$s/\bar{x}$$.
Syntax 3:
LET <par> = LOGNORMAL COEFFICIENT OF VARIATION <y>
<SUBSET/EXCEPT/FOR qualification>
where <y> is a response variable;
<par> is a parameter where the lognormal coefficient of variation value is saved;
and where the <SUBSET/EXCEPT/FOR qualification> is optional.
For lognormally distributed data, a more accurate estimate for the coefficient of variation (based on the population mean and standard deviation of the lognormal distribution) is
$$\mbox{cv}_{\mbox{ln}} = \sqrt{\exp(s_{\mbox{ln}}^2) - 1}$$
where $$s_{\mbox{ln}}^2$$ is the variance of the log of the data.
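Continuing the toy example above, the two adjusted estimates defined on this page could be computed along these lines (again plain Python, not Dataplot syntax):

```python
import math
import statistics

y = [19.8, 20.1, 20.4, 19.9, 20.2, 20.0]
n = len(y)

cv = statistics.stdev(y) / statistics.mean(y)
cv_unbiased = (1 + 1 / (4 * n)) * cv              # normal-theory bias correction

log_y = [math.log(v) for v in y]
s2_ln = statistics.variance(log_y)                # sample variance of the log data
cv_lognormal = math.sqrt(math.exp(s2_ln) - 1)     # lognormal-based estimate

print(cv, cv_unbiased, cv_lognormal)
```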
Examples:
LET CV = COEFFICIENT OF VARIATION Y1
LET CV = COEFFICIENT OF VARIATION Y1 SUBSET TAG > 2
LET CV = UNBIASED COEFFICIENT OF VARIATION Y1
LET CV = LOGNORMAL COEFFICIENT OF VARIATION Y1
Note:
Versions prior to 1994/11 treated this command as a synonym for RELATIVE STANDARD DEVIATION. The relative standard deviation is:
$$\mbox{relsd} = 100 \frac{s}{|\bar{x}|}$$
That is, the relative standard deviation is the absolute value of the coefficient of variation expressed in percentage units.
Note:
Dataplot statistics can be used in a number of commands. For details, enter
Default:
None
Synonyms:
COEFFICIENT VARIATION
Related Commands:
COEFFICIENT OF VARIATION CONFIDENCE LIMIT = Compute confidence limits for the coefficient of variation.
COEFFICIENT OF DISPERSION = Compute the coefficient of dispersion of a variable.
QUARTILE COEFFICIENT OF DISPERSION = Compute the quartile coefficient of dispersion of a variable.
RELATIVE STANDARD DEVIATION = Compute the relative standard deviation of a variable.
MEAN = Compute the mean of a variable.
STANDARD DEVIATION = Compute the standard deviation of a variable.
Applications:
Data Analysis
Implementation Date:
1994/11 (earlier versions use a different definition)
2017/01 Added the UNBIASED COEFFICIENT OF VARIATION
2017/01 Added the LOGNORMAL COEFFICIENT OF VARIATION
Program 1:
LET Y1 = NORMAL RANDOM NUMBERS FOR I = 1 1 100
LET CV = COEFFICIENT OF VARIATION Y1
Program 2:
. Step 1: Create the data
.
skip 25
skip 0
set write decimals 6
.
. Step 2: Define plot control
.
title case asis
title offset 2
label case asis
.
y1label Coefficient of Variation
x1label Group
title Coefficient of Variation for GEAR.DAT
let ngroup = unique x
xlimits 1 ngroup
major x1tic mark number ngroup
minor x1tic mark number 0
tic mark offset units data
x1tic mark offset 0.5 0.5
y1tic mark label decimals 3
.
character X
line blank
.
set statistic plot reference line average
.
coefficient of variation plot y x
Date created: 01/24/2017
Last updated: 01/24/2017 |
# Angular momentum conservation and constant velocity as explanations
• I
I'm confused about situations involving rotating frames in which the angular momentum is conserved and the initial velocity does not change. I'll give an example.
Take a rotating carousel (constant angular velocity) with no friction on it and a ball. At the initial instant the ball has the same velocity as the carousel and is far from the center ##O##. It is then given a radial impulse, so that it also gains a radial velocity.
In an inertial frame the path of the ball is a straight line, while in the rotating frame it is deflected by the Coriolis force.
The angular momentum of the ball is conserved with respect to ##O## in the inertial frame. Nevertheless, I have read many times that the "deviation" in the rotating frame can be explained by the fact that the ball has a greater tangential velocity than the rest of the carousel, so it appears to move faster.
So on the one hand we have that, taking the center of the turntable ##O## as pivot, ##L=mrv_{\theta}=mr^2\omega## is conserved; on the other hand, that the initial velocity ##v= \omega r## does not change.
Neglecting ##m##, saying that ##r^2 \omega## and ##r \omega## are constant is definitely not the same thing. But are both of them conserved in this case?
In this kind of situation are the two description (the one that uses the fact of greater velocity and the other that is based on the conservation of angular momentum) equivalent?
Is there one more correct to use?
andrewkirk
Homework Helper
Gold Member
So on the one hand we have that, taking the center of the turntable ##O## as pivot, ##L=mrv_{\theta}=mr^2\omega## is conserved; on the other hand, that the initial velocity ##v= \omega r## does not change.
Neglecting ##m##, saying that ##r^2 \omega## and ##r \omega## are constant is definitely not the same thing. But are both of them conserved in this case?
##r\omega## is not conserved, because that is the tangential component of velocity, which does not remain constant. As the ball, after release, moves further away from the carousel centre, its linear velocity ##v## remains constant, but the radial component increases and the tangential component decreases, as they are functions of position.
##r^2\omega## is conserved however. Angular momentum is still meaningful, and is still conserved, when a body is not in circular motion.
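A short worked check of that statement (my own addition, using the impact parameter of the straight-line path): in the inertial frame the ball moves in a straight line with constant speed ##v##. Let ##d## be the perpendicular distance from ##O## to that line. Then ##L = m v d## is constant, and since ##L = m r v_\theta## we get ##v_\theta = v d / r##, which decreases as ##r## grows, while ##\omega = v_\theta / r = v d / r^2##. Hence ##r^2\omega = v d## stays constant (conservation of angular momentum), but ##r\omega = v d / r## does not.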
Soren4 |
# Mapping the Universe
Everything we know about the universe we learned from photons. We detect cosmic photons with senses and instruments and from their physical properties we estimate the size, speed, direction, position and composition of each of their sources. In short, cosmic photons allow us to map out the Universe. The maps we now use have been drawn from interpretations of the signals we receive. And these interpretations are based on theories which are founded on the wave model of light.
The main tool used to determine the position, direction and speed of a stellar object is provided by what is called the redshift effect. The redshift effect is simply the change in frequency of light attributed to the Doppler effect and is expected to occur when the emitting source is speeding away from us. The magnitude of the redshift is understood to be proportional to the speed of the source and is used to calculate its distance from us. Maps of the observable universe are made by compiling data received from all observable sources. The problem, if QGD is correct, is that those maps are built on the assumption that light behaves like a wave and that, consequently, the Doppler effect applies. But if, as QGD suggests, light is singularly corpuscular, will a map based on QGD's interpretation of the redshift and blueshift effects agree with the maps based on the wave model of light? Before answering the question we will first discuss how QGD explains the redshift effect.
### Emission Spectrum of Atoms
We have shown that quantum-geometrical space itself exerts a force on an object and that any change in momentum of an object must be an integer multiple of the mass of the object (see QGD optics part 3). That is, for an object $a$ of mass ${{m}_{a}}$, $\Delta \left\| {{{\vec{P}}}_{a}} \right\|=x{{m}_{a}}$ where $x\in {{N}^{+}}$. This applies to the components of an atom that are bombarded by photons. For instance, if $a$ is an electron bombarded by a photon $b$ having mass ${{m}_{b}}$, whose momentum, as we have learned, is equal to ${{m}_{b}}c$, then $a$ will absorb $b$ only if ${{m}_{b}}c=x{{m}_{a}}$. Similarly, the allowable changes in momentum $\Delta \left\| {{{\vec{P}}}_{a}} \right\|=x{{m}_{a}}$ must also apply to the emission of photons by an electron. The allowable changes in momentum determine the emission spectrum of the electrons of an atom.
In the figure above, we have the visible part of the hydrogen emission spectrum. Here the first visible band corresponds to a change in momentum of the electron $a$ by emission of a photon with momentum ${{m}_{{{b}_{i}}}}c=i{{m}_{a}}$. Notice that the lowest possible value, which is at the far end of the spectrum, is given when $i=1$. Each emission line corresponds to an allowable emission of a photon from a hydrogen atom's single electron. In agreement with the laws of motion introduced earlier, each emitted photon has a specific momentum ${{m}_{{{b}_{i}}}}c$ (hence, a specific mass ${{m}_{{{b}_{i}}}}$). For values of $x<i$ and $x>i+3$, which lie respectively towards the infrared and the ultraviolet, the momentum puts the photons outside the boundaries of visible light.
For an atom $a$ having $n$ component electrons ${{a}_{i}}$ in its outer orbits (the ones that will interact most with external photons), where $1\le i\le n$ and each electron has mass ${{m}_{{{a}_{i}}}}$, the emission lines of its component electrons ${{a}_{i}}$ correspond to photons ${{b}_{i}}$ such that ${{m}_{{{b}_{i}}}}c={{x}_{i}}{{m}_{{{a}_{i}}}}$, and its spectrogram is the superposition of the emission lines of all its electrons. An example of the superposition of the emission spectra of the electrons of iron is shown in the illustration below. Note that an electron can have only one change in momentum at a time, emitting or absorbing a photon of corresponding momentum. So emission spectrograms are really composite images made from the emission of a large enough number of atoms to display the full emission spectrum of an element.
### QGD’s Interpretation of the Redshift and Blueshift Effects
Now that we have described and explained the emission spectrum of atoms, we can deduce the cause of the redshifts and blueshifts in the emission lines of the emission spectrum of an atom. We saw earlier that the emission of a photon by an electron $a$ corresponds to a change in the electron's momentum such that $\Delta \left\| {{{\vec{P}}}_{a}} \right\|=x{{m}_{a}}$ where $x\in {{N}^{+}}$. So a redshift of the emission spectrum of an element implies that photons emitted by its electrons ${{{a}'}_{i}}$ are less massive than photons emitted by the electrons ${{a}_{i}}$ of a reference atom of the same element (most often, the reference atom is on Earth). This means that $x{{m}_{{{{a}'}_{i}}}}<x{{m}_{{{a}_{i}}}}$, so that ${{m}_{{{{a}'}_{i}}}}<{{m}_{{{a}_{i}}}}$. That is, the mass of an electron ${{{a}'}_{i}}$ belonging to an atom of an element from a distant source is smaller than the mass of the corresponding electron ${{a}_{i}}$ belonging to the atom of the same element on Earth. In the same way, a blueshift of the emission lines of the emission spectrum of an atom implies that ${{m}_{{{{a}'}_{i}}}}>{{m}_{{{a}_{i}}}}$.
So, according to QGD, the redshift and blueshift effects imply that the electrons of the light-emitting source are respectively less and more massive than the local reference electron $a$. Therefore, quantum-geometry dynamics does not attribute the redshift and blueshift effects to a Doppler-like effect (which in the absence of a medium doesn't make sense anyway) and, as a consequence, these effects are not speed dependent. Hence redshifts and blueshifts provide no indication of the speed or distance of their source.
From the mechanisms of particle formation introduced earlier, we understand that though all electrons share the same basic structure, they can have different masses. As matter aggregates through gravitational interactions, electrons absorb neutrinos, photons or preons(+) and gradually become more massive. It follows that redshifted photons must be emitted by sources at a stage of their evolution that precedes the stage of evolution of our reference source. Similarly, blueshifted photons, being more massive, were emitted by sources at a stage of their evolution that succeeds that of our reference source. However, it can't be assumed that sources of similarly redshifted photons are at similar distances from us unless they are part of a system within which they have simultaneously formed. The sources of similarly redshifted photons may be at greatly varying distances from us. Also, a source of blueshifted photons can be at the same distance as a source of redshifted photons. Therefore, there are important discrepancies between a map using QGD's interpretation of the redshift and blueshift effects and one that is based on the classical wave interpretation of the same effects.
So though they provide no information about the distance of their source (much less about their speed), redshifted or blueshifted photons inform us of the stage of evolution of their sources at the time they were emitted. Also, since sources of similarly redshifted (or similarly blueshifted) photons have similar mass, structure and luminosity, it is possible to establish the distance of one source of redshifted photons relative to a reference source of similarly redshifted photons by comparing the intensity of the light we receive from them.
### Gravitational Telescopy
As we have seen, although we can indirectly estimate the distance of one source of photons relative to another, there is no direct correlation between the distance, direction or speed of a stellar object and how much the photons it emits are redshifted or blueshifted. However, according to QGD, it is theoretically possible to map the universe with great accuracy by measuring the magnitude and direction of gravitational interactions using gravitational telescopy. And, unlike telescopes and radio telescopes, gravitational telescopes are not limited to the observation of photon-emitting objects.
More importantly, if QGD's prediction that gravity is instantaneous is correct, then a map based on the observations of gravitational telescopes would represent all observed objects as they currently are and not as they were when they emitted the photons we receive from them.
### Cosmological Implications
The notion that the universe is expanding is based on the classic interpretation of the redshift and blueshift effects, but if QGD is correct and redshift and blueshift effects are consequences of the stage of evolution of their source, then the expanding universe model loses its most important argument. The data then becomes consistent with the locally condensing universe proposed by quantum-geometry dynamics. |
## Cryptology ePrint Archive: Report 2021/1670
The complexity of solving Weil restriction systems
Alessio Caminata and Michela Ceria and Elisa Gorla
Abstract: The solving degree of a system of multivariate polynomial equations provides an upper bound for the complexity of computing the solutions of the system via Groebner basis methods. In this paper, we consider polynomial systems that are obtained via Weil restriction of scalars. The latter is an arithmetic construction which, given a finite Galois field extension $k\hookrightarrow K$, associates to a system $\mathcal{F}$ defined over $K$ a system $\mathrm{Weil}(\mathcal{F})$ defined over $k$, in such a way that the solutions of $\mathcal{F}$ over $K$ and those of $\mathrm{Weil}(\mathcal{F})$ over $k$ are in natural bijection. In this paper, we find upper bounds for the complexity of solving a polynomial system $\mathrm{Weil}(\mathcal{F})$ obtained via Weil restriction in terms of algebraic invariants of the system $\mathcal{F}$.
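To make the construction concrete, here is a tiny illustrative example (my own, not taken from the paper): take $k=\mathbb{F}_3$ and $K=\mathbb{F}_9=\mathbb{F}_3(\alpha)$ with $\alpha^2=-1$, and consider the system $\mathcal{F}=\{x^2+1=0\}$ over $K$. Writing $x=a+b\alpha$ with $a,b$ ranging over $k$ and separating the coefficients of $1$ and $\alpha$ gives $\mathrm{Weil}(\mathcal{F})=\{a^2-b^2+1=0,\ 2ab=0\}$ over $k$, whose solutions $(a,b)=(0,\pm 1)$ correspond exactly to the solutions $x=\pm\alpha$ of $\mathcal{F}$ over $K$.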
Category / Keywords: public-key cryptography / Weil restriction, solving degree, degree of regularity, Groebner basis |
# any gifted people with strange abilities around here?
Discussion in 'Pseudoscience Archive' started by slivered roots, Feb 24, 2004.
Not open for further replies.
1. ### slivered rootsRegistered Senior Member
Messages:
36
i've been really interested in the strange abilities that some people possess lately. i know a few people that have mild abilities (most of them have energy-related voluntary acts).
there are some really interesting children (indigo children, i guess) that i've read about online that have special abilities. here's a link to one girl that has an interesting ability: click here
if you have any links to other online sites of indigo children with special abilities, i'd love to check them out.
do any of you have any special abilities? telepathy? energy-related? psycic? it's just that the number of people with special abilities seems to be increasing, and i'd LOVE to hear about anything you may want to share!
3. ### MystechAdult Supervision RequiredRegistered Senior Member
Messages:
3,938
Are you kidding? Everyone here is some sort of psyonic master, being from a higher plane of existance, transendant consiousness, or was fucking jesus christ in a past life.
5. ### slivered rootsRegistered Senior Member
Messages:
36
haha i know that many of the people here have higher level thinking tendencies and such, but are there any examples of your abilities? i've just been interested in this for a while now--indigo children.
but don't get me wrong: people like many of you WITH higher level thinking abilities and wonderful in itself...i know that i have a strong sense of existence and i have a love of nature, questioning/doubting, the universe, spiritual stuff, compassion for humanity, etc. that's why i just registered to this forum--to talk with people like me.
any examples? anything at all! it could even be a very strong sense of intuition (which i do have myself).
Last edited by a moderator: Feb 25, 2004
7. ### James RJust this guy, you know?Staff Member
Messages:
30,835
There's no reliable evidence that anybody, anywhere, actually has any kind of psychic, telepathic or other paranormal special abilities. In fact, there is a $1 million prize on offer for anybody who can demonstrate such abilities under scientifically controlled testing conditions. The prize has been available for a number of years now, and nobody has yet successfully claimed it. I'm sure that most people could use$1 million.
There are, of course, many many people who have outstanding special abilities which are non-paranormal. For example, Olympic atheletes all have special abilities in their sports, compared to the average person. However, somehow I don't think you were asking about those kinds of abilities.
8. ### RavenRegistered Senior Member
Messages:
302
I've had many prophetic dreams. I can't control them really so I don't think you'd consider it anything out of the ordinary. I also have the ability to experience my senses through thought. If I think about my dog I can feel his coat even though he crossed the river a year ago. If I think of food I can smell it. It has negative points as well as positive ones aas well.
9. ### (Q)Encephaloid MartiniValued Senior Member
Messages:
19,125
I can see through the BS - does that count?
10. ### one_ravenGod is a Chinese WhisperValued Senior Member
Messages:
13,406
Mary Magdalene is a member here?
Get it? "fucking Jesus"
*elbows Mystech in the ribs*
Nevermind
11. ### VotorxEgotistic...Valued Senior Member
Messages:
1,126
i know that i have a strong sense of existence and i have a love of nature, questioning/doubting, the universe, spiritual stuff, compassion for humanity, etc.
That has nothing to do with psychic abilities or the kind of special traits you are talking about. Its how u view things, the way your mind thinks. For example a person who likes pizza being compared to a person who does not like pizza. Doesn't exactly mean they have a psychic tendecies.
And Raven, that is called your imagination if anything.
I agree with James R. Even though I've never heard of this \$1 million dollar prize, I know that no one has been able to truthfully display their "powers" without some scam behind it. So far there are no provable psychic abilities in existence at this moment.
12. ### slivered rootsRegistered Senior Member
Messages:
36
votorx: if you'd actually read what i wrote, then you wouldn't have assumed that i said that 'having a strong sense of existence' was a 'psychic ability'. to clarify, i was describing the way my mind thinks...these factors make up the way i view the earth around me...and i came to this forum because i thought that other people would have similar values. thus, it's a higher level thinking ability because we are more inclined to question things. this is not anything special.
and i do not have an ability. i was only asking people here if they had one or know of any people with them on the internet. if i ever said that i had an ability, what would be the point of me freely sharing that with you anyway?
i shouldn't have to justify myself to you...you just picked this problem when you read something that you didn't care to read fully.
do you not believe that people with abilities exist?
but anyway, to the rest of the people: psychic abilities or special abilities (like the girl on the internet) count. do you know any people who have them?
raven, that's interesting. do you have OBE as well? i'm still questioning about OBE. could you elaborate on your prophetic dreams?
mary magdalene was great...i'm reading a book on her at the moment
13. ### VotorxEgotistic...Valued Senior Member
Messages:
1,126
Yes I did assume that u were talking about psychic abilities since that is the main purpose of this thread is it not? I even gave you the benefit of the doubt and re-looked over your post to check to see if I missed anything and the results show that I have not. Correct me if I’m wrong but this is what you said
Not once did you ever mention that you were referring to something other than paranormal activities. While you may have mentioned higher level thinking you never implied that you yourself have this ability and when you explained that you have a strong sense of existence, love of nature etc, I simply assumed that you were keeping this discussion within the perimeters of the thread rather than resulting in a topic change. While this may surprise you, there are people who believe that spirituality and other such practices are a result of some psychotic sensitivity. While I am not one of these people there is the possibility that you are.
Anyways, I do not believe that people with “paranormal” abilities exists. As shown in another thread no reliable source has arisen to support such psychic existence. I admit I know nothing of these Indigo children, but I will look into it when I get the chance.
i shouldn't have to justify myself to you...you just picked this problem when you read something that you didn't care to read fully.
While you may not believe that you need to justify yourself to me your gonna have to start sooner or later, so why not start now? Everyone makes mistakes, so don’t hold yourself higher than everyone else. Perfection is impossible and right now you aren’t a person to do the impossible are you? Oh yes and I don’t mind picking the problem
I’ve done that purposely many times in this branch of this forums, therefore you aren’t going to make me regret or feel shame for what I’ve said. The only question is, “how long are YOU going to take before you result to profanity and vulgar expressions?”
14. ### goofyfishAnalog By Birth, Digital By DesignValued Senior Member
Messages:
5,331
Indigo children: otherwise touted as alien-ated youth?
Peace.
15. ### VotorxEgotistic...Valued Senior Member
Messages:
1,126
Well, for my own reasons, I decided to take a look at these indigo children and made a search on metacrawler for these “special” children and my first observation was that all of the of the information on this crystal children come from unreliable paranormal fanatics. Such sites that I am referring too include Sprituallite.com, Globalpsychics.com, Psychicphenomenon.com, Metapara.com, indigochildren.com etc etc. Even so I decided to enter one of the threads to read on the information they claimed was real. As usual I read about people I’ve never heard of, in places that don’t exist being interviewed from non existent companies like “IOCP” or something similar to that. Such medical phenomenons would make any person famous and well known, yet these people have never been mentioned to me or anyone I know. Maybe this is because there’s proof that such ideas are invalid?
16. ### slivered rootsRegistered Senior Member
Messages:
36
okay then, that puts me at ease...it really does. if you're so used to this, i'll just get used to it as well--i see that you do pick problems quite frequently. maybe i can learn from you in some strange way.
when i said this:
what does it explain?
my retreating doesn't mean that you win--it doesn't make me win either. no one wins this because we got nowhere, which is why i believe that people rarely win...winning is not important, anyway. so if you intended to win, that says something about you.
you'll discover that i'm really nice...in debate, i don't act vulgarly...so too bad.
now that we've got that cleared away: did you check out that website i posted? it is quite interesting. whether or not the girl is an indigo child, she still has an ability. it must be real if doctors are using her to do her work...plus, she's extremely accurate. how could this be a scam unless it was planned? it could be a scam, but it didn't appear that way at first when i read the article.
i know that this isn't a very popular ability, as other great abilities aren't popular (nor i've never personally seen it). it takes a lot of evidence for me to accept something when it comes to this. (the reason that i turned some interest to this was because i've had some OBEs, with which i had no clue of its existence until i searched the internet for some information after experiencing this.) i wasn't expecting a large portion of people to post in here saying that they have ability A and ability B, etc. it's a rare thing.
i've never seen anyone with a telepathy ability, so i don't believe it. same goes with most psychic abilities. i have at least "some" logic left. i've never seen that girl in action either, but there's a lot of accurate support as a result of her using her powers medically. let's just say that i believe in HEALERS.
17. ### slivered rootsRegistered Senior Member
Messages:
36
oh and i don't count an OBE as an "ability". just to pre-clarify that bit.
18. ### MystechAdult Supervision RequiredRegistered Senior Member
Messages:
3,938
Well it makes this place look kind of empty, doesn't it?
19. ### river-windValued Senior Member
Messages:
2,671
I have had an "ability" to know that someone close to me is going to die a fews days before it happens. Shitty thing is, I can't control it.
I have a dream, a certain type of dream, with a certain emotional connection to it, and within a week, poof someone's dead. I've been dead on for about 17 years, with one exception.
I have at this point, come to trust it. While I don't understand it, and it doesn't tell who, or how or when, I know someone, somehow, in the next 10 days or so.
Best friend blowing his brains out? check. Freak car accident? check. Freak brain hemmorage in wisdom teeth surgery? check. Even the more obvious, Step-dad in the ICU for months with a brain bleed - that one had more fore-warning.
A month ago, I had a slightly different type of dream. I wrote it up, and emailed it to my dad with the following line at the bottom: "I don't know what to make of this dream. it's like one of my death dreams, but not. It kind of like a 'loss' dream. I don't know."
Six days later, my sister had her baby, 9 weeks early. hanah, at 3lbs, 11 oz, was tiny, and things were risky. She has, however, done quite well over the past weeks, and is at 4lbs 5oz.
So I can say, yes, I think that I am somehow able to pick up on subtle causal threads in the world, and my brain displays them to me in my dreams. Or there is something else about this universe that we have not yet explained in science. I don't know much about it, if the later is the case.
Can I move shit with my mind? I thought I did it once, but after taking the situation apart, and study my body position, etc, I discovered that my breath was travelling down my body and forming a vortex which was causing the effect I was seeing. After I covered my face with a scarf, I couldn't reproduce the effect.
I have a burning desire, though, something like I don't know in any other area of my life. I WANT there to be something more there. its kind of odd, considering all its brought me so far is useless pre-knowledge of dead freinds.
RW
PS: I do fully beleive in chi energy, though I don't see it in the classical sense, simply due to my scientific background. I can warm my extremities 30 degrees F within a few mintues just by thinking about it. This is recorded at the U of Penn in philly- fully controled data. I have seen people break 1" boards with a single finger, knock a group of people over from across the room, and knock someone out by hitting them lightly in three places on their body (in sequence).
there is most certainly something more int he realm of chi which western science is only starting to open up to - evidence: insurance companies paying for accupunture treatments.
However, I've seen more thin faiths, frauds and lunatic beleifs in the world than I have actual "cool stuff."
PPS: I think as we learn more about the underlying physics of the universe that we live in; radiation, matter, strings, etc, we will come to understand alot more about the world we live in and the "subtleties" that psycics and such talk about. How do the metephorical strings in string theory vibrate to bring about different forms of matter/energy? Might it be possible for the vibration of a string, before it is enough to evidence in the 3 dimentions that we can easily perceive with sight/sound/hearing/touch/taste, to have an effect on the world? might it be possible for us to detect that vibration, even if it's on a subconscious level?
Given that matter effects thought, and thought can effect matter, how might thought effect the stuff that creates matter, and vise versa?
Last edited: Feb 26, 2004
20. ### VotorxEgotistic...Valued Senior Member
Messages:
1,126
you don't seem to understand the power of proof (i don't mean mathematically). Unless you can come up with reliable evidence then only the gullible fools will ever believe you.
21. ### river-windValued Senior Member
Messages:
2,671
who, me? I have no Proof, I have a heaping pile of anecdotal evidence, which is enough to make me question what I have been taught - nothing more. IMO, anyone who claims proof of anything (including science) is a fool.
Biofeedback research data is available from many health care institutions around the world.
Find me a method to directly record my dreams for external viewing, and I'll provide you with the same evidence that I have. Until then, I'm just relating a story to whomever cares to listen.
Last edited: Feb 27, 2004
22. ### slivered rootsRegistered Senior Member
Messages:
36
very interesting river-wind! my friend has the same thing as far as the energy thing goes. it's pretty weird, but very real.
speaking of real:
yes, some things turn to proof when they happen to you personally. however, most of these (if not all) things are impossible to prove to others who have not experienced them. it is then TOTALLY impossible to explain something to a person that never experienced something like it. it is easier for me to believe an ability related to dreams/prophetic dreams because i know it exists through some of my own experiences. it doesn't take as much proof when it comes to dreaming (in this specific way) for me--i know it's real. i like to think of it as "reflexive" proof because it happened to you, but nobody would believe you. it's a form of self-proof, let's say.
it's like near death experiences, in a way. a person can see god or whatever, but nobody believes them because it hasn't happened to them personally. (this is, of course, only if you believe in near death experiences. these are real because they've happened to many people in the hospital)
proof is a strange thing like that.
for "stranger" abilities (aka things i haven't witnessed/experienced), it naturally takes more proof to understand what's going on.
23. ### cosmictravelerBe kind to yourself always.Valued Senior Member
Messages:
33,264
I can do this: |
# Context-free grammar for the language $$L=\{\,w \in \left\{ a,b,c \right\}^{*} \mid w= a^{n} b^{2n+1} c^{n} a^{n-1},\; n \ge 2 \,\}$$
I have been trying to find a context-free grammar for the language $$L=\{\,w \in \left\{ a,b,c \right\}^{*} \mid w= a^{n} b^{2n+1} c^{n} a^{n-1},\; n \ge 2 \,\}$$ for some time but I cannot get it done. Any ideas? |
• ### Quantum manipulation of biphoton spectral distributions in a 2D frequency space toward arbitrary shaping of a biphoton wave packet(1805.00148)
May 1, 2018 quant-ph
In this work, we experimentally manipulate the spectrum and phase of a biphoton wave packet in a two-dimensional frequency space. The spectrum is shaped by adjusting the temperature of the crystal, and the phase is controlled by tilting the dispersive glass plate. The manipulating effects are confirmed by measuring the two-photon spectral intensity (TSI) and the Hong-Ou-Mandel (HOM) interference patterns. Unlike the previous independent manipulation schemes, here we perform joint manipulation on the biphoton spectrum. The technique in this work paves the way for arbitrary shaping of a multi-photon wave packet in a quantum manner.
• ### Experimental demonstration of time-frequency duality of biphotons(1801.09044)
Jan. 27, 2018 quant-ph
Time-frequency duality, which enables control of optical waveforms by manipulating amplitudes and phases of electromagnetic fields, plays a pivotal role in a wide range of modern optics. The conventional one-dimensional (1D) time-frequency duality has been successfully applied to characterize the behavior of classical light, such as ultrafast optical pulses from a laser. However, the 1D treatment is not enough to characterize quantum mechanical correlations in the time-frequency behavior of multiple photons, such as the biphotons from parametric down conversion. The two-dimensional treatment is essentially required, but has not been fully demonstrated yet due to technical problems. Here, we study the two-dimensional (2D) time-frequency duality of biphotons, by measuring two-photon distributions in both frequency and time domains. It was found that generated biphotons satisfy the Fourier limited condition quantum mechanically, but not classically, by analyzing the time-bandwidth products in the 2D Fourier transform. Our study provides an essential and deeper understanding of light beyond classical wave optics, and opens up new possibilities for optical synthesis in a high-dimensional frequency space in a quantum manner.
• ### Extended Wiener-Khinchin theorem for quantum spectral analysis(1709.04837)
Sept. 14, 2017 quant-ph
The classical Wiener-Khinchin theorem (WKT), which can extract spectral information by classical interferometers through Fourier transform, is a fundamental theorem used in many disciplines. However, there is still need for a quantum version of WKT, which could connect correlated biphoton spectral information by quantum interferometers. Here, we extend the classical WKT to its quantum counterpart, i.e., extended WKT (e-WKT), which is based on two-photon quantum interferometry. According to the e-WKT, the difference-frequency distribution of the biphoton wavefunctions can be extracted by applying a Fourier transform on the time-domain Hong-Ou-Mandel interference (HOMI) patterns, while the sum-frequency distribution can be extracted by applying a Fourier transform on the time-domain NOON state interference (NOONI) patterns. We also experimentally verified the WKT and e-WKT in a Mach-Zehnder interference (MZI), a HOMI and a NOONI. This theorem can be directly applied to quantum spectroscopy, where the spectral correlation information of biphotons can be obtained from time-domain quantum interferences by Fourier transform. This may open a new pathway for the study of light-matter interaction at the single photon level.
• ### Monotonic quantum-to-classical transition enabled by positively-correlated biphotons(1706.09994)
June 30, 2017 quant-ph
Multiparticle interference is a fundamental phenomenon in the study of quantum mechanics. It was discovered in a recent experiment [Ra, Y.-S. et al, Proc. Natl Acad. Sci. USA \textbf{110}, 1227(2013)] that spectrally uncorrelated biphotons exhibited a nonmonotonic quantum-to-classical transition in a four-photon Hong-Ou-Mandel (HOM) interference. In this work, we consider the same scheme with spectrally correlated photons. By theoretical calculation and numerical simulation, we found the transition not only can be nonmonotonic with negatively correlated or uncorrelated biphotons, but also can be monotonic with positively correlated biphotons. The fundamental reason for this difference is that the HOM-type multi-photon interference is a differential-frequency interference. Our study may shed new light on understanding the role of frequency entanglement in multi-photon behavior.
• ### Free-space optical channel estimation for physical layer security(1607.07799)
July 10, 2016 cs.IT, math.IT
We present experimental data on message transmission in a free-space optical (FSO) link at an eye-safe wavelength, using a testbed consisting of one sender and two receiver terminals, where the latter two are a legitimate receiver and an eavesdropper. The testbed allows us to emulate a typical scenario of physical-layer (PHY) security such as satellite-to-ground laser communications. We estimate information-theoretic metrics including secrecy rate, secrecy outage probability, and expected code lengths for given secrecy criteria based on observed channel statistics. We then discuss operation principles of secure message transmission under realistic fading conditions, and provide a guideline on a multi-layer security architecture by combining PHY security and upper-layer (algorithmic) security.
• ### Detection-dependent six-photon NOON state interference(1607.00926)
July 4, 2016 quant-ph
NOON state interference (NOON-SI) is a powerful tool to improve the phase sensing precision, and can play an important role in quantum sensing and quantum imaging. However, most of the previous NOON-SI experiments only investigated the center part of the interference pattern, while the full range of the NOON-SI pattern has not yet been well explored. In this Letter, we experimentally and theoretically demonstrate up to six-photon NOON-SI and study the properties of the interference patterns over the full range. The multi-photons were generated at a wavelength of 1584 nm from a PPKTP crystal in a parametric down conversion process. It was found that the shape, the coherence time and the visibility of the interference patterns were strongly dependent on the detection schemes. This experiment can be used for applications which are based on the envelope of the NOON-SI pattern, such as quantum spectroscopy and quantum metrology.
• ### Generation and distribution of high-dimensional frequency-entangled qudits(1603.07887)
March 31, 2016 quant-ph
We demonstrate a novel scheme to generate frequency-entangled qudits with dimension number higher than 10 and to distribute them over optical fibers of 15 km in total length. This scheme combines the technique of spectral engineering of biphotons generated by spontaneous parametric down-conversion and the technique of spectrally resolved Hong-Ou-Mandel interference. We characterized the comb-like spectral correlation structures of the qudits by time of arrival measurement and correlated spectral intensity measurement. The generation and distribution of the distinct entangled frequency modes may be useful for quantum cryptography, quantum metrology, quantum remote synchronization, as well as fundamental test of stronger violation of local realism.
• ### Spectrally resolved Hong-Ou-Mandel interference between independent sources(1507.02424)
July 9, 2015 quant-ph
Hong-Ou-Mandel (HOM) interference between independent photon sources (HOMI-IPS) is the fundamental block for quantum information processing, such as quantum gate, Shor's algorithm, Boson sampling, etc. All the previous HOMI-IPS experiments were carried out in time-domain, however, the spectral information during the interference was lost, due to technical difficulties. Here, we investigate the HOMI-IPS in spectral domain using the recently developed fast fiber spectrometer, and demonstrate the spectral distribution during the HOM interference between two heralded single-photon sources, and two thermal sources. This experiment can not only deepen our understanding of HOMI-IPS in the spectral domain, but also be utilized to improve the visibility by post-processing spectral filtering.
• ### Highly efficient entanglement swapping and teleportation at telecom wavelength(1410.0087)
Oct. 1, 2014 quant-ph
Entanglement swapping at telecom wavelengths is at the heart of quantum networking in optical fiber infrastructures. Although entanglement swapping has been demonstrated experimentally so far using various types of entangled photon sources both in near-infrared and telecom wavelength regions, the rate of swapping operation has been too low to be applied to practical quantum protocols, due to limited efficiency of entangled photon sources and photon detectors. Here we demonstrate drastic improvement of the efficiency at telecom wavelength by using two ultra-bright entangled photon sources and four highly efficient superconducting nanowire single photon detectors. We have attained a four-fold coincidence count rate of 108 counts per second, which is three orders higher than the previous experiments at telecom wavelengths. A raw (net) visibility in a Hong-Ou-Mandel interference between the two independent entangled sources was 73.3 $\pm$ 1.0% (85.1 $\pm$ 0.8%). We performed the teleportation and entanglement swapping, and obtained a fidelity of 76.3% in the swapping test. Our results on the coincidence count rates are comparable with the ones ever recorded in teleportation/swapping and multi-photon entanglement generation experiments at around 800\,nm wavelengths. Our setup opens the way to practical implementation of device-independent quantum key distribution and its distance extension by the entanglement swapping as well as multi-photon entangled state generation in telecom band infrastructures with both space and fiber links.
• ### Efficient generation of twin photons at telecom wavelengths with 10 GHz repetition-rate tunable comb laser(1409.3025)
Sept. 10, 2014 quant-ph
Efficient generation and detection of indistinguishable twin photons are at the core of quantum information and communications technology (Q-ICT). These photons are conventionally generated by spontaneous parametric down conversion (SPDC), which is a probabilistic process, and hence occurs at a limited rate, which restricts wider applications of Q-ICT. To increase the rate, one had to excite SPDC by higher pump power, while it inevitably produced more unwanted multi-photon components, harmfully degrading quantum interference visibility. Here we solve this problem by using a recently developed 10 GHz repetition-rate-tunable comb laser, combined with a group-velocity-matched nonlinear crystal, and superconducting nanowire single photon detectors. They operate at telecom wavelengths more efficiently and with less noise than conventional schemes, which typically operate at visible and near-infrared wavelengths, generated by a 76 MHz Ti:Sapphire laser and detected by Si detectors. We could show high interference visibilities, which are free from the pump-power induced degradation. Our laser, nonlinear crystal, and detectors constitute a powerful tool box, which will pave a way to implementing quantum photonics circuits with a variety of good and low-cost telecom components, and will eventually realize scalable Q-ICT in optical infrastructures.
• ### Pulsed Sagnac polarization-entangled photon source with a PPKTP crystal at telecom wavelength(1311.3462)
May 6, 2014 quant-ph
We demonstrate pulsed polarization-entangled photons generated from a periodically poled $\mathrm{KTiOPO_4}$ (PPKTP) crystal in a Sagnac interferometer configuration at telecom wavelength. Since the group-velocity-matching (GVM) condition is satisfied, the intrinsic spectral purity of the photons is much higher than in the previous scheme at around 800 nm wavelength. The combination of a Sagnac interferometer and the GVM-PPKTP crystal makes our entangled source compact, stable, highly entangled, spectrally pure and ultra-bright. The photons were detected by two superconducting nanowire single photon detectors (SNSPDs) with detection efficiencies of 70% and 68% at dark counts of less than 1 kcps. We achieved fidelities of 0.981 $\pm$ 0.0002 for $\left| {\psi ^ -} \right\rangle$ and 0.980 $\pm$ 0.001 for $\left| {\psi ^ +} \right\rangle$ respectively. This GVM-PPKTP-Sagnac scheme is directly applicable to quantum communication experiments at telecom wavelength, especially in free space.
• ### Efficient detection of a highly bright photon source using superconducting nanowire single photon detectors(1309.1221)
May 6, 2014 quant-ph
We investigate the detection of an ultra-bright single-photon source using highly efficient superconducting nanowire single-photon detectors (SNSPDs) at telecom wavelengths. Both the single-photon source and the detectors are characterized in detail. At a pump power of 100 mW (400 mW), the measured coincidence counts can achieve 400 kcps (1.17 Mcps), which is the highest ever reported at telecom wavelengths to the best of our knowledge. The multi-pair contributions at different pump powers are analyzed in detail. We compare the experimental and theoretical second order coherence functions $g^{(2)}(0)$ and find that the conventional experimentally measured $g^{(2)}(0)$ values are smaller than the theoretically expected ones. We also consider the saturation property of SNSPD and find that SNSPD can be easier to saturate with a thermal state rather than with a coherent state. The experimental data and theoretical analysis should be useful for the future experiments to detect ultra-bright down-conversion sources with high-efficiency detectors.
• ### Nonclasscial interference between independent intrinsically pure single photons at telecom wavelength(1303.2778)
July 24, 2013 quant-ph
We demonstrate a Hong-Ou-Mandel interference between two independent, intrinsically pure, heralded single photons from spontaneous parametric down conversion (SPDC) at telecom wavelength. A visibility of $85.5\pm8.3%$ was achieved without using any bandpass filter. Thanks to the group-velocity-matched SPDC and superconducting nanowire single photon detectors (SNSPDs), the 4-fold coincidence counts are one order higher than that in the previous experiments. The combination of bright single photon sources and SNSPDs is a crucial step for future practical quantum info-communication systems at telecom wavelength.
• ### Entangled state generation with an intrinsically pure single-photon source and a weak coherent source(1303.2780)
July 24, 2013 quant-ph
We report on the experimental generation of an entangled state with a spectrally pure heralded single-photon state and a weak coherent state. By choosing group-velocity matching in the nonlinear crystal, our system for producing entangled photons was 60 times brighter than that in the earlier experiment [Phys. Rev. Lett. 90, 240401 (2003)], with no need of bandpass filters. This entanglement system is useful for quantum information protocols that require indistinguishable photons from independent sources.
• ### Widely tunable single photon source with high purity at telecom wavelength(1303.6015)
March 25, 2013 quant-ph, physics.optics
We theoretically and experimentally investigate the spectral tunability and purity of photon pairs generated from spontaneous parametric down conversion in periodically poled $\mathrm{KTiOPO_4}$ crystal with group-velocity matching condition. The numerical simulation predicts that the purity of joint spectral intensity ($P_{JSI}$) and the purity of joint spectral amplitude ($P_{JSA}$) can be kept higher than 0.98 and 0.81, respectively, when the wavelength is tuned from 1460 nm to 1675 nm, which covers the S-, C-, L-, and U-band in telecommunication wavelengths. We also directly measured the joint spectral intensity at 1565 nm, 1584 nm and 1565 nm, yielding $P_{JSI}$ of 0.989, 0.983 and 0.958, respectively. Such a photon source is useful for quantum information and communication systems.
• ### Observation of optical-fiber Kerr nonlinearity at the single-photon level(1211.3488)
Optical fibers have been enabling numerous distinguished applications involving the operation and generation of light, such as soliton transmission, light amplification, all-optical switching and supercontinuum generation. The active function of optical fibers in the quantum regime is expected to be applicable to ultralow-power all-optical signal processing and quantum information processing. Here we demonstrate the first experimental observation of optical nonlinearity at the single-photon level in an optical fiber. Taking advantage of large nonlinearity and managed dispersion of a photonic crystal fiber, we have successfully measured very small (10^(-7) ~ 10^(-8)) conditional phase shifts induced by weak coherent pulses that contain one or less than one photon per pulse on average. In spite of its tininess, the phase shift was measurable using much (~10^6 times) stronger coherent probe pulses than the pump pulses. We discuss the feasibility of quantum information processing using optical fibers, taking into account the observed Kerr nonlinearity accompanied by ultrafast response time and low induced loss.
• ### Four-Photon Quantum Interferometry at a Telecom Wavelength(1112.2019)
Dec. 9, 2011 quant-ph
We report the experimental demonstration of four-photon quantum interference using telecom-wavelength photons. Realization of multi-photon quantum interference is essential to linear optics quantum information processing and measurement-based quantum computing. We have developed a source that efficiently emits photon pairs in a pure spectrotemporal mode at a telecom wavelength region, and have demonstrated the quantum interference exhibiting the reduced fringe intervals that correspond to the reduced de Broglie wavelength of up to the four photon `NOON' state. Our result should open a path to practical quantum information processing using telecom-wavelength photons.
• ### Experimental Activation of Bound Entanglement(1111.6170)
Nov. 26, 2011 quant-ph
Entanglement is one of the essential resources in quantum information and communication technology (QICT). The entanglement thus far explored and applied to QICT has been pure and distillable entanglement. Yet there is another type of entanglement, called 'bound entanglement', which is not distillable by local operations and classical communication (LOCC). We demonstrate the experimental 'activation' of the bound entanglement held in the four-qubit Smolin state, unleashing its immanent entanglement in distillable form, with the help of auxiliary two-qubit entanglement and LOCC. We anticipate that it opens the way to a new class of QICT applications that utilize more general classes of entanglement than ever, including bound entanglement.
• ### High-visibility nonclassical interference between pure heralded single photons and weak coherent photons(1010.5638)
Oct. 30, 2010 quant-ph
We present an experiment of nonclassical interference between a pure heralded single-photon state and a weak coherent state. Our experiment is the first to demonstrate that spectrally pure single photons can have high interference visibility, 89.4 \pm 0.5%, with weak coherent photons. Our scheme lays the groundwork for future experiments requiring quantum interference between photons in nonclassical states and those in coherent states.
• ### Generation of polarization entanglement from spatially-correlated photons in spontaneous parametric down-conversion(0711.4177)
Nov. 27, 2007 quant-ph
We propose a novel scheme to generate polarization entanglement from spatially-correlated photon pairs. We experimentally realized a scheme by means of a spatial correlation effect in a spontaneous parametric down-conversion and a modified Michelson interferometer. The scheme we propose in this paper can be interpreted as a conversion process from spatial correlation to polarization entanglement.
• ### Photon polarization entanglement induced by biexciton: experimental evidence for violation of Bell's inequality(quant-ph/0607139)
July 21, 2006 quant-ph
We have investigated the polarization entanglement between photon pairs generated from a biexciton in a CuCl single crystal via resonant hyper parametric scattering. The pulses of a high repetition pump are seen to provide improved statistical accuracy and the ability to test Bell's inequality. Our results clearly violate the inequality and thus manifest the quantum entanglement and nonlocality of the photon pairs. We also analyzed the quantum state of our photon pairs using quantum state tomography.
• ### Quantum diffraction and interference of spatially correlated photon pairs and its Fourier-optical analysis(quant-ph/0603064)
May 25, 2006 quant-ph
We present one- and two-photon diffraction and interference experiments involving parametric down-converted photon pairs. By controlling the divergence of the pump beam in parametric down-conversion, the diffraction-interference pattern produced by an object changes from a quantum (perfectly correlated) case to a classical (uncorrelated) one. The observed diffraction and interference patterns are accurately reproduced by Fourier-optical analysis taking into account the quantum spatial correlation. We show that the relation between the spatial correlation and the object size plays a crucial role in the formation of both one- and two-photon diffraction-interference patterns.
• ### Quantum diffraction and interference of spatially correlated photon pairs generated by spontaneous parametric down-conversion(quant-ph/0210142)
Oct. 21, 2002 quant-ph
We demonstrate one- and two-photon diffraction and interference experiments utilizing parametric down-converted photon pairs (biphotons) and a transmission grating. With two-photon detection, the biphoton exhibits a diffraction-interference pattern equivalent to that of an effective single particle that is associated with half the wavelength of the constituent photons. With one-photon detection, however, no diffraction-interference pattern is observed. We show that these phenomena originate from the spatial quantum correlation between the down-converted photons.
• ### Measurement of the photonic de Broglie wavelength of biphotons generated by spontaneous parametric down-conversion(quant-ph/0109005)
Feb. 26, 2002 quant-ph
Using a basic Mach-Zehnder interferometer, we demonstrate experimentally the measurement of the photonic de Broglie wavelength of an entangled photon pair (a biphoton) generated by spontaneous parametric down-conversion. The observed interference manifests the concept of the photonic de Broglie wavelength. The result also provides a proof-of-principle of the quantum lithography that utilizes the reduced interferometric wavelength.
Volume 173 Issue 2 – March 2011
All automorphisms of the Calkin algebra are inner
Pages 619-661 by Ilijas Farah | From volume 173-2
Grothendieck rings of basic classical Lie superalgebras
Pages 663-703 by Alexander N. Sergeev, Alexander P. Veselov | From volume 173-2
Stable homology of automorphism groups of free groups
Pages 705-768 by Søren Galatius | From volume 173-2
Random generation of finite and profinite groups and group enumeration
Pages 769-814 by Andrei Jaikin-Zapirain, László Pyber | From volume 173-2
Distribution of periodic torus orbits and Duke’s theorem for cubic fields
Pages 815-885 by Manfred Einsiedler, Elon Lindenstrauss, Philippe Michel, Akshay Venkatesh | From volume 173-2
Asymptotics of characters of symmetric groups related to Stanley character formula
Pages 887-906 by Valentin Féray, Piotr Śniady | From volume 173-2
Hermitian integral geometry
Pages 907-945 by Andreas Bernig, Joseph H. G. Fu | From volume 173-2
Cycle integrals of the $j$-function and mock modular forms
Pages 947-981 by William Duke, Özlem Imamoglu, Á. Tóth | From volume 173-2
Global regularity for some classes of large solutions to the Navier-Stokes equations
Pages 983-1012 by Jean-Yves Chemin, Isabelle Gallagher, Marius Paicu | From volume 173-2
The weak type $(1,1)$ bounds for the maximal function associated to cubes grow to infinity with the dimension
Pages 1013-1023 by J. M. Aldaz | From volume 173-2
Livšic Theorem for matrix cocycles
Pages 1025-1042 by Boris Kalinin | From volume 173-2
Representations of Yang-Mills algebras
Pages 1043-1080 by Estanislao Herscovich, Andrea Solotar | From volume 173-2
Weyl group multiple Dirichlet series, Eisenstein series and crystal bases
Pages 1081-1120 by Ben Brubaker, Daniel Bump, Solomon Friedberg | From volume 173-2
A geometric approach to Conn’s linearization theorem
Pages 1121-1139 by Marius Crainic, Rui Loja Fernandes | From volume 173-2
On the distributional Jacobian of maps from $\mathbb{S}^N$ into $\mathbb{S}^N$ in fractional Sobolev and Hölder spaces
Pages 1141-1183 by Haïm Brezis, Hoai-Minh Nguyen | From volume 173-2
# Duration for attacking Two-Key Triple-DES Encryption using all RAM ever built?
I am considering attacks on Two-Key Triple-DES Encryption assuming $2^{32}$ known plaintext/ciphertext pairs (that's a mere 32 GiB of ciphertext) by the method devised by Paul C. van Oorschot and Michael J. Wiener: A Known-Plaintext Attack on Two-Key Triple Encryption (in proceedings of Eurocrypt 1990), or another published method not requiring significantly more DES computations.
As a synthetic information for decision makers, I am looking for an independent estimate of how much time this is expected to require, assuming all the RAM ever built by mankind to that day (of April 2012) was put to full use.
Note: I'm purposely not asking when the attack could become feasible using all the RAM ever built by mankind, because estimates on the amount of RAM mankind will build, and when, are less falsifiable.
Update: I am not considering cost; neither of RAM, power, logic including DES engines (as long as the number of DES operations remains within $2^{90}$). I am willing to assume that the amount of RAM used, and its effective speed, are the only factors to account for in determining the expected duration of the attack. This is similar to the hypothesis made by the authors of the linked paper, that their attack is limited by the amount (or cost) of RAM used, with all other factors of secondary importance.
Update: sadly, nobody dared answer the question and the bounty period is over. Thus here is a first order answer to criticize.
-
Am I correct in assuming that part of the problem is deriving an estimate the total RAM built to date? – B-Con May 3 '12 at 4:41
Is this question really correctly put? If you refer to the van Oorschot-Wiener paper originally published in 1990, it seems the memory requirements are fixed by the amount of known plain text, and with $2^{32}$ known plain texts the largest of the two tables requires $2^{40}$ bits, i.e. just one terabit. Or am I missing something? – Henrick Hellström May 3 '12 at 5:31
Yes, part of the problem is estimating the total amount of RAM built to date, and its usable access rate in the context of the attack. $2^{32}$ blocks is here to match the conditions assumed by the authors. 32 Giga Bytes puts that number in perspective. Yes in the simplest setup the attack needs only $2^{40}$ bits in the biggest table, but runtime is horrific; thus I assume we can put to full use all the RAM built to date, and ask for the runtime. Which is the best way that I could devise to make the result graspable as a single number. – fgrieu May 3 '12 at 5:57
The reason the large table was set to $2^{40}$ bits, was that it is supposed to contain a hash table of the known plain text. AFAICS you have no use for more RAM, unless you also have more known plain text to fill it with. – Henrick Hellström May 3 '12 at 6:07
Yes, but I reckon you wanted a semi-realistic lower bound of the time required for the attack, and clearly that depends not only on the amount of RAM ever built, but also on engineering questions and a lot of other costs; not only the cost of other hardware, but how many CPU cores and DES engines you might physically wire to a single instance of memory, what it would cost to manufacture such a circuit, and the cost of energy for both running the thing and cooling it? Simply put: Is it really still the amount of memory available that puts a limit to the attack, rather than other factors? – Henrick Hellström May 4 '12 at 8:07
The original article rightfully neglects the cost of DES computations (there are less than $2^{90}$) and everything except memory accesses to its Table 1 and Table 2. I go one step further: considering that Table 1 is initialized only once and then read-only, it could be in ROM, and I neglect all except the accesses to Table 2. The attack requires an expected $2^{88}$ random writes and as many random reads to Table 2, organized as $2^{25}$ 124-bit words.
The cheap PC that I bought today came with 4 GByte of DDR3 DRAM, as a single 64-bit-wide DIMM with 16 DRAM chips each $2^{28}\cdot 8$-bit, costing about \$1 per chip in volume. Bigger chips exist: my brand new 32-GByte server uses 64 chips each $2^{29}\cdot 8$-bit, and these are becoming increasingly common (though price per bit is still higher than for the mainstream $2^{28}\cdot 8$-bit chips). Two mainstream $2^{28}\cdot 8$-bit chips hold one instance of Table 2, and one 124-bit word can be accessed as 8 consecutive 8-bit locations in each of the two chips simultaneously (consecutive accesses are about 15 times faster than random accesses). One $2^{29}\cdot 8$-bit chip would be slightly slower. Assuming DDR3-1066 with 7-cycle latency (resp. DDR3-1333 with 9-cycle latency), 8 consecutive accesses require at least $(7\cdot 2+7)/1066\approx 0.020$ µs (resp. $(9\cdot 2+7)/1333\approx 0.019$ µs). This is a decimal order of magnitude less than considered in the original article.

For each instance of Table 2, that is 0.5 GByte, we can perform at most $365\cdot 86400\cdot 10^6/0.019/2\approx 2^{49.6}$ read+write accesses per year to Table 2 using mainstream DRAM. Thus with $n$ GByte of mainstream DRAM, and unless I err somewhere, the expected duration is $2^{37.4}/n$ years.

Based on press releases of a serious reference, there are less than $2^{31}$ PCs around, and assuming that my cheap PC is representative, that's $2^{33}$ GByte. Another way to look at that is that each 0.25-GByte chip costs about \$1, and DRAM revenues in 2011 were less than \$$2^{35}$, thus enough for $2^{33}$ GByte (but notice that most of the revenue is from chips that are not optimized for cost per bit). I'll guesstimate all the RAM ever built is equivalent to at most $2^{35}$ GByte of mainstream DRAM for the purpose of the attack.
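To make the estimate easier to check, here is a small Python sketch that reproduces the arithmetic above (it only reuses the DDR3-1333 timing and the $2^{88}$ read+write count quoted in this answer; the function name is just for illustration):

```python
import math

# DDR3-1333 with 9-cycle latency: one 8-beat burst (one Table 2 word access)
burst_us = (9 * 2 + 7) / 1333                  # ~0.019 microseconds
seconds_per_year = 365 * 86400

# read+write pairs per year to one 0.5-GByte instance of Table 2
pairs_per_year = seconds_per_year * 1e6 / burst_us / 2
print(math.log2(pairs_per_year))               # ~49.6

# the attack needs an expected 2**88 read+write pairs to Table 2
years_one_instance = 2**88 / pairs_per_year
print(math.log2(years_one_instance))           # ~38.4

# n GByte of DRAM holds 2*n instances of Table 2, hence ~2**37.4 / n years
def expected_years(n_gbyte):
    return years_one_instance / (2 * n_gbyte)

print(expected_years(2**35))                   # about 5 years for 2^35 GByte
```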
Update: as noted by the authors of the original article, "the execution time is not particularly sensitive to the number of plaintext/ciphertext pairs $n$ (provided that $n$ is not too small) because as $n$ increases, the number of operations required for the attack ($2^{120-\log_2 n}$) decreases, but memory requirements increase, and the number of machines that can be built with a fixed amount of money decreases". By the same argument, our required amount of RAM is not much changed if we get more known plaintext/ciphertext pairs.
# Asynchronous Acquisition¶
This section encompasses “fly scans,” “monitoring,” and in general handling data acquisition that is occurring at different rates.
Note
If you are here because you just want to “move two motors at once” or something in that category, you’re in luck: you don’t need anything as complex as what we present in this section. Read about multidimensional plans in the section on Plans.
In short, “flying” is for acquisition at high rates and “monitoring” is for acquisition at an irregular or slow rate. Monitoring does not guarantee that all readings will be captured; i.e. monitoring is lossy. It is susceptible to network glitches. But flying, by contrast, is not lossy if correctly implemented.
Flying means: “Let the hardware take control, cache data externally, and then transfer all the data to the RunEngine at the end.” This is essential when the data acquisition rates are faster than the RunEngine or Python can go.
Note
As a point of reference, the RunEngine processes messages at a rate of about 35k/s (not including any time added by whatever the message does).
In [3]: %timeit RE(Msg('null') for j in range(1000))
10 loops, best of 3: 26.8 ms per loop
Monitoring means acquiring readings whenever a new reading is available, at a device’s natural update rate. For example, we might monitor a background condition (e.g., beam current) on the side while executing the primary logic of a plan. The documents are generated in real time — not all at the end, like flying — so if the update rate is too high, monitoring can slow down the execution of the plan. As mentioned above, monitoring is also lossy: if network traffic is high, some readings may be missed.
## Flying¶
In bluesky’s view, there are three steps to “flying” a device during a scan.
1. Kickoff: Begin accumulating data. A ‘kickoff’ command completes once acquisition has successfully started.
2. Complete: This step tells the device, “I am ready whenever you are ready.” If the device is just collecting until it is told to stop, it will report that it is ready immediately. If the device is executing some predetermined trajectory, it will finish before reporting ready.
3. Collect: Finally, the data accumulated by the device is transferred to the RunEngine and processed like any other data.
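As an illustration of these three steps, here is a minimal, self-contained Python sketch (this is not the actual ophyd/bluesky flyer interface; the class, the cached readings, and the method bodies are illustrative assumptions):

```python
class DummyFlyer:
    """Toy object following the kickoff/complete/collect pattern described above."""

    def __init__(self):
        self._cache = []

    def kickoff(self):
        # Begin accumulating data externally; return once acquisition has started.
        self._cache.clear()

    def complete(self):
        # "I am ready whenever you are ready." A free-running device could return
        # immediately; here we pretend a fixed trajectory has just finished.
        self._cache.extend({"time": i, "reading": 0.1 * i} for i in range(5))

    def collect(self):
        # Transfer the externally cached data, all at once, for processing.
        yield from self._cache


flyer = DummyFlyer()
flyer.kickoff()
flyer.complete()
print(list(flyer.collect()))
```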
To “fly” one or more “flyable” devices during a plan, bluesky provides a preprocessor. It is available as a wrapper, fly_during_wrapper()
from ophyd.sim import det, flyer1, flyer2 # simulated hardware
from bluesky.plans import count
from bluesky.preprocessors import fly_during_wrapper
RE(fly_during_wrapper(count([det], num=5), [flyer1, flyer2]))
and as a decorator, fly_during_decorator().
from ophyd.sim import det, flyer1, flyer2 # simulated hardware
from bluesky.plans import count
from bluesky.preprocessors import fly_during_decorator
# Define a new plan for future use.
fly_and_count = fly_during_decorator([flyer1, flyer2])(count)
RE(fly_and_count([det]))
Alternatively, if you are using Supplemental Data, simply append to or extend its list of flyers to kick off during every run:
from ophyd.sim import flyer1, flyer2
# Assume sd is an instance of the SupplementalData set up as
# described in the documentation linked above.
sd.flyers.extend([flyer1, flyer2])
They will be included with all plans until removed.
## Monitoring¶
To monitor some device during a plan, bluesky provides a preprocessor. It is available as a wrapper, monitor_during_wrapper()
from ophyd.sim import det, det1
from bluesky.plans import count
from bluesky.preprocessors import monitor_during_wrapper
# Record any updates from det1 while 'counting' det 5 times.
RE(monitor_during_wrapper(count([det], num=5), [det1]))
and as a decorator, monitor_during_decorator().
from ophyd.sim import det, det1
from bluesky.plans import count
from bluesky.preprocessors import monitor_during_decorator
# Define a new plan for future use.
monitor_and_count = monitor_during_decorator([det1])(count)
RE(monitor_and_count([det]))
Alternatively, if you are using Supplemental Data, simply append to or extend its list of signals to monitor:
from ophyd.sim import det1
# Assume sd is an instance of the SupplementalData set up as
# described in the documentation linked above.
sd.monitors.append(det1)
They will be included with all plans until removed.
# Global existence of finite energy weak solutions to the Quantum Navier-Stokes equations with non-trivial far-field behavior
### Abstract
We prove global existence of finite energy weak solutions to the quantum Navier-Stokes equations in the whole space with non trivial far-field condition in dimensions d = 2,3. The vacuum regions are included in the weak formulation of the equations. Our method consists in an invading domains approach. More precisely, by using a suitable truncation argument we construct a sequence of approximate solutions. The energy and the BD entropy bounds allow for the passage to the limit in the truncated formulation leading to a finite energy weak solution. Moreover, the result is also valid in the case of compressible Navier-Stokes equations with degenerate viscosity.
Type
Publication
preprint on the arXiv
# Prove $3^n = \sum_{0 \leq i \leq j \leq n}$ $n \choose i$ $i \choose j$
How to prove $3^n = \sum_{0 \leq j \leq i \leq n}$ $n \choose i$ $i \choose j$ using $3^n = \sum_{0 \leq i \leq n} 2^i$ $n \choose i$
-
$2 = 1+1$. :-) Also, shouldn't it be $j \leq i$? – WimC Nov 19 '12 at 16:51
@WimC yes typo, thanks – xiamx Nov 19 '12 at 16:52
$$\sum_{0 \leq j \leq i \leq n} {n \choose i} {i \choose j}=\sum_{0 \le i \le n} {n \choose i} \sum_{0 \le j \le i}{i \choose j}=\sum_{0 \le i \le n} {n \choose i} 2^i=3^n.$$
Count cardinality of $S = \{(A,B):B \subseteq A \subseteq \left\{1,2,\dots,n\right\}\}$ in two different ways:
Way 1. Each element of $\{1,2,\dots,n\}$ can either be in $A$ and $B$, only in $A$, or in none of $A$ and $B$, so $|S|=3^n$.
Way 2. If $|A|=i$ and $|B|=j$, then there are $n \choose i$ options for $A$ and $i \choose j$ options for $B$, therefore $|S|=\sum_{j \leq i} {n \choose i} {i \choose j}$.
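A quick numerical sanity check of the identity (the choice of $n$ is arbitrary; this is only an illustration, not part of the proof):

```python
from math import comb

n = 7
lhs = 3**n
rhs = sum(comb(n, i) * comb(i, j) for i in range(n + 1) for j in range(i + 1))
print(lhs, rhs)   # 2187 2187
```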
### In search of violations
This week high-energy physics produced some interesting results for our fundamental knowledge of the universe. On 14th November, LHCb published a preliminary analysis of a possible CP violation in charm decays (see also the CERN Bulletin and Mat Charles' presentation).
Studies of CP violation are very important because they probe the differences between matter and anti-matter. If a physical law is CP-invariant, the behavior of matter and anti-matter must be the same. But our universe is made of matter, and we don't know why this is so. The answer could lie in CP-violation studies, like the preliminary data analyzed by the LHCb team. The main goal of the experiment is the study of the properties of the b quark, but it can also measure the properties of the c quark. Studying the preliminary data on c decays, the team found a hint of a CP violation in an unexpected channel. According to Tommaso Dorigo and Marco Delmastro (English translation by Google), if the result is confirmed by further analysis, this could be the first sign of physics beyond the Standard Model.
The other possible violation concerns the light-speed barrier: indeed, the OPERA experiment confirms its previous data. Yesterday, in the updated version of their famous preprint, OPERA's researchers described a new series of measurements performed with CNGS using a short-bunch, wide-spacing beam.
The new measurements confirm the previous observations: some superluminal neutrinos arrive in OPERA's detector. Their advance is $62.1 \pm 3.7$ ns for the bunched-beam test and $57.8 \pm 7.8$ ns for the main analysis. We can see the distribution of $\delta t$ in the second analysis in the following plot:
About this result, Giovanni Fiorentini from the INFN in Ferrara said to Le Scienze, the Italian counterpart of Scientific American:
But the glass is half empty because the neutrino bunches are very short when they leave and they should also be short when they arrive: instead, it looks as if some of them fly a little longer and others a little less, as if there were some scattering; this probably reflects the fact that the temporal resolution of OPERA's detector is not at the nanosecond level, but a bit worse than expected, although not by enough to affect the result: we can say that the test went well at 70 percent but not at 100 percent
But according to Philip Plait:
However, they used the same timing apparatus, and a lot of people - me included - think this is where the problem lies. They need to figure out a way of making that more transparent and perhaps using a different timing method.
So we must wait for the MINOS team: they are preparing an experiment to perform a new measurement of the neutrinos' time of flight.
Nature's blog
Tommaso Dorigo
Sascha Vongehr
The neutrino's saga on Doc Madhattan:
News from the OPERA
Probably not
Waiting supeluminal neutrinos: from Maxwell to Einstein
Waiting the superluminal neutrinos (if they exist!)
• September 10th 2009, 03:25 PM
marie7
f(x) = 4x^3 - 8squarerootx for 0 equal/less x equal/less 10
I've got thus far
y' = 12x^2 - (4/squarerootx)
when y' = 0, x = 0.64439
y'' = 24x + 2x^-1.5
when x = 0.64439, y'' = 19.33, which is a local minimum
y-coordinate of x = 0.64439 is -5.35
Left boundary x = 0, y = 0
Right boundary x = 10, y = 3,974.70
and I know how to draw the graph
okay so what is the global minimum/maximum? What does that mean?
• September 10th 2009, 03:45 PM
pickslides
$f(x) = 4x^3 - \sqrt[8]{x} ~, ~0 \leq x \leq 10$
$f(x) = 4x^3 - 8\times \sqrt{x} ~ , ~0 \leq x \leq 10$
Dedekind eta-function
Jump to: navigation, search
2010 Mathematics Subject Classification: Primary: 11F20 [MSN][ZBL]
The function defined by
$$\eta(z)=e^{\pi iz/12}\prod_{n=1}^\infty(1-e^{2\pi inz})$$
for $z\in\mathbf C$, $\operatorname{Im}z>0$. As the infinite product converges absolutely, uniformly for $z$ in compact sets (cf. Uniform convergence), the function $\eta$ is holomorphic (cf. Analytic function). Moreover, it satisfies $\eta(z+1)=e^{\pi i/12}\eta(z)$ and $\eta(-1/z)=\sqrt{-iz}\eta(z)$. So, $\eta^{24}$ is a modular form of weight $12$ (cf. also Modular group).
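A quick numerical check of the transformation $\eta(-1/z)=\sqrt{-iz}\,\eta(z)$ using a truncated version of the defining product (the truncation order and the test point below are arbitrary choices; convergence is very fast when $\operatorname{Im}z$ is not small):

```python
import cmath

def eta(z, nterms=200):
    # truncated product: exp(pi*i*z/12) * prod_{n=1..N} (1 - exp(2*pi*i*n*z))
    q = cmath.exp(2j * cmath.pi * z)
    prod = 1.0
    for n in range(1, nterms + 1):
        prod *= 1 - q**n
    return cmath.exp(1j * cmath.pi * z / 12) * prod

z = 0.3 + 0.8j
print(eta(-1 / z))
print(cmath.sqrt(-1j * z) * eta(z))   # the two printed values should agree
```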
R. Dedekind [a1] comments on computations of B. Riemann in connection with theta-functions. He shows that it is basic to understand the transformation behaviour of the logarithm of the function now carrying his name. This study leads him to quantities now called Dedekind sums. See [a2], Chapt. IV, for a further discussion.
References
[a1] R. Dedekind, "Erläuterungen zu den Fragmenten XXVIII", in: H. Weber (ed.), B. Riemann: Gesammelte mathematische Werke und wissenschaftlicher Nachlass, Dover, reprint (1953) Zbl 0053.19405
[a2] H. Rademacher, E. Grosswald, "Dedekind sums", Math. Assoc. America (1972) Zbl 0251.10020
How to Cite This Entry:
Dedekind eta-function. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Dedekind_eta-function&oldid=40959
This article was adapted from an original article by R.W. Bruggeman (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Proceedings of the YSU, Physics & Mathematics, 2018, Volume 52, Issue 1, Pages 8–11 (Mi uzeru451)
Mathematics
On the minimal coset coverings of the set of singular and of the set of nonsingular matrices
A. V. Minasyan
Chair of Discrete Mathematics and Theoretical Informatics YSU, Armenia
Abstract: We determine the minimum number of cosets of linear subspaces needed to cover each of the following two sets of $n\times n$ matrices $A$ over $F_q$: the set with $\det(A)=0$ and the set with $\det(A)\neq 0$. It is proved that for singular matrices this number is equal to $1+q+q^2+\ldots+q^{n-1}$, and for nonsingular matrices it is equal to $\dfrac{(q^n-1)(q^n-q)(q^n-q^2)\cdots(q^n-q^{n-1})}{q^{\binom{n}{2}}}$.
Keywords: linear algebra, covering with cosets, matrices.
Full text: PDF file (132 kB)
References: PDF file HTML file
Document Type: Article
MSC: Primary 97H60; Secondary 14N20, 51E21
Language: English
Citation: A. V. Minasyan, “On the minimal coset coverings of the set of singular and of the set of nonsingular matrices”, Proceedings of the YSU, Physics & Mathematics, 52:1 (2018), 8–11
Citation in format AMSBIB
\Bibitem{Min18}
\by A.~V.~Minasyan
\paper On the minimal coset coverings of the set of singular and of the set of nonsingular matrices
\jour Proceedings of the YSU, Physics {\&} Mathematics
\yr 2018
\vol 52
\issue 1
\pages 8--11
\mathnet{http://mi.mathnet.ru/uzeru451}
# 2. Discretization and Algorithm¶
This chapter lays out the numerical schemes that are employed in the core MITgcm algorithm. Whenever possible links are made to actual program code in the MITgcm implementation. The chapter begins with a discussion of the temporal discretization used in MITgcm. This discussion is followed by sections that describe the spatial discretization. The schemes employed for momentum terms are described first, afterwards the schemes that apply to passive and dynamically active tracers are described.
## 2.1. Notation¶
Because of the particularity of the vertical direction in stratified fluid context, in this chapter, the vector notations are mostly used for the horizontal component: the horizontal part of a vector is simply written $$\vec{\bf v}$$ (instead of $${\bf v_h}$$ or $$\vec{\mathbf{v}}_{h}$$ in chapter 1) and a 3D vector is simply written $$\vec{v}$$ (instead of $$\vec{\mathbf{v}}$$ in chapter 1).
The notations we use to describe the discrete formulation of the model are summarized as follows.
General notation:
$$\Delta x, \Delta y, \Delta r$$ grid spacing in X, Y, R directions
$$A_c,A_w,A_s,A_{\zeta}$$ : horizontal area of a grid cell surrounding $$\theta,u,v,\zeta$$ point
$${\cal V}_u , {\cal V}_v , {\cal V}_w , {\cal V}_\theta$$ : Volume of the grid box surrounding $$u,v,w,\theta$$ point
$$i,j,k$$ : current index relative to X, Y, R directions
Basic operators:
$$\delta_i$$ : $$\delta_i \Phi = \Phi_{i+1/2} - \Phi_{i-1/2}$$
$$~^{-i}$$ : $$\overline{\Phi}^i = ( \Phi_{i+1/2} + \Phi_{i-1/2} ) / 2$$
$$\delta_x$$ : $$\delta_x \Phi = \frac{1}{\Delta x} \delta_i \Phi$$
$$\overline{\nabla}$$ = horizontal gradient operator : $$\overline{\nabla} \Phi = \{ \delta_x \Phi , \delta_y \Phi \}$$
$$\overline{\nabla} \cdot$$ = horizontal divergence operator : $$\overline{\nabla}\cdot \vec{\mathrm{f}} = \frac{1}{\cal A} \{ \delta_i \Delta y \, \mathrm{f}_x + \delta_j \Delta x \, \mathrm{f}_y \}$$
$$\overline{\nabla}^2$$ = horizontal Laplacian operator : $$\overline{\nabla}^2 \Phi = \overline{\nabla}\cdot \overline{\nabla}\Phi$$
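As a concrete illustration of the basic operators above, here is a small numpy sketch in one dimension (it assumes $$\Phi$$ is sampled at the interface points $$i\pm 1/2$$, so that $$\delta_i \Phi$$ and $$\overline{\Phi}^i$$ live at the cell centers; the grid and field are arbitrary):

```python
import numpy as np

dx = 0.1
phi = np.sin(2 * np.pi * np.arange(65) * dx)   # Phi at interface points i +/- 1/2

delta_i = phi[1:] - phi[:-1]                   # delta_i(Phi), at cell centers
overbar = 0.5 * (phi[1:] + phi[:-1])           # averaging operator, at cell centers
delta_x = delta_i / dx                         # delta_x = (1/Delta x) delta_i

print(delta_x[:3])
print(overbar[:3])
```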
## 2.2. Time-stepping¶
The equations of motion integrated by the model involve four prognostic equations for flow, $$u$$ and $$v$$, temperature, $$\theta$$, and salt/moisture, $$S$$, and three diagnostic equations for vertical flow, $$w$$, density/buoyancy, $$\rho$$/$$b$$, and pressure/geo-potential, $$\phi_{hyd}$$. In addition, the surface pressure or height may be described by either a prognostic or diagnostic equation and if non-hydrostatic terms are included then a diagnostic equation for non-hydrostatic pressure is also solved. The combination of prognostic and diagnostic equations requires a model algorithm that can march forward prognostic variables while satisfying constraints imposed by diagnostic equations.
Since the model comes in several flavors and formulations, it would be confusing to present the model algorithm exactly as written into code along with all the switches and optional terms. Instead, we present the algorithm for each of the basic formulations which are:
1. the semi-implicit pressure method for hydrostatic equations with a rigid-lid, variables co-located in time and with Adams-Bashforth time-stepping;
2. as 1 but with an implicit linear free-surface;
3. as 1 or 2 but with variables staggered in time;
4. as 1 or 2 but with non-hydrostatic terms included;
5. as 2 or 3 but with non-linear free-surface.
In all the above configurations it is also possible to substitute the Adams-Bashforth with an alternative time-stepping scheme for terms evaluated explicitly in time. Since the over-arching algorithm is independent of the particular time-stepping scheme chosen we will describe first the over-arching algorithm, known as the pressure method, with a rigid-lid model in Section 2.3. This algorithm is essentially unchanged, apart from some coefficients, when the rigid lid assumption is replaced with a linearized implicit free-surface, described in Section 2.4. These two flavors of the pressure-method encompass all formulations of the model as it exists today. The integration of explicit-in-time terms is outlined in Section 2.5 and put into the context of the overall algorithm in Section 2.7 and Section 2.8. Inclusion of non-hydrostatic terms requires applying the pressure method in three dimensions instead of two and this algorithm modification is described in Section 2.9. Finally, the free-surface equation may be treated more exactly, including non-linear terms, and this is described in Section 2.10.2.
## 2.3. Pressure method with rigid-lid¶
The horizontal momentum and continuity equations for the ocean ((1.98) and (1.100)), or for the atmosphere ((1.45) and (1.47)), can be summarized by:
\begin{split}\begin{aligned} \partial_t u + g \partial_x \eta & = & G_u \\ \partial_t v + g \partial_y \eta & = & G_v \\ \partial_x u + \partial_y v + \partial_z w & = & 0\end{aligned}\end{split}
where we are adopting the oceanic notation for brevity. All terms in the momentum equations, except for surface pressure gradient, are encapsulated in the $$G$$ vector. The continuity equation, when integrated over the fluid depth, $$H$$, and with the rigid-lid/no normal flow boundary conditions applied, becomes:
(2.1)$\partial_x H \widehat{u} + \partial_y H \widehat{v} = 0$
Here, $$H\widehat{u} = \int_H u dz$$ is the depth integral of $$u$$, similarly for $$H\widehat{v}$$. The rigid-lid approximation sets $$w=0$$ at the lid so that it does not move but allows a pressure to be exerted on the fluid by the lid. The horizontal momentum equations and vertically integrated continuity equation are discretized in time and space as follows:
(2.2)$u^{n+1} + \Delta t g \partial_x \eta^{n+1} = u^{n} + \Delta t G_u^{(n+1/2)}$
(2.3)$v^{n+1} + \Delta t g \partial_y \eta^{n+1} = v^{n} + \Delta t G_v^{(n+1/2)}$
(2.4)$\partial_x H \widehat{u^{n+1}} + \partial_y H \widehat{v^{n+1}} = 0$
As written here, terms on the LHS all involve time level $$n+1$$ and are referred to as implicit; the implicit backward time stepping scheme is being used. All other terms in the RHS are explicit in time. The thermodynamic quantities are integrated forward in time in parallel with the flow and will be discussed later. For the purposes of describing the pressure method it suffices to say that the hydrostatic pressure gradient is explicit and so can be included in the vector $$G$$.
Substituting the two momentum equations into the depth integrated continuity equation eliminates $$u^{n+1}$$ and $$v^{n+1}$$ yielding an elliptic equation for $$\eta^{n+1}$$. Equations (2.2), (2.3) and (2.4) can then be re-arranged as follows:
(2.5)$u^{*} = u^{n} + \Delta t G_u^{(n+1/2)}$
(2.6)$v^{*} = v^{n} + \Delta t G_v^{(n+1/2)}$
(2.7)$\partial_x \Delta t g H \partial_x \eta^{n+1} + \partial_y \Delta t g H \partial_y \eta^{n+1} = \partial_x H \widehat{u^{*}} + \partial_y H \widehat{v^{*}}$
(2.8)$u^{n+1} = u^{*} - \Delta t g \partial_x \eta^{n+1}$
(2.9)$v^{n+1} = v^{*} - \Delta t g \partial_y \eta^{n+1}$
Equations (2.5) to (2.9), solved sequentially, represent the pressure method algorithm used in the model. The essence of the pressure method lies in the fact that any explicit prediction for the flow would lead to a divergent flow field so a pressure field must be found that keeps the flow non-divergent over each step of the integration. The particular location in time of the pressure field is somewhat ambiguous; in Figure 2.1 we depict it as co-located with the future flow field (time level $$n+1$$) but it could equally have been drawn as staggered in time with the flow.
Figure 2.1 A schematic of the evolution in time of the pressure method algorithm. A prediction for the flow variables at time level $$n+1$$ is made based only on the explicit terms, $$G^{(n+^1/_2)}$$, and denoted $$u^*$$, $$v^*$$. Next, a pressure field is found such that $$u^{n+1}$$, $$v^{n+1}$$ will be non-divergent. Conceptually, the $$*$$ quantities exist at time level $$n+1$$ but they are intermediate and only temporary.
The correspondence to the code is as follows:
• the prognostic phase, equations (2.5) and (2.6), stepping forward $$u^n$$ and $$v^n$$ to $$u^{*}$$ and $$v^{*}$$ is coded in timestep.F
• the vertical integration, $$H \widehat{u^*}$$ and $$H \widehat{v^*}$$, divergence and inversion of the elliptic operator in equation (2.7) is coded in solve_for_pressure.F
• finally, the new flow field at time level $$n+1$$ given by equations (2.8) and (2.9) is calculated in correction_step.F
The calling tree for these routines is as follows:
Pressure method calling tree
$$\phantom{W}$$ DYNAMICS
$$\phantom{WW}$$ TIMESTEP $$\phantom{xxxxxxxxxxxxxxxxxxxxxx}$$ $$u^*,v^*$$ (2.5) , (2.6)
$$\phantom{W}$$ SOLVE_FOR_PRESSURE
$$\phantom{WW}$$ CALC_DIV_GHAT $$\phantom{xxxxxxxxxxxxxxxx}$$ $$H\widehat{u^*},H\widehat{v^*}$$ (2.7)
$$\phantom{WW}$$ CG2D $$\phantom{xxxxxxxxxxxxxxxxxxxxxxxxxx}$$ $$\eta^{n+1}$$ (2.7)
$$\phantom{W}$$ MOMENTUM_CORRECTION_STEP
$$\phantom{WW}$$ CALC_GRAD_PHI_SURF $$\phantom{xxxxxxxxxx}$$ $$\nabla \eta^{n+1}$$
$$\phantom{WW}$$ CORRECTION_STEP $$\phantom{xxxxxxxxxxxxw}$$ $$u^{n+1},v^{n+1}$$ (2.8) , (2.9)
In general, the horizontal momentum time-stepping can contain some terms that are treated implicitly in time, such as the vertical viscosity when using the backward time-stepping scheme (implicitViscosity =.TRUE.). The method used to solve those implicit terms is provided in Section 2.6, and modifies equations (2.2) and (2.3) to give:
\begin{split}\begin{aligned} u^{n+1} - \Delta t \partial_z A_v \partial_z u^{n+1} + \Delta t g \partial_x \eta^{n+1} & = & u^{n} + \Delta t G_u^{(n+1/2)} \\ v^{n+1} - \Delta t \partial_z A_v \partial_z v^{n+1} + \Delta t g \partial_y \eta^{n+1} & = & v^{n} + \Delta t G_v^{(n+1/2)}\end{aligned}\end{split}
## 2.4. Pressure method with implicit linear free-surface¶
The rigid-lid approximation filters out external gravity waves and, in doing so, modifies the dispersion relation of barotropic Rossby waves. The discrete form of the elliptic equation has some zero eigenvalues which makes it a potentially tricky or inefficient problem to solve.
The rigid-lid approximation can be easily replaced by a linearization of the free-surface equation which can be written:
(2.10)$\partial_t \eta + \partial_x H \widehat{u} + \partial_y H \widehat{v} = {\mathcal{P-E+R}}$
which differs from the depth integrated continuity equation with rigid-lid ((2.1)) by the time-dependent term and fresh-water source term.
Equation (2.4) in the rigid-lid pressure method is then replaced by the time discretization of (2.10) which is:
(2.11)$\eta^{n+1} + \Delta t \partial_x H \widehat{u^{n+1}} + \Delta t \partial_y H \widehat{v^{n+1}} = \eta^{n} + \Delta t ( {\mathcal{P-E}})$
where the use of flow at time level $$n+1$$ makes the method implicit and backward in time. This is the preferred scheme since it still filters the fast unresolved wave motions by damping them. A centered scheme, such as Crank-Nicholson (see Section 2.10.1), would alias the energy of the fast modes onto slower modes of motion.
As for the rigid-lid pressure method, equations (2.2), (2.3) and (2.11) can be re-arranged as follows:
(2.12)$u^{*} = u^{n} + \Delta t G_u^{(n+1/2)}$
(2.13)$v^{*} = v^{n} + \Delta t G_v^{(n+1/2)}$
(2.14)$\eta^* = \epsilon_{fs} ( \eta^{n} + \Delta t ({\mathcal{P-E}}) ) - \Delta t ( \partial_x H \widehat{u^{*}} + \partial_y H \widehat{v^{*}} )$
(2.15)$\partial_x g H \partial_x \eta^{n+1} + \partial_y g H \partial_y \eta^{n+1} - \frac{\epsilon_{fs} \eta^{n+1}}{\Delta t^2} = - \frac{\eta^*}{\Delta t^2}$
(2.16)$u^{n+1} = u^{*} - \Delta t g \partial_x \eta^{n+1}$
(2.17)$v^{n+1} = v^{*} - \Delta t g \partial_y \eta^{n+1}$
Equations (2.12) to (2.17), solved sequentially, represent the pressure method algorithm with a backward implicit, linearized free surface. The method is still formally a pressure method because in the limit of large $$\Delta t$$ the rigid-lid method is recovered. However, the implicit treatment of the free-surface allows the flow to be divergent and for the surface pressure/elevation to respond on a finite time-scale (as opposed to instantly). To recover the rigid-lid formulation, we use a switch-like variable, $$\epsilon_{fs}$$ (freesurfFac), which selects between the free-surface and rigid-lid; $$\epsilon_{fs}=1$$ allows the free-surface to evolve; $$\epsilon_{fs}=0$$ imposes the rigid-lid. The evolution in time and location of variables is exactly as it was for the rigid-lid model so that Figure 2.1 is still applicable. Similarly, the calling sequence, given here, is as for the pressure-method.
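To see what the elliptic step looks like in practice, here is a 1-D numpy sketch of equation (2.15) for the special case of constant depth $$H$$, $$\epsilon_{fs}=1$$ and a periodic domain, where the operator diagonalizes under a Fourier transform (purely illustrative; the model itself inverts the operator on the real grid with the CG2D solver, and all numerical values below are arbitrary):

```python
import numpy as np

nx, Lx = 128, 1.0e6                       # grid points, domain length [m]
g, H, dt = 9.81, 4000.0, 1200.0           # gravity, depth, time step

x = np.linspace(0.0, Lx, nx, endpoint=False)
eta_star = 0.1 * np.exp(-((x - Lx / 2) / 5.0e4) ** 2)   # some RHS field eta*

# In Fourier space, (2.15) becomes  -g*H*k**2 * eta - eta/dt**2 = -eta_star/dt**2
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
eta_hat = (np.fft.fft(eta_star) / dt**2) / (g * H * k**2 + 1.0 / dt**2)
eta_np1 = np.fft.ifft(eta_hat).real

print(eta_np1.max())
```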
## 2.5. Explicit time-stepping: Adams-Bashforth¶

In describing the pressure method above we deferred describing the time discretization of the explicit terms. We have historically used the quasi-second order Adams-Bashforth method (AB-II) for all explicit terms in both the momentum and tracer equations. This is still the default mode of operation but it is now possible to use alternate schemes for tracers (see Section 2.16), or a 3rd order Adams-Bashforth method (AB-III). In the previous sections, we summarized an explicit scheme as:
(2.18)$\tau^{*} = \tau^{n} + \Delta t G_\tau^{(n+1/2)}$
where $$\tau$$ could be any prognostic variable ($$u$$, $$v$$, $$\theta$$ or $$S$$) and $$\tau^*$$ is an explicit estimate of $$\tau^{n+1}$$ and would be exact if not for implicit-in-time terms. The parenthesis about $$n+1/2$$ indicates that the term is explicit and extrapolated forward in time. Below we describe in more detail the AB-II and AB-III schemes.
The quasi-second order Adams-Bashforth scheme is formulated as follows:
(2.19)$G_\tau^{(n+1/2)} = ( 3/2 + \epsilon_{AB}) G_\tau^n - ( 1/2 + \epsilon_{AB}) G_\tau^{n-1}$
This is a linear extrapolation, forward in time, to $$t=(n+1/2+{\epsilon_{AB}})\Delta t$$. An extrapolation to the mid-point in time, $$t=(n+1/2)\Delta t$$, corresponding to $$\epsilon_{AB}=0$$, would be second order accurate but is weakly unstable for oscillatory terms. A small but finite value for $$\epsilon_{AB}$$ stabilizes the method. Strictly speaking, damping terms such as diffusion and dissipation, and fixed terms (forcing), do not need to be inside the Adams-Bashforth extrapolation. However, in the current code, it is simpler to include these terms and this can be justified if the flow and forcing evolves smoothly. Problems can, and do, arise when forcing or motions are high frequency and this corresponds to a reduced stability compared to a simple forward time-stepping of such terms. The model offers the possibility to leave terms outside the Adams-Bashforth extrapolation, by turning off the logical flag forcing_In_AB (parameter file data, namelist PARM01, default value = TRUE) and then setting tracForcingOutAB (default=0), momForcingOutAB (default=0), and momDissip_In_AB (parameter file data, namelist PARM01, default value = TRUE), respectively for the tracer terms, momentum forcing terms, and the dissipation terms.
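A one-line Python sketch of the AB-II extrapolation (2.19) together with the forward step (2.18) it feeds (the tendency values are arbitrary):

```python
def ab2_extrapolate(G_n, G_nm1, epsAB=0.1):
    # quasi-second order Adams-Bashforth, Eq. (2.19)
    return (1.5 + epsAB) * G_n - (0.5 + epsAB) * G_nm1

dt, tau_n = 0.01, 1.0
G_n, G_nm1 = -0.30, -0.25
tau_star = tau_n + dt * ab2_extrapolate(G_n, G_nm1)   # Eq. (2.18)
print(tau_star)
```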
A stability analysis for an oscillation equation should be given at this point.
A stability analysis for a relaxation equation should be given at this point.
Figure 2.2 Oscillatory and damping response of quasi-second order Adams-Bashforth scheme for different values of the $$\epsilon _{AB}$$ parameter (0.0, 0.1, 0.25, from top to bottom) The analytical solution (in black), the physical mode (in blue) and the numerical mode (in red) are represented with a CFL step of 0.1. The left column represents the oscillatory response on the complex plane for CFL ranging from 0.1 up to 0.9. The right column represents the damping response amplitude (y-axis) function of the CFL (x-axis).
The 3rd order Adams-Bashforth time stepping (AB-III) provides several advantages (see, e.g., Durran 1991 [durran:91]) compared to the default quasi-second order Adams-Bashforth method:
• higher accuracy;
• stable with a longer time-step;
• no additional computation (just requires the storage of one additional time level).
The 3rd order Adams-Bashforth can be used to extrapolate forward in time the tendency (replacing (2.19)) as:
(2.20)$G_\tau^{(n+1/2)} = ( 1 + \alpha_{AB} + \beta_{AB}) G_\tau^n - ( \alpha_{AB} + 2 \beta_{AB}) G_\tau^{n-1} + \beta_{AB} G_\tau^{n-2}$
3rd order accuracy is obtained with $$(\alpha_{AB},\,\beta_{AB}) = (1/2,\,5/12)$$. Note that selecting $$(\alpha_{AB},\,\beta_{AB}) = (1/2+\epsilon_{AB},\,0)$$ one recovers AB-II. The AB-III time stepping improves the stability limit for an oscillatory problem like advection or Coriolis. As seen from Figure 2.3, it remains stable up to a CFL of 0.72, compared to only 0.50 with AB-II and $$\epsilon_{AB} = 0.1$$. It is interesting to note that the stability limit can be further extended up to a CFL of 0.786 for an oscillatory problem (see Figure 2.3) using $$(\alpha_{AB},\,\beta_{AB}) = (0.5,\,0.2811)$$ but then the scheme is only second order accurate.
However, the behavior of the AB-III for a damping problem (like diffusion) is less favorable, since the stability limit is reduced to 0.54 only (and 0.64 with $$\beta_{AB} = 0.2811$$) compared to 1.0 (and 0.9 with $$\epsilon_{AB} = 0.1$$) with the AB-II (see Figure 2.4).
A way to enable the use of a longer time step is to keep the dissipation terms outside the AB extrapolation (setting momDissip_In_AB to .FALSE. in the main parameter file data, namelist PARM03), thus returning to a simple forward time-stepping for dissipation, and to use AB-III only for advection and Coriolis terms.
The AB-III time stepping is activated by defining the option #define ALLOW_ADAMSBASHFORTH_3 in CPP_OPTIONS.h. The parameters $$\alpha_{AB},\beta_{AB}$$ can be set from the main parameter file data (namelist PARM03) and their default values correspond to the 3rd order Adams-Bashforth. A simple example is provided in verification/advect_xy/input.ab3_c4.
AB-III is not yet available for the vertical momentum equation (non-hydrostatic) nor for passive tracers.
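The corresponding sketch for the AB-III extrapolation (2.20); setting $$\beta_{AB}=0$$ and $$\alpha_{AB}=1/2+\epsilon_{AB}$$ reproduces the AB-II formula shown above (tendency values are arbitrary):

```python
def ab3_extrapolate(G_n, G_nm1, G_nm2, alphaAB=0.5, betaAB=5.0 / 12.0):
    # third order Adams-Bashforth, Eq. (2.20); the defaults give 3rd-order accuracy
    return ((1.0 + alphaAB + betaAB) * G_n
            - (alphaAB + 2.0 * betaAB) * G_nm1
            + betaAB * G_nm2)

print(ab3_extrapolate(-0.30, -0.25, -0.20))
```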
Figure 2.3 Oscillatory response of third order Adams-Bashforth scheme for different values of the $$(\alpha_{AB},\,\beta_{AB})$$ parameters. The analytical solution (in black), the physical mode (in blue) and the numerical mode (in red) are represented with a CFL step of 0.1.
Figure 2.4 Damping response of third order Adams-Bashforth scheme for different values of the $$(\alpha_{AB},\,\beta_{AB})$$ parameters. The analytical solution (in black), the physical mode (in blue) and the numerical mode (in red) are represented with a CFL step of 0.1.
## 2.6. Implicit time-stepping: backward method¶
Vertical diffusion and viscosity can be treated implicitly in time using the backward method which is an intrinsic scheme. Recently, the option to treat the vertical advection implicitly has been added, but not yet tested; therefore, the description hereafter is limited to diffusion and viscosity. For tracers, the time discretized equation is:
(2.21)$\tau^{n+1} - \Delta t \partial_r \kappa_v \partial_r \tau^{n+1} = \tau^{n} + \Delta t G_\tau^{(n+1/2)}$
where $$G_\tau^{(n+1/2)}$$ represents the remaining explicit terms extrapolated using the Adams-Bashforth method as described above. Equation (2.21) can be split into:
(2.22)$\tau^* = \tau^{n} + \Delta t G_\tau^{(n+1/2)}$
(2.23)$\tau^{n+1} = {\cal L}_\tau^{-1} ( \tau^* )$
where $${\cal L}_\tau^{-1}$$ is the inverse of the operator
${\cal L}_\tau = \left[ 1 - \Delta t \partial_r \kappa_v \partial_r \right]$
Equation (2.22) looks exactly like (2.18) while (2.23) involves an operator or matrix inversion. By re-arranging (2.21) in this way we have cast the method as an explicit prediction step and an implicit step allowing the latter to be inserted into the overall algorithm with minimal interference.
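A small numpy sketch of the backward step (2.23) on a single column, i.e. solving $$\left[ 1 - \Delta t \partial_r \kappa_v \partial_r \right] \tau^{n+1} = \tau^*$$ with second-order centered differences (the grid, diffusivity and no-flux boundary treatment here are illustrative assumptions, not the MITgcm discretization):

```python
import numpy as np

nr, dr, dt, kappa = 20, 1.0, 100.0, 1.0e-3
tau_star = np.linspace(20.0, 4.0, nr)          # explicit estimate tau*

c = dt * kappa / dr**2
A = (1.0 + 2.0 * c) * np.eye(nr)
A += np.diag(np.full(nr - 1, -c), 1) + np.diag(np.full(nr - 1, -c), -1)
A[0, 0] = A[-1, -1] = 1.0 + c                  # no-flux condition at both ends

tau_np1 = np.linalg.solve(A, tau_star)         # tau^{n+1} = L^{-1}(tau*)
print(tau_np1[:3])
```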
The calling sequence for stepping forward a tracer variable such as temperature with implicit diffusion is as follows:
$$\phantom{W}$$ THERMODYNAMICS
$$\phantom{WW}$$ TEMP_INTEGRATE
$$\phantom{WWW}$$ GAD_CALC_RHS $$\phantom{xxxxxxxxxw}$$ $$G_\theta^n = G_\theta( u, \theta^n)$$
$$\phantom{WWW}$$ either
$$\phantom{WWWW}$$ EXTERNAL_FORCING $$\phantom{xxxx}$$ $$G_\theta^n = G_\theta^n + {\cal Q}$$
$$\phantom{WWWW}$$ ADAMS_BASHFORTH2 $$\phantom{xxi}$$ $$G_\theta^{(n+1/2)}$$ (2.19)
$$\phantom{WWW}$$ or
$$\phantom{WWWW}$$ EXTERNAL_FORCING $$\phantom{xxxx}$$ $$G_\theta^{(n+1/2)} = G_\theta^{(n+1/2)} + {\cal Q}$$
$$\phantom{WW}$$ TIMESTEP_TRACER $$\phantom{xxxxxxxxxx}$$ $$\tau^*$$ (2.18)
$$\phantom{WW}$$ IMPLDIFF $$\phantom{xxxxxxxxxxxxxxxxxw}$$ $$\tau^{(n+1)}$$ (2.23)
In order to fit within the pressure method, the implicit viscosity must not alter the barotropic flow. In other words, it can only redistribute momentum in the vertical. The upshot of this is that although vertical viscosity may be backward implicit and unconditionally stable, no-slip boundary conditions may not be made implicit and are thus cast as an explicit drag term.
## 2.7. Synchronous time-stepping: variables co-located in time¶
Figure 2.5 A schematic of the explicit Adams-Bashforth and implicit time-stepping phases of the algorithm. All prognostic variables are co-located in time. Explicit tendencies are evaluated at time level $$n$$ as a function of the state at that time level (dotted arrow). The explicit tendency from the previous time level, $$n-1$$, is used to extrapolate tendencies to $$n+1/2$$ (dashed arrow). This extrapolated tendency allows variables to be stably integrated forward-in-time to render an estimate ($$*$$ -variables) at the $$n+1$$ time level (solid arc-arrow). The operator $${\cal L}$$ formed from implicit-in-time terms is solved to yield the state variables at time level $$n+1$$.
The Adams-Bashforth extrapolation of explicit tendencies fits neatly into the pressure method algorithm when all state variables are co-located in time. The algorithm can be represented by the sequential solution of the follow equations:
(2.24)$G_{\theta,S}^{n} = G_{\theta,S} ( u^n, \theta^n, S^n )$
(2.25)$G_{\theta,S}^{(n+1/2)} = (3/2+\epsilon_{AB}) G_{\theta,S}^{n}-(1/2+\epsilon_{AB}) G_{\theta,S}^{n-1}$
(2.26)$(\theta^*,S^*) = (\theta^{n},S^{n}) + \Delta t G_{\theta,S}^{(n+1/2)}$
(2.27)$(\theta^{n+1},S^{n+1}) = {\cal L}^{-1}_{\theta,S} (\theta^*,S^*)$
(2.28)$\phi^n_{hyd} = \int b(\theta^n,S^n) dr$
(2.29)$\vec{\bf G}_{\vec{\bf v}}^{n} = \vec{\bf G}_{\vec{\bf v}} ( \vec{\bf v}^n, \phi^n_{hyd} )$
(2.30)$\vec{\bf G}_{\vec{\bf v}}^{(n+1/2)} = (3/2 + \epsilon_{AB} ) \vec{\bf G}_{\vec{\bf v}}^{n} - (1/2 + \epsilon_{AB} ) \vec{\bf G}_{\vec{\bf v}}^{n-1}$
(2.31)$\vec{\bf v}^{*} = \vec{\bf v}^{n} + \Delta t \vec{\bf G}_{\vec{\bf v}}^{(n+1/2)}$
(2.32)$\vec{\bf v}^{**} = {\cal L}_{\vec{\bf v}}^{-1} ( \vec{\bf v}^* )$
(2.33)$\eta^* = \epsilon_{fs} \left( \eta^{n} + \Delta t ({\mathcal{P-E}}) \right)- \Delta t \nabla \cdot H \widehat{ \vec{\bf v}^{**} }$
(2.34)$\nabla \cdot g H \nabla \eta^{n+1} - \frac{\epsilon_{fs} \eta^{n+1}}{\Delta t^2} ~ = ~ - \frac{\eta^*}{\Delta t^2}$
(2.35)$\vec{\bf v}^{n+1} = \vec{\bf v}^{**} - \Delta t g \nabla \eta^{n+1}$
Figure 2.5 illustrates the location of variables in time and evolution of the algorithm with time. The Adams-Bashforth extrapolation of the tracer tendencies is illustrated by the dashed arrow, the prediction at $$n+1$$ is indicated by the solid arc. Inversion of the implicit terms, $${\cal L}^{-1}_{\theta,S}$$, then yields the new tracer fields at $$n+1$$. All these operations are carried out in subroutine THERMODYNAMICS and subsidiaries, which correspond to equations (2.24) to (2.27). Similarly illustrated is the Adams-Bashforth extrapolation of accelerations, stepping forward and solving of implicit viscosity and surface pressure gradient terms, corresponding to equations (2.29) to (2.35). These operations are carried out in subroutines DYNAMICS, SOLVE_FOR_PRESSURE and MOMENTUM_CORRECTION_STEP. This, then, represents an entire algorithm for stepping forward the model one time-step. The corresponding calling tree for the overall synchronous algorithm using Adams-Bashforth time-stepping is given below. The place where the model geometry (hFac factors) is updated is added here but is only relevant for the non-linear free-surface algorithm. For completeness, the external forcing, ocean and atmospheric physics have been added, although they are mainly optional.
$$\phantom{WWW}$$ EXTERNAL_FIELDS_LOAD
$$\phantom{WWW}$$ DO_ATMOSPHERIC_PHYS
$$\phantom{WWW}$$ DO_OCEANIC_PHYS
$$\phantom{WW}$$ THERMODYNAMICS
$$\phantom{WWW}$$ CALC_GT
$$\phantom{WWWW}$$ GAD_CALC_RHS $$\phantom{xxxxxxxxxxxxxlwww}$$ $$G_\theta^n = G_\theta( u, \theta^n )$$ (2.24)
$$\phantom{WWWW}$$ EXTERNAL_FORCING $$\phantom{xxxxxxxxxxlww}$$ $$G_\theta^n = G_\theta^n + {\cal Q}$$
$$\phantom{WWWW}$$ ADAMS_BASHFORTH2 $$\phantom{xxxxxxxxxxxw}$$ $$G_\theta^{(n+1/2)}$$ (2.25)
$$\phantom{WWW}$$ TIMESTEP_TRACER $$\phantom{xxxxxxxxxxxxxxxww}$$ $$\theta^*$$ (2.26)
$$\phantom{WWW}$$ IMPLDIFF $$\phantom{xxxxxxxxxxxxxxxxxxxxxvwww}$$ $$\theta^{(n+1)}$$ (2.27)
$$\phantom{WW}$$ DYNAMICS
$$\phantom{WWW}$$ CALC_PHI_HYD $$\phantom{xxxxxxxxxxxxxxxxxxxxi}$$ $$\phi_{hyd}^n$$ (2.28)
$$\phantom{WWW}$$ MOM_FLUXFORM or MOM_VECINV $$\phantom{xxi}$$ $$G_{\vec{\bf v}}^n$$ (2.29)
$$\phantom{WWW}$$ TIMESTEP $$\phantom{xxxxxxxxxxxxxxxxxxxxxxxxxx}$$ $$\vec{\bf v}^*$$ (2.30), (2.31)
$$\phantom{WWW}$$ IMPLDIFF $$\phantom{xxxxxxxxxxxxxxxxxxxxxxxxlw}$$ $$\vec{\bf v}^{**}$$ (2.32)
$$\phantom{WW}$$ UPDATE_R_STAR or UPDATE_SURF_DR (NonLin-FS only)
$$\phantom{WW}$$ SOLVE_FOR_PRESSURE
$$\phantom{WWW}$$ CALC_DIV_GHAT $$\phantom{xxxxxxxxxxxxxxxxxxxx}$$ $$\eta^*$$ (2.33)
$$\phantom{WWW}$$ CG2D $$\phantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxi}$$ $$\eta^{n+1}$$ (2.34)
$$\phantom{WW}$$ MOMENTUM_CORRECTION_STEP
$$\phantom{WWW}$$ CALC_GRAD_PHI_SURF $$\phantom{xxxxxxxxxxxxxx}$$ $$\nabla \eta^{n+1}$$
$$\phantom{WWW}$$ CORRECTION_STEP $$\phantom{xxxxxxxxxxxxxxxxw}$$ $$u^{n+1},v^{n+1}$$ (2.35)
$$\phantom{WW}$$ TRACERS_CORRECTION_STEP
$$\phantom{WWW}$$ CYCLE_TRACER $$\phantom{xxxxxxxxxxxxxxxxxxxxx}$$ $$\theta^{n+1}$$
$$\phantom{WWW}$$ CONVECTIVE_ADJUSTMENT
## 2.8. Staggered baroclinic time-stepping¶
Figure 2.6 A schematic of the explicit Adams-Bashforth and implicit time-stepping phases of the algorithm but with staggering in time of thermodynamic variables with the flow. Explicit momentum tendencies are evaluated at time level $$n-1/2$$ as a function of the flow field at that time level $$n-1/2$$. The explicit tendency from the previous time level, $$n-3/2$$, is used to extrapolate tendencies to $$n$$ (dashed arrow). The hydrostatic pressure/geo-potential $$\phi _{hyd}$$ is evaluated directly at time level $$n$$ (vertical arrows) and used with the extrapolated tendencies to step forward the flow variables from $$n-1/2$$ to $$n+1/2$$ (solid arc-arrow). The implicit-in-time operator $${\cal L}_{\bf u,v}$$ (vertical arrows) is then applied to the previous estimation of the the flow field ($$*$$ -variables) and yields to the two velocity components $$u,v$$ at time level $$n+1/2$$. These are then used to calculate the advection term (dashed arc-arrow) of the thermo-dynamics tendencies at time step $$n$$. The extrapolated thermodynamics tendency, from time level $$n-1$$ and $$n$$ to $$n+1/2$$, allows thermodynamic variables to be stably integrated forward-in-time (solid arc-arrow) up to time level $$n+1$$.
For well-stratified problems, internal gravity waves may be the limiting process for determining a stable time-step. In this circumstance, it is more efficient to stagger in time the thermodynamic variables with the flow variables. Figure 2.6 illustrates the staggering and algorithm. The key difference between this and Figure 2.5 is that the thermodynamic variables are solved after the dynamics, using the recently updated flow field. This essentially allows the gravity wave terms to leap-frog in time giving second order accuracy and more stability.
The essential change in the staggered algorithm is that the thermodynamics solver is delayed by half a time step, allowing the use of the most recent velocities to compute the advection terms. Once the thermodynamics fields are updated, the hydrostatic pressure is computed to step forward the dynamics. Note that the pressure gradient must also be taken out of the Adams-Bashforth extrapolation. Also, retaining the integer time-levels, $$n$$ and $$n+1$$, does not give a user the sense of where variables are located in time. Instead, we re-write the entire algorithm, (2.24) to (2.35), annotating the position in time of variables appropriately:
(2.36)$\phi^{n}_{hyd} = \int b(\theta^{n},S^{n}) dr$
(2.37)$\vec{\bf G}_{\vec{\bf v}}^{n-1/2} = \vec{\bf G}_{\vec{\bf v}} ( \vec{\bf v}^{n-1/2} )$
(2.38)$\vec{\bf G}_{\vec{\bf v}}^{(n)} = (3/2 + \epsilon_{AB} ) \vec{\bf G}_{\vec{\bf v}}^{n-1/2} - (1/2 + \epsilon_{AB} ) \vec{\bf G}_{\vec{\bf v}}^{n-3/2}$
(2.39)$\vec{\bf v}^{*} = \vec{\bf v}^{n-1/2} + \Delta t \left( \vec{\bf G}_{\vec{\bf v}}^{(n)} - \nabla \phi_{hyd}^{n} \right)$
(2.40)$\vec{\bf v}^{**} = {\cal L}_{\vec{\bf v}}^{-1} ( \vec{\bf v}^* )$
(2.41)$\eta^* = \epsilon_{fs} \left( \eta^{n-1/2} + \Delta t ({\mathcal{P-E}})^n \right)- \Delta t \nabla \cdot H \widehat{ \vec{\bf v}^{**} }$
(2.42)$\nabla \cdot g H \nabla \eta^{n+1/2} - \frac{\epsilon_{fs} \eta^{n+1/2}}{\Delta t^2} ~ = ~ - \frac{\eta^*}{\Delta t^2}$
(2.43)$\vec{\bf v}^{n+1/2} = \vec{\bf v}^{**} - \Delta t g \nabla \eta^{n+1/2}$
(2.44)$G_{\theta,S}^{n} = G_{\theta,S} ( u^{n+1/2}, \theta^{n}, S^{n} )$
(2.45)$G_{\theta,S}^{(n+1/2)} = (3/2+\epsilon_{AB}) G_{\theta,S}^{n}-(1/2+\epsilon_{AB}) G_{\theta,S}^{n-1}$
(2.46)$(\theta^*,S^*) = (\theta^{n},S^{n}) + \Delta t G_{\theta,S}^{(n+1/2)}$
(2.47)$(\theta^{n+1},S^{n+1}) = {\cal L}^{-1}_{\theta,S} (\theta^*,S^*)$
The corresponding calling tree is given below. The staggered algorithm is activated with the run-time flag staggerTimeStep =.TRUE. in parameter file data, namelist PARM01.
$$\phantom{WWW}$$ EXTERNAL_FIELDS_LOAD
$$\phantom{WWW}$$ DO_ATMOSPHERIC_PHYS
$$\phantom{WWW}$$ DO_OCEANIC_PHYS
$$\phantom{WW}$$ DYNAMICS
$$\phantom{WWW}$$ CALC_PHI_HYD $$\phantom{xxxxxxxxxxxxxxxxxxxxi}$$ $$\phi_{hyd}^n$$ (2.36)
$$\phantom{WWW}$$ MOM_FLUXFORM or MOM_VECINV $$\phantom{xxi}$$ $$G_{\vec{\bf v}}^{n-1/2}$$ (2.37)
$$\phantom{WWW}$$ TIMESTEP $$\phantom{xxxxxxxxxxxxxxxxxxxxxxxxxx}$$ $$\vec{\bf v}^*$$ (2.38), (2.39)
$$\phantom{WWW}$$ IMPLDIFF $$\phantom{xxxxxxxxxxxxxxxxxxxxxxxxlw}$$ $$\vec{\bf v}^{**}$$ (2.40)
$$\phantom{WW}$$ UPDATE_R_STAR or UPDATE_SURF_DR (NonLin-FS only)
$$\phantom{WW}$$ SOLVE_FOR_PRESSURE
$$\phantom{WWW}$$ CALC_DIV_GHAT $$\phantom{xxxxxxxxxxxxxxxxxxxx}$$ $$\eta^*$$ (2.41)
$$\phantom{WWW}$$ CG2D $$\phantom{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxi}$$ $$\eta^{n+1/2}$$ (2.42)
$$\phantom{WW}$$ MOMENTUM_CORRECTION_STEP
$$\phantom{WWW}$$ CALC_GRAD_PHI_SURF $$\phantom{xxxxxxxxxxxxxx}$$ $$\nabla \eta^{n+1/2}$$
$$\phantom{WWW}$$ CORRECTION_STEP $$\phantom{xxxxxxxxxxxxxxxxw}$$ $$u^{n+1/2},v^{n+1/2}$$ (2.43)
$$\phantom{WW}$$ THERMODYNAMICS
$$\phantom{WWW}$$ CALC_GT
$$\phantom{WWWW}$$ GAD_CALC_RHS $$\phantom{xxxxxxxxxxxxxlwww}$$ $$G_\theta^n = G_\theta( u, \theta^n )$$ (2.44)
$$\phantom{WWWW}$$ EXTERNAL_FORCING $$\phantom{xxxxxxxxxxlww}$$ $$G_\theta^n = G_\theta^n + {\cal Q}$$
$$\phantom{WWWW}$$ ADAMS_BASHFORTH2 $$\phantom{xxxxxxxxxxxw}$$ $$G_\theta^{(n+1/2)}$$ (2.45)
$$\phantom{WWW}$$ TIMESTEP_TRACER $$\phantom{xxxxxxxxxxxxxxxww}$$ $$\theta^*$$ (2.46)
$$\phantom{WWW}$$ IMPLDIFF $$\phantom{xxxxxxxxxxxxxxxxxxxxxvwww}$$ $$\theta^{(n+1)}$$ (2.47)
$$\phantom{WW}$$ TRACERS_CORRECTION_STEP
$$\phantom{WWW}$$ CYCLE_TRACER $$\phantom{xxxxxxxxxxxxxxxxxxxxx}$$ $$\theta^{n+1}$$
$$\phantom{WWW}$$ CONVECTIVE_ADJUSTMENT
The only difficulty with this approach is apparent in equation (2.44) and illustrated by the dotted arrow connecting $$u,v^{n+1/2}$$ with $$G_\theta^{n}$$. The flow used to advect tracers around is not naturally located in time. This could be avoided by applying the Adams-Bashforth extrapolation to the tracer field itself and advecting that around but this approach is not yet available. We’re not aware of any detrimental effect of this feature. The difficulty lies mainly in interpretation of what time-level variables and terms correspond to.
## 2.9. Non-hydrostatic formulation¶
The non-hydrostatic formulation re-introduces the full vertical momentum equation and requires the solution of a 3-D elliptic equation for the non-hydrostatic pressure perturbation. We still integrate vertically for the hydrostatic pressure and solve a 2-D elliptic equation for the surface pressure/elevation, since this reduces the amount of work needed to solve for the non-hydrostatic pressure.
The momentum equations are discretized in time as follows:
(2.48)$\frac{1}{\Delta t} u^{n+1} + g \partial_x \eta^{n+1} + \partial_x \phi_{nh}^{n+1} = \frac{1}{\Delta t} u^{n} + G_u^{(n+1/2)}$
(2.49)$\frac{1}{\Delta t} v^{n+1} + g \partial_y \eta^{n+1} + \partial_y \phi_{nh}^{n+1} = \frac{1}{\Delta t} v^{n} + G_v^{(n+1/2)}$
(2.50)$\frac{1}{\Delta t} w^{n+1} + \partial_r \phi_{nh}^{n+1} = \frac{1}{\Delta t} w^{n} + G_w^{(n+1/2)}$
which must satisfy the discrete-in-time depth integrated continuity, equation (2.11) and the local continuity equation
(2.51)$\partial_x u^{n+1} + \partial_y v^{n+1} + \partial_r w^{n+1} = 0$
As before, the explicit predictions for momentum are consolidated as:
\begin{split}\begin{aligned} u^* & = & u^n + \Delta t G_u^{(n+1/2)} \\ v^* & = & v^n + \Delta t G_v^{(n+1/2)} \\ w^* & = & w^n + \Delta t G_w^{(n+1/2)}\end{aligned}\end{split}
but this time we introduce an intermediate step by splitting the tendency of the flow as follows:
\begin{split}\begin{aligned} u^{n+1} = u^{**} - \Delta t \partial_x \phi_{nh}^{n+1} & & u^{**} = u^{*} - \Delta t g \partial_x \eta^{n+1} \\ v^{n+1} = v^{**} - \Delta t \partial_y \phi_{nh}^{n+1} & & v^{**} = v^{*} - \Delta t g \partial_y \eta^{n+1}\end{aligned}\end{split}
Substituting into the depth integrated continuity (equation (2.11)) gives
(2.52)$\partial_x H \partial_x \left( g \eta^{n+1} + \widehat{\phi}_{nh}^{n+1} \right) + \partial_y H \partial_y \left( g \eta^{n+1} + \widehat{\phi}_{nh}^{n+1} \right) - \frac{\epsilon_{fs}\eta^{n+1}}{\Delta t^2} = - \frac{\eta^*}{\Delta t^2}$
which is approximated by equation (2.15) on the basis that i) $$\phi_{nh}^{n+1}$$ is not yet known and ii) $$\nabla \widehat{\phi}_{nh} \ll g \nabla \eta$$. If (2.15) is solved accurately then the implication is that $$\widehat{\phi}_{nh} \approx 0$$ so that the non-hydrostatic pressure field does not drive barotropic motion.
The flow must satisfy non-divergence (equation (2.51)) locally, as well as depth integrated, and this constraint is used to form a 3-D elliptic equation for $$\phi_{nh}^{n+1}$$:
(2.53)$\partial_{xx} \phi_{nh}^{n+1} + \partial_{yy} \phi_{nh}^{n+1} + \partial_{rr} \phi_{nh}^{n+1} = \partial_x u^{**} + \partial_y v^{**} + \partial_r w^{*}$
The entire algorithm can be summarized as the sequential solution of the following equations:
(2.54)$u^{*} = u^{n} + \Delta t G_u^{(n+1/2)}$
(2.55)$v^{*} = v^{n} + \Delta t G_v^{(n+1/2)}$
(2.56)$w^{*} = w^{n} + \Delta t G_w^{(n+1/2)}$
(2.57)$\eta^* ~ = ~ \epsilon_{fs} \left( \eta^{n} + \Delta t ({\mathcal{P-E}}) \right) - \Delta t \left( \partial_x H \widehat{u^{*}} + \partial_y H \widehat{v^{*}} \right)$
(2.58)$\partial_x g H \partial_x \eta^{n+1} + \partial_y g H \partial_y \eta^{n+1} - \frac{\epsilon_{fs} \eta^{n+1}}{\Delta t^2} ~ = ~ - \frac{\eta^*}{\Delta t^2}$
(2.59)$u^{**} = u^{*} - \Delta t g \partial_x \eta^{n+1}$
(2.60)$v^{**} = v^{*} - \Delta t g \partial_y \eta^{n+1}$
(2.61)$\partial_{xx} \phi_{nh}^{n+1} + \partial_{yy} \phi_{nh}^{n+1} + \partial_{rr} \phi_{nh}^{n+1} = \partial_x u^{**} + \partial_y v^{**} + \partial_r w^{*}$
(2.62)$u^{n+1} = u^{**} - \Delta t \partial_x \phi_{nh}^{n+1}$
(2.63)$v^{n+1} = v^{**} - \Delta t \partial_y \phi_{nh}^{n+1}$
(2.64)$\partial_r w^{n+1} = - \partial_x u^{n+1} - \partial_y v^{n+1}$
where the last equation is solved by vertically integrating for $$w^{n+1}$$.
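To make the sequencing above concrete, the following NumPy sketch steps through the velocity part of the algorithm, (2.54)–(2.56), (2.59)–(2.60) and (2.62)–(2.64), on a doubly periodic grid. It is an illustration only: the function and array names are ours, the elliptic solves (2.58) and (2.61) are assumed to have been performed elsewhere (their results enter as eta and phi_nh), and the C-grid staggering and boundary conditions are deliberately simplified.

```python
import numpy as np

def nonhydrostatic_velocity_update(u, v, Gu, Gv, eta, phi_nh, dx, dy, dr, dt, g=9.81):
    """Velocity updates of the fractional step, given the surface elevation
    eta (from the 2-D solve (2.58)) and the non-hydrostatic pressure phi_nh
    (from the 3-D solve (2.61)).  Doubly periodic, schematic differences;
    not MITgcm code."""
    ddx = lambda f: (f - np.roll(f, 1, axis=0)) / dx     # backward difference, periodic
    ddy = lambda f: (f - np.roll(f, 1, axis=1)) / dy
    # explicit predictors (2.54)-(2.55); w* would only be needed in the RHS of (2.61)
    u_star = u + dt * Gu
    v_star = v + dt * Gv
    # surface-pressure correction (2.59)-(2.60)
    u_2star = u_star - dt * g * ddx(eta)[:, :, None]
    v_2star = v_star - dt * g * ddy(eta)[:, :, None]
    # non-hydrostatic pressure correction (2.62)-(2.63)
    u_new = u_2star - dt * ddx(phi_nh)
    v_new = v_2star - dt * ddy(phi_nh)
    # vertical velocity from the continuity equation (2.64), integrated in r
    div_h = (np.roll(u_new, -1, axis=0) - u_new) / dx \
          + (np.roll(v_new, -1, axis=1) - v_new) / dy
    w_new = -np.cumsum(div_h * dr, axis=2)               # schematic integration from the surface
    return u_new, v_new, w_new

# smoke test on an 8x8x4 periodic box (values are arbitrary)
rng = np.random.default_rng(0)
shape = (8, 8, 4)
zeros = np.zeros(shape)
Gu, Gv = 1e-6 * rng.standard_normal(shape), 1e-6 * rng.standard_normal(shape)
eta, phi = 1e-3 * rng.standard_normal(shape[:2]), 1e-3 * rng.standard_normal(shape)
u1, v1, w1 = nonhydrostatic_velocity_update(zeros, zeros, Gu, Gv, eta, phi,
                                            dx=1e3, dy=1e3, dr=10.0, dt=60.0)
```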
## 2.10. Variants on the Free Surface¶
We now describe the various formulations of the free-surface that include non-linear forms, implicit in time using Crank-Nicolson, explicit and [one day] split-explicit. First, we’ll reiterate the underlying algorithm but this time using the notation consistent with the more general vertical coordinate $$r$$. The elliptic equation for free-surface coordinate (units of $$r$$), corresponding to (2.11), and assuming no non-hydrostatic effects ($$\epsilon_{nh} = 0$$) is:
(2.65)$\epsilon_{fs} {\eta}^{n+1} - {\bf \nabla}_h \cdot \Delta t^2 (R_o-R_{fixed}) {\bf \nabla}_h b_s {\eta}^{n+1} = {\eta}^*$
where
(2.66)${\eta}^* = \epsilon_{fs} \: {\eta}^{n} - \Delta t {\bf \nabla}_h \cdot \int_{R_{fixed}}^{R_o} \vec{\bf v}^* dr \: + \: \epsilon_{fw} \Delta t ({\mathcal{P-E}})^{n}$
$$u^*$$ : gU ( DYNVARS.h )
$$v^*$$ : gV ( DYNVARS.h )
$${\eta}^*$$ : cg2d_b ( SOLVE_FOR_PRESSURE.h )
$${\eta}^{n+1}$$ : etaN ( DYNVARS.h )
Once $${\eta}^{n+1}$$ has been found, substituting into (2.2), (2.3) yields $$\vec{\bf v}^{n+1}$$ if the model is hydrostatic ($$\epsilon_{nh}=0$$):
$\vec{\bf v}^{n+1} = \vec{\bf v}^{*} - \Delta t {\bf \nabla}_h b_s {\eta}^{n+1}$
This is known as the correction step. However, when the model is non-hydrostatic ($$\epsilon_{nh}=1$$) we need an additional step and an additional equation for $$\phi'_{nh}$$. This is obtained by substituting (2.48), (2.49) and (2.50) into continuity:
(2.67)$[ {\bf \nabla}_h^2 + \partial_{rr} ] {\phi'_{nh}}^{n+1} = \frac{1}{\Delta t} {\bf \nabla}_h \cdot \vec{\bf v}^{**} + \partial_r \dot{r}^*$
where
$\vec{\bf v}^{**} = \vec{\bf v}^* - \Delta t {\bf \nabla}_h b_s {\eta}^{n+1}$
Note that $$\eta^{n+1}$$ is also used to update the second RHS term $$\partial_r \dot{r}^*$$ since the vertical velocity at the surface ($$\dot{r}_{surf}$$) is evaluated as $$(\eta^{n+1} - \eta^n) / \Delta t$$.
Finally, the horizontal velocities at the new time level are found by:
(2.68)$\vec{\bf v}^{n+1} = \vec{\bf v}^{**} - \epsilon_{nh} \Delta t {\bf \nabla}_h {\phi'_{nh}}^{n+1}$
and the vertical velocity is found by integrating the continuity equation vertically. Note that, for the convenience of the restart procedure, the vertical integration of the continuity equation has been moved to the beginning of the time step (instead of at the end), without any consequence on the solution.
$${\eta}^{n+1}$$ : etaN ( DYNVARS.h )
$${\phi}^{n+1}_{nh}$$ : phi_nh ( NH_VARS.h )
$$u^*$$ : gU ( DYNVARS.h )
$$v^*$$ : gV ( DYNVARS.h )
$$u^{n+1}$$ : uVel ( DYNVARS.h )
$$v^{n+1}$$ : vVel ( DYNVARS.h )
Regarding the implementation of the surface pressure solver, all computations are done within the routine SOLVE_FOR_PRESSURE and its dependent calls. The standard method to solve the 2D elliptic problem (2.65) uses the conjugate gradient method (routine CG2D); the solver matrix and conjugate gradient operator are only functions of the discretized domain and are therefore evaluated separately, before the time iteration loop, within INI_CG2D. The computation of the RHS $$\eta^*$$ is partly done in CALC_DIV_GHAT and in SOLVE_FOR_PRESSURE.
The same method is applied for the non-hydrostatic part, using a conjugate gradient 3-D solver (CG3D) that is initialized in INI_CG3D. The RHS terms of the 2-D and 3-D problems are computed together at the same point in the code.
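As an illustration of the kind of matrix-free conjugate gradient iteration performed by CG2D, the sketch below solves a constant-depth, doubly periodic version of (2.65) in NumPy. The operator, names and parameter values are ours; the actual solver matrix, land-mask handling and preconditioning are set up in INI_CG2D.

```python
import numpy as np

def cg_solve(apply_A, b, tol=1e-10, max_iter=500):
    """Bare-bones, unpreconditioned conjugate gradient for a symmetric
    positive definite operator supplied as a callable (matrix-free)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = np.sum(r * r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / np.sum(p * Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.sum(r * r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def make_surface_operator(H, dx, dy, dt, g=9.81, eps_fs=1.0):
    """Constant-depth, doubly periodic stand-in for the operator in (2.65):
    eps_fs*eta - dt^2 g H lap(eta)."""
    def apply_A(eta):
        lap = ((np.roll(eta, -1, 0) - 2 * eta + np.roll(eta, 1, 0)) / dx**2
               + (np.roll(eta, -1, 1) - 2 * eta + np.roll(eta, 1, 1)) / dy**2)
        return eps_fs * eta - dt**2 * g * H * lap
    return apply_A

# solve A eta = eta* for a random right-hand side
apply_A = make_surface_operator(H=4000.0, dx=1e4, dy=1e4, dt=1200.0)
eta_star = np.random.default_rng(1).standard_normal((32, 32))
eta_new = cg_solve(apply_A, eta_star)
```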
## 2.11. Spatial discretization of the dynamical equations¶
Spatial discretization is carried out using the finite volume method. This amounts to a grid-point method (namely second-order centered finite difference) in the fluid interior, but allows boundaries to intersect a regular grid, giving a more accurate representation of the position of the boundary. The horizontal and vertical directions are treated as separable and are discretized differently.
## 2.12. Continuity and horizontal pressure gradient term¶
The core algorithm is based on the “C grid” discretization of the continuity equation which can be summarized as:
(2.80)$\partial_t u + \frac{1}{\Delta x_c} \delta_i \left. \frac{ \partial \Phi}{\partial r}\right|_{s} \eta + \frac{\epsilon_{nh}}{\Delta x_c} \delta_i \Phi_{nh}' = G_u - \frac{1}{\Delta x_c} \delta_i \Phi_h'$
(2.81)$\partial_t v + \frac{1}{\Delta y_c} \delta_j \left. \frac{ \partial \Phi}{\partial r}\right|_{s} \eta + \frac{\epsilon_{nh}}{\Delta y_c} \delta_j \Phi_{nh}' = G_v - \frac{1}{\Delta y_c} \delta_j \Phi_h'$
(2.82)$\epsilon_{nh} \left( \partial_t w + \frac{1}{\Delta r_c} \delta_k \Phi_{nh}' \right) = \epsilon_{nh} G_w + \overline{b}^k - \frac{1}{\Delta r_c} \delta_k \Phi_{h}'$
(2.83)$\delta_i \Delta y_g \Delta r_f h_w u + \delta_j \Delta x_g \Delta r_f h_s v + \delta_k {\cal A}_c w = {\cal A}_c \delta_k (\mathcal{P-E})_{r=0}$
where the continuity equation has been most naturally discretized by staggering the three components of velocity as shown in Figure 2.7. The grid lengths $$\Delta x_c$$ and $$\Delta y_c$$ are the lengths between tracer points (cell centers). The grid lengths $$\Delta x_g$$, $$\Delta y_g$$ are the grid lengths between cell corners. $$\Delta r_f$$ and $$\Delta r_c$$ are the distances (in units of $$r$$) between level interfaces (w-level) and level centers (tracer level). The surface area presented in the vertical is denoted $${\cal A}_c$$. The factors $$h_w$$ and $$h_s$$ are non-dimensional fractions (between 0 and 1) that represent the fraction of cell depth that is “open” for fluid flow.
The last equation, the discrete continuity equation, can be summed in the vertical to yield the free-surface equation:
(2.84)${\cal A}_c \partial_t \eta + \delta_i \sum_k \Delta y_g \Delta r_f h_w u + \delta_j \sum_k \Delta x_g \Delta r_f h_s v = {\cal A}_c(\mathcal{P-E})_{r=0}$
The source term $$\mathcal{P-E}$$ on the rhs of continuity accounts for the local addition of volume due to excess precipitation and run-off over evaporation and only enters the top-level of the ocean model.
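A minimal NumPy rendering of (2.84) follows: the open-cell volume fluxes are summed in the vertical, differenced onto tracer cells and combined with the fresh-water source. Array names and the periodic index conventions are ours, not the model's.

```python
import numpy as np

def free_surface_tendency(u, v, hw, hs, drf, dxg, dyg, area_c, p_minus_e):
    """d(eta)/dt from the vertically summed continuity equation (2.84),
    on a doubly periodic grid; purely illustrative."""
    # depth-integrated volume transports through west and south cell faces
    U = np.sum(dyg[:, :, None] * drf[None, None, :] * hw * u, axis=2)
    V = np.sum(dxg[:, :, None] * drf[None, None, :] * hs * v, axis=2)
    # delta_i U + delta_j V, the divergence of the depth-integrated transport
    div = (np.roll(U, -1, axis=0) - U) + (np.roll(V, -1, axis=1) - V)
    return p_minus_e - div / area_c
```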
## 2.13. Hydrostatic balance¶
The vertical momentum equation has the hydrostatic or quasi-hydrostatic balance on the right hand side. This discretization guarantees that the conversion of potential to kinetic energy as derived from the buoyancy equation exactly matches the form derived from the pressure gradient terms when forming the kinetic energy equation.
In the ocean, using z-coordinates, the hydrostatic balance terms are discretized:
(2.85)$\epsilon_{nh} \partial_t w + g \overline{\rho'}^k + \frac{1}{\Delta z} \delta_k \Phi_h' = \ldots$
In the atmosphere, using p-coordinates, hydrostatic balance is discretized:
(2.86)$\overline{\theta'}^k + \frac{1}{\Delta \Pi} \delta_k \Phi_h' = 0$
where $$\Delta \Pi$$ is the difference in Exner function between the pressure points. The non-hydrostatic equations are not available in the atmosphere.
The difference in approach between ocean and atmosphere occurs because of the direct use of the ideal gas equation in forming the potential energy conversion term $$\alpha \omega$$. Because of the different representation of hydrostatic balance between ocean and atmosphere there is no elegant way to represent both systems using an arbitrary coordinate.
The integration for hydrostatic pressure is made in the positive $$r$$ direction (increasing k-index). For the ocean, this is from the free-surface down and for the atmosphere this is from the ground up.
The calculations are made in the subroutine CALC_PHI_HYD. Inside this routine, one or the other of the atmospheric/oceanic forms is selected based on the string variable buoyancyRelation.
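The integration itself is just a cumulative sum in the positive-$$r$$ direction, as in the NumPy sketch below. The half-level weighting, the array layout and the meaning of the buoyancy array (for instance $$g\rho'/\rho_0$$ in the ocean, or the Exner-weighted $$\theta'$$ in the atmosphere, selected by buoyancyRelation) are simplifying assumptions of ours, not the exact discretization in CALC_PHI_HYD.

```python
import numpy as np

def integrate_phi_hyd(buoyancy, drc):
    """Cumulative integration of hydrostatic pressure in the positive-r
    direction (increasing k): from the free surface down in the ocean,
    from the ground up in the atmosphere.  Illustrative only."""
    nx, ny, nr = buoyancy.shape
    phi = np.zeros((nx, ny, nr))
    # first level: half a spacing away from the r = 0 reference surface
    phi[:, :, 0] = 0.5 * drc[0] * buoyancy[:, :, 0]
    for k in range(1, nr):
        # trapezoidal-like accumulation between level centres k-1 and k
        phi[:, :, k] = phi[:, :, k - 1] \
            + 0.5 * drc[k] * (buoyancy[:, :, k - 1] + buoyancy[:, :, k])
    return phi
```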
## 2.14. Flux-form momentum equations¶
The original finite volume model was based on the Eulerian flux form momentum equations. This is the default though the vector invariant form is optionally available (and recommended in some cases).
The “G’s” (our colloquial name for all terms on rhs!) are broken into the various advective, Coriolis, horizontal dissipation, vertical dissipation and metric forces:
(2.87)$G_u = G_u^{adv} + G_u^{cor} + G_u^{h-diss} + G_u^{v-diss} + G_u^{metric} + G_u^{nh-metric}$
(2.88)$G_v = G_v^{adv} + G_v^{cor} + G_v^{h-diss} + G_v^{v-diss} + G_v^{metric} + G_v^{nh-metric}$
(2.89)$G_w = G_w^{adv} + G_w^{cor} + G_w^{h-diss} + G_w^{v-diss} + G_w^{metric} + G_w^{nh-metric}$
In the hydrostatic limit, $$G_w=0$$ and $$\epsilon_{nh}=0$$, reducing the vertical momentum to hydrostatic balance.
These terms are calculated in routines called from subroutine MOM_FLUXFORM and collected into the global arrays gU, gV, and gW.
S/R MOM_FLUXFORM
$$G_u$$ : gU ( DYNVARS.h )
$$G_v$$ : gV ( DYNVARS.h )
$$G_w$$ : gW ( NH_VARS.h )
### 2.14.1. Advection of momentum¶
The advective operator is second order accurate in space:
(2.90)${\cal A}_w \Delta r_f h_w G_u^{adv} = \delta_i \overline{ U }^i \overline{ u }^i + \delta_j \overline{ V }^i \overline{ u }^j + \delta_k \overline{ W }^i \overline{ u }^k$
(2.91)${\cal A}_s \Delta r_f h_s G_v^{adv} = \delta_i \overline{ U }^j \overline{ v }^i + \delta_j \overline{ V }^j \overline{ v }^j + \delta_k \overline{ W }^j \overline{ v }^k$
(2.92)${\cal A}_c \Delta r_c G_w^{adv} = \delta_i \overline{ U }^k \overline{ w }^i + \delta_j \overline{ V }^k \overline{ w }^j + \delta_k \overline{ W }^k \overline{ w }^k$
and, because of the flux form, does not contribute to the global budget of linear momentum. The quantities $$U$$, $$V$$ and $$W$$ are volume fluxes defined:
(2.93)$U = \Delta y_g \Delta r_f h_w u$
(2.94)$V = \Delta x_g \Delta r_f h_s v$
(2.95)$W = {\cal A}_c w$
The advection of momentum takes the same form as the advection of tracers but by a translated advective flow. Consequently, the conservation of second moments, derived for tracers later, applies to $$u^2$$ and $$v^2$$ and $$w^2$$ so that advection of momentum correctly conserves kinetic energy.
$$uu, vu, wu$$ : fZon, fMer, fVerUkp ( local to MOM_FLUXFORM.F )
$$uv, vv, wv$$ : fZon, fMer, fVerVkp ( local to MOM_FLUXFORM.F )
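The construction of these fluxes can be sketched in a few lines of NumPy, shown below for the horizontal part of $$G_u^{adv}$$ in (2.90). The two-point averages and periodic differences are only schematic (the exact C-grid index offsets are glossed over), cell_volume stands for $${\cal A}_w \Delta r_f h_w$$, and the fZon/fMer names merely echo the local variables in MOM_FLUXFORM.F.

```python
import numpy as np

def gu_advection_fluxform(u, U, V, cell_volume):
    """Horizontal part of the flux-form advection of u momentum, in the
    spirit of (2.90): fluxes are products of two-point averaged volume
    fluxes and velocities, then differenced.  Doubly periodic, schematic."""
    avg_i = lambda f: 0.5 * (f + np.roll(f, 1, axis=0))
    avg_j = lambda f: 0.5 * (f + np.roll(f, 1, axis=1))
    fZon = avg_i(U) * avg_i(u)    # zonal flux of u momentum
    fMer = avg_i(V) * avg_j(u)    # meridional flux of u momentum
    div = (np.roll(fZon, -1, axis=0) - fZon) + (np.roll(fMer, -1, axis=1) - fMer)
    return div / cell_volume
```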
### 2.14.2. Coriolis terms¶
The “pure C grid” Coriolis terms (i.e. in absence of C-D scheme) are discretized:
(2.96)${\cal A}_w \Delta r_f h_w G_u^{Cor} = \overline{ f {\cal A}_c \Delta r_f h_c \overline{ v }^j }^i - \epsilon_{nh} \overline{ f' {\cal A}_c \Delta r_f h_c \overline{ w }^k }^i$
(2.97)${\cal A}_s \Delta r_f h_s G_v^{Cor} = - \overline{ f {\cal A}_c \Delta r_f h_c \overline{ u }^i }^j$
(2.98)${\cal A}_c \Delta r_c G_w^{Cor} = \epsilon_{nh} \overline{ f' {\cal A}_c \Delta r_f h_c \overline{ u }^i }^k$
where the Coriolis parameters $$f$$ and $$f'$$ are defined:
\begin{split}\begin{aligned} f & = & 2 \Omega \sin{\varphi} \\ f' & = & 2 \Omega \cos{\varphi}\end{aligned}\end{split}
where $$\varphi$$ is geographic latitude when using spherical geometry, otherwise the $$\beta$$-plane definition is used:
\begin{split}\begin{aligned} f & = & f_o + \beta y \\ f' & = & 0\end{aligned}\end{split}
This discretization globally conserves kinetic energy. It should be noted that despite the use of this discretization in former publications, all calculations to date have used the following different discretization:
(2.99)$G_u^{Cor} = f_u \overline{ v }^{ji} - \epsilon_{nh} f_u' \overline{ w }^{ik}$
(2.100)$G_v^{Cor} = - f_v \overline{ u }^{ij}$
(2.101)$G_w^{Cor} = \epsilon_{nh} f_w' \overline{ u }^{ik}$
where the subscripts on $$f$$ and $$f'$$ indicate evaluation of the Coriolis parameters at the appropriate points in space. The above discretization does not conserve anything, especially energy, but for historical reasons is the default for the code. A flag controls this discretization: set run-time integer selectCoriScheme to two (=2) (which otherwise defaults to zero) to select the energy-conserving form (2.96), (2.97), and (2.98) above.
$$G_u^{Cor}, G_v^{Cor}$$ : cF ( local to MOM_FLUXFORM.F )
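For illustration, the default form (2.99)–(2.100) reduces to a four-point average of the opposing velocity component multiplied by the locally evaluated Coriolis parameter, as in the NumPy sketch below (doubly periodic, with the C-grid offsets only schematic; f_u and f_v are the Coriolis parameter at $$u$$ and $$v$$ points).

```python
import numpy as np

def coriolis_tendencies(u, v, f_u, f_v):
    """Default (non-conserving) C-grid Coriolis terms (2.99)-(2.100):
    v is four-point averaged onto u points and vice versa.  Schematic."""
    avg_to_u = lambda q: 0.25 * (q + np.roll(q, 1, 0)
                                 + np.roll(q, -1, 1) + np.roll(np.roll(q, 1, 0), -1, 1))
    avg_to_v = lambda q: 0.25 * (q + np.roll(q, 1, 1)
                                 + np.roll(q, -1, 0) + np.roll(np.roll(q, 1, 1), -1, 0))
    Gu_cor = f_u * avg_to_u(v)
    Gv_cor = -f_v * avg_to_v(u)
    return Gu_cor, Gv_cor
```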
### 2.14.3. Curvature metric terms¶
The most commonly used coordinate system on the sphere is the geographic system $$(\lambda,\varphi)$$. The curvilinear nature of these coordinates on the sphere leads to some “metric” terms in the component momentum equations. Under the thin-atmosphere and hydrostatic approximations these terms are discretized:
(2.102)${\cal A}_w \Delta r_f h_w G_u^{metric} = \overline{ \frac{ \overline{u}^i }{a} \tan{\varphi} {\cal A}_c \Delta r_f h_c \overline{ v }^j }^i$
(2.103)$\begin{split}{\cal A}_s \Delta r_f h_s G_v^{metric} = - \overline{ \frac{ \overline{u}^i }{a} \tan{\varphi} {\cal A}_c \Delta r_f h_c \overline{ u }^i }^j \\\end{split}$
(2.104)$G_w^{metric} = 0$
where $$a$$ is the radius of the planet (sphericity is assumed) or the radial distance of the particle (i.e. a function of height). It is easy to see that this discretization satisfies all the properties of the discrete Coriolis terms since the metric factor $$\frac{u}{a} \tan{\varphi}$$ can be viewed as a modification of the vertical Coriolis parameter: $$f \rightarrow f+\frac{u}{a} \tan{\varphi}$$.
However, as for the Coriolis terms, a non-energy conserving form has exclusively been used to date:
\begin{split}\begin{aligned} G_u^{metric} & = & \frac{u \overline{v}^{ij} }{a} \tan{\varphi} \\ G_v^{metric} & = & \frac{ \overline{u}^{ij} \overline{u}^{ij}}{a} \tan{\varphi}\end{aligned}\end{split}
where $$\tan{\varphi}$$ is evaluated at the $$u$$ and $$v$$ points respectively.
$$G_u^{metric}, G_v^{metric}$$ : mT ( local to MOM_FLUXFORM.F )
### 2.14.4. Non-hydrostatic metric terms¶
For the non-hydrostatic equations, dropping the thin-atmosphere approximation re-introduces metric terms involving $$w$$ which are required to conserve angular momentum:
(2.105)${\cal A}_w \Delta r_f h_w G_u^{metric} = - \overline{ \frac{ \overline{u}^i \overline{w}^k }{a} {\cal A}_c \Delta r_f h_c }^i$
(2.106)${\cal A}_s \Delta r_f h_s G_v^{metric} = - \overline{ \frac{ \overline{v}^j \overline{w}^k }{a} {\cal A}_c \Delta r_f h_c}^j$
(2.107)${\cal A}_c \Delta r_c G_w^{metric} = \overline{ \frac{ {\overline{u}^i}^2 + {\overline{v}^j}^2}{a} {\cal A}_c \Delta r_f h_c }^k$
Because we are always consistent, even if consistently wrong, we have, in the past, used a different discretization in the model which is:
\begin{split}\begin{aligned} G_u^{metric} & = & - \frac{u}{a} \overline{w}^{ik} \\ G_v^{metric} & = & - \frac{v}{a} \overline{w}^{jk} \\ G_w^{metric} & = & \frac{1}{a} ( {\overline{u}^{ik}}^2 + {\overline{v}^{jk}}^2 )\end{aligned}\end{split}
$$G_u^{metric}, G_v^{metric}$$ : mT ( local to MOM_FLUXFORM.F )
### 2.14.5. Lateral dissipation¶
Historically, we have represented the SGS Reynolds stresses as simply down-gradient momentum fluxes, ignoring constraints on the stress tensor such as symmetry:
(2.108)${\cal A}_w \Delta r_f h_w G_u^{h-diss} = \delta_i \Delta y_f \Delta r_f h_c \tau_{11} + \delta_j \Delta x_v \Delta r_f h_\zeta \tau_{12}$
(2.109)${\cal A}_s \Delta r_f h_s G_v^{h-diss} = \delta_i \Delta y_u \Delta r_f h_\zeta \tau_{21} + \delta_j \Delta x_f \Delta r_f h_c \tau_{22}$
The lateral viscous stresses are discretized:
(2.110)$\tau_{11} = A_h c_{11\Delta}(\varphi) \frac{1}{\Delta x_f} \delta_i u -A_4 c_{11\Delta^2}(\varphi) \frac{1}{\Delta x_f} \delta_i \nabla^2 u$
(2.111)$\tau_{12} = A_h c_{12\Delta}(\varphi) \frac{1}{\Delta y_u} \delta_j u -A_4 c_{12\Delta^2}(\varphi)\frac{1}{\Delta y_u} \delta_j \nabla^2 u$
(2.112)$\tau_{21} = A_h c_{21\Delta}(\varphi) \frac{1}{\Delta x_v} \delta_i v -A_4 c_{21\Delta^2}(\varphi) \frac{1}{\Delta x_v} \delta_i \nabla^2 v$
(2.113)$\tau_{22} = A_h c_{22\Delta}(\varphi) \frac{1}{\Delta y_f} \delta_j v -A_4 c_{22\Delta^2}(\varphi) \frac{1}{\Delta y_f} \delta_j \nabla^2 v$
where the non-dimensional factors $$c_{lm\Delta^n}(\varphi), \{l,m,n\} \in \{1,2\}$$ define the “cosine” scaling with latitude which can be applied in various ad-hoc ways. For instance, $$c_{11\Delta} = c_{21\Delta} = (\cos{\varphi})^{3/2}$$, $$c_{12\Delta}=c_{22\Delta}=1$$ would represent the anisotropic cosine scaling typically used on the “lat-lon” grid for Laplacian viscosity.
It should be noted that despite the ad-hoc nature of the scaling, some scaling must be done since on a lat-lon grid the converging meridians make it very unlikely that a stable viscosity parameter exists across the entire model domain.
The Laplacian viscosity coefficient, $$A_h$$ (viscAh), has units of $$m^2 s^{-1}$$. The bi-harmonic viscosity coefficient, $$A_4$$ (viscA4), has units of $$m^4 s^{-1}$$.
$$\tau_{11}, \tau_{12}$$ : vF, v4F ( local to MOM_FLUXFORM.F )
$$\tau_{21}, \tau_{22}$$ : vF, v4F ( local to MOM_FLUXFORM.F )
Two types of lateral boundary condition exist for the lateral viscous terms, no-slip and free-slip.
The free-slip condition is most convenient to code since it is equivalent to zero-stress on boundaries. Simple masking of the stress components sets them to zero. The fractional open stress is properly handled using the lopped cells.
The no-slip condition defines the normal gradient of a tangential flow such that the flow is zero on the boundary. Rather than modify the stresses by using complicated functions of the masks and “ghost” points (see Adcroft and Marshall (1998) [adcroft:98]) we add the boundary stresses as an additional source term in cells next to solid boundaries. This has the advantage of being able to cope with “thin walls” and also makes the interior stress calculation (code) independent of the boundary conditions. The “body” force takes the form:
(2.114)$G_u^{side-drag} = \frac{4}{\Delta z_f} \overline{ (1-h_\zeta) \frac{\Delta x_v}{\Delta y_u} }^j \left( A_h c_{12\Delta}(\varphi) u - A_4 c_{12\Delta^2}(\varphi) \nabla^2 u \right)$
(2.115)$G_v^{side-drag} = \frac{4}{\Delta z_f} \overline{ (1-h_\zeta) \frac{\Delta y_u}{\Delta x_v} }^i \left( A_h c_{21\Delta}(\varphi) v - A_4 c_{21\Delta^2}(\varphi) \nabla^2 v \right)$
In fact, the above discretization is not quite complete because it assumes that the bathymetry at velocity points is deeper than at neighboring vorticity points, e.g. $$1-h_w < 1-h_\zeta$$.
$$G_u^{side-drag}, G_v^{side-drag}$$ : vF ( local to MOM_FLUXFORM.F )
### 2.14.6. Vertical dissipation¶
Vertical viscosity terms are discretized with only partial adherence to the variable grid lengths introduced by the finite volume formulation. This reduces the formal accuracy of these terms to just first order, but only next to boundaries, which is exactly where other terms such as linear and quadratic bottom drag appear.
(2.116)$G_u^{v-diss} = \frac{1}{\Delta r_f h_w} \delta_k \tau_{13}$
(2.117)$G_v^{v-diss} = \frac{1}{\Delta r_f h_s} \delta_k \tau_{23}$
(2.118)$G_w^{v-diss} = \epsilon_{nh} \frac{1}{\Delta r_f h_d} \delta_k \tau_{33}$
represents the general discrete form of the vertical dissipation terms.
In the interior the vertical stresses are discretized:
\begin{split}\begin{aligned} \tau_{13} & = & A_v \frac{1}{\Delta r_c} \delta_k u \\ \tau_{23} & = & A_v \frac{1}{\Delta r_c} \delta_k v \\ \tau_{33} & = & A_v \frac{1}{\Delta r_f} \delta_k w\end{aligned}\end{split}
It should be noted that in the non-hydrostatic form, the stress tensor is even less consistent than for the hydrostatic (see Wajsowicz (1993) [wajsowicz:93]). It is well known how to do this properly (see Griffies and Hallberg (2000) [griffies:00]) and is on the list of to-do’s.
$$\tau_{13}$$ : fVrUp, fVrDw ( local to MOM_FLUXFORM.F )
$$\tau_{23}$$ : fVrUp, fVrDw ( local to MOM_FLUXFORM.F )
As for the lateral viscous terms, the free-slip condition is equivalent to simply setting the stress to zero on boundaries. The no-slip condition is implemented as an additional term acting on top of the interior and free-slip stresses. Bottom drag represents additional friction, in addition to that imposed by the no-slip condition at the bottom. The drag is cast as a stress expressed as a linear or quadratic function of the mean flow in the layer above the topography:
(2.119)$\tau_{13}^{bottom-drag} = \left( 2 A_v \frac{1}{\Delta r_c} + r_b + C_d \sqrt{ \overline{2 KE}^i } \right) u$
(2.120)$\tau_{23}^{bottom-drag} = \left( 2 A_v \frac{1}{\Delta r_c} + r_b + C_d \sqrt{ \overline{2 KE}^j } \right) v$
where these terms are only evaluated immediately above topography. $$r_b$$ (bottomDragLinear) has units of $$m s^{-1}$$ and a typical value of the order 0.0002 $$m s^{-1}$$. $$C_d$$ (bottomDragQuadratic) is dimensionless with typical values in the range 0.001–0.003.
$$\tau_{13}^{bottom-drag} / \Delta r_f , \tau_{23}^{bottom-drag} / \Delta r_f$$ : vF ( local to MOM_FLUXFORM.F )
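The bottom stress itself is a simple pointwise expression, sketched below in NumPy for the layer immediately above topography; the default parameter values echo the typical magnitudes quoted above, and the averaging of $$KE$$ onto velocity points is omitted.

```python
import numpy as np

def bottom_drag_stress(u_bot, v_bot, KE_bot, drc_bot, viscAr=1e-4,
                       bottomDragLinear=2e-4, bottomDragQuadratic=2.5e-3):
    """Bottom stresses (2.119)-(2.120) acting on the layer just above the
    topography: no-slip viscous part, linear drag and quadratic drag.
    Illustrative only."""
    coeff = (2.0 * viscAr / drc_bot
             + bottomDragLinear
             + bottomDragQuadratic * np.sqrt(2.0 * KE_bot))
    return coeff * u_bot, coeff * v_bot
```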
### 2.14.7. Derivation of discrete energy conservation¶
These discrete equations conserve kinetic plus potential energy using the following definitions:
(2.121)$KE = \frac{1}{2} \left( \overline{ u^2 }^i + \overline{ v^2 }^j + \epsilon_{nh} \overline{ w^2 }^k \right)$
### 2.14.8. Mom Diagnostics¶
------------------------------------------------------------------------
<-Name->|Levs|<-parsing code->|<-- Units -->|<- Tile (max=80c)
------------------------------------------------------------------------
VISCAHZ | 15 |SZ MR |m^2/s |Harmonic Visc Coefficient (m2/s) (Zeta Pt)
VISCA4Z | 15 |SZ MR |m^4/s |Biharmonic Visc Coefficient (m4/s) (Zeta Pt)
VISCAHD | 15 |SM MR |m^2/s |Harmonic Viscosity Coefficient (m2/s) (Div Pt)
VISCA4D | 15 |SM MR |m^4/s |Biharmonic Viscosity Coefficient (m4/s) (Div Pt)
VAHZMAX | 15 |SZ MR |m^2/s |CFL-MAX Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZMAX | 15 |SZ MR |m^4/s |CFL-MAX Biharm Visc Coefficient (m4/s) (Zeta Pt)
VAHDMAX | 15 |SM MR |m^2/s |CFL-MAX Harm Visc Coefficient (m2/s) (Div Pt)
VA4DMAX | 15 |SM MR |m^4/s |CFL-MAX Biharm Visc Coefficient (m4/s) (Div Pt)
VAHZMIN | 15 |SZ MR |m^2/s |RE-MIN Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZMIN | 15 |SZ MR |m^4/s |RE-MIN Biharm Visc Coefficient (m4/s) (Zeta Pt)
VAHDMIN | 15 |SM MR |m^2/s |RE-MIN Harm Visc Coefficient (m2/s) (Div Pt)
VA4DMIN | 15 |SM MR |m^4/s |RE-MIN Biharm Visc Coefficient (m4/s) (Div Pt)
VAHZLTH | 15 |SZ MR |m^2/s |Leith Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZLTH | 15 |SZ MR |m^4/s |Leith Biharm Visc Coefficient (m4/s) (Zeta Pt)
VAHDLTH | 15 |SM MR |m^2/s |Leith Harm Visc Coefficient (m2/s) (Div Pt)
VA4DLTH | 15 |SM MR |m^4/s |Leith Biharm Visc Coefficient (m4/s) (Div Pt)
VAHZLTHD| 15 |SZ MR |m^2/s |LeithD Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZLTHD| 15 |SZ MR |m^4/s |LeithD Biharm Visc Coefficient (m4/s) (Zeta Pt)
VAHDLTHD| 15 |SM MR |m^2/s |LeithD Harm Visc Coefficient (m2/s) (Div Pt)
VA4DLTHD| 15 |SM MR |m^4/s |LeithD Biharm Visc Coefficient (m4/s) (Div Pt)
VAHZSMAG| 15 |SZ MR |m^2/s |Smagorinsky Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZSMAG| 15 |SZ MR |m^4/s |Smagorinsky Biharm Visc Coeff. (m4/s) (Zeta Pt)
VAHDSMAG| 15 |SM MR |m^2/s |Smagorinsky Harm Visc Coefficient (m2/s) (Div Pt)
VA4DSMAG| 15 |SM MR |m^4/s |Smagorinsky Biharm Visc Coeff. (m4/s) (Div Pt)
momKE | 15 |SM MR |m^2/s^2 |Kinetic Energy (in momentum Eq.)
momHDiv | 15 |SM MR |s^-1 |Horizontal Divergence (in momentum Eq.)
momVort3| 15 |SZ MR |s^-1 |3rd component (vertical) of Vorticity
Strain | 15 |SZ MR |s^-1 |Horizontal Strain of Horizontal Velocities
Tension | 15 |SM MR |s^-1 |Horizontal Tension of Horizontal Velocities
UBotDrag| 15 |UU 129MR |m/s^2 |U momentum tendency from Bottom Drag
VBotDrag| 15 |VV 128MR |m/s^2 |V momentum tendency from Bottom Drag
USidDrag| 15 |UU 131MR |m/s^2 |U momentum tendency from Side Drag
VSidDrag| 15 |VV 130MR |m/s^2 |V momentum tendency from Side Drag
Um_Diss | 15 |UU 133MR |m/s^2 |U momentum tendency from Dissipation
Vm_Diss | 15 |VV 132MR |m/s^2 |V momentum tendency from Dissipation
Um_Cori | 15 |UU 137MR |m/s^2 |U momentum tendency from Coriolis term
Vm_Cori | 15 |VV 136MR |m/s^2 |V momentum tendency from Coriolis term
Um_Ext | 15 |UU 137MR |m/s^2 |U momentum tendency from external forcing
Vm_Ext | 15 |VV 138MR |m/s^2 |V momentum tendency from external forcing
Um_AdvRe| 15 |UU 143MR |m/s^2 |U momentum tendency from vertical Advection (Explicit part)
Vm_AdvRe| 15 |VV 142MR |m/s^2 |V momentum tendency from vertical Advection (Explicit part)
ADVx_Um | 15 |UM 145MR |m^4/s^2 |Zonal Advective Flux of U momentum
ADVy_Um | 15 |VZ 144MR |m^4/s^2 |Meridional Advective Flux of U momentum
ADVrE_Um| 15 |WU LR |m^4/s^2 |Vertical Advective Flux of U momentum (Explicit part)
ADVx_Vm | 15 |UZ 148MR |m^4/s^2 |Zonal Advective Flux of V momentum
ADVy_Vm | 15 |VM 147MR |m^4/s^2 |Meridional Advective Flux of V momentum
ADVrE_Vm| 15 |WV LR |m^4/s^2 |Vertical Advective Flux of V momentum (Explicit part)
VISCx_Um| 15 |UM 151MR |m^4/s^2 |Zonal Viscous Flux of U momentum
VISCy_Um| 15 |VZ 150MR |m^4/s^2 |Meridional Viscous Flux of U momentum
VISrE_Um| 15 |WU LR |m^4/s^2 |Vertical Viscous Flux of U momentum (Explicit part)
VISrI_Um| 15 |WU LR |m^4/s^2 |Vertical Viscous Flux of U momentum (Implicit part)
VISCx_Vm| 15 |UZ 155MR |m^4/s^2 |Zonal Viscous Flux of V momentum
VISCy_Vm| 15 |VM 154MR |m^4/s^2 |Meridional Viscous Flux of V momentum
VISrE_Vm| 15 |WV LR |m^4/s^2 |Vertical Viscous Flux of V momentum (Explicit part)
VISrI_Vm| 15 |WV LR |m^4/s^2 |Vertical Viscous Flux of V momentum (Implicit part)
## 2.15. Vector invariant momentum equations¶
The finite volume method lends itself to describing the continuity and tracer equations in curvilinear coordinate systems. However, in curvilinear coordinates many new metric terms appear in the momentum equations (written in Lagrangian or flux-form) making generalization far from elegant. Fortunately, an alternative form of the equations, the vector invariant equations, are exactly that: invariant under coordinate transformations, so that they can be applied uniformly in any orthogonal curvilinear coordinate system such as spherical coordinates, boundary-following coordinates or the conformal spherical cube system.
The non-hydrostatic vector invariant equations read:
(2.122)$\partial_t \vec{v} + ( 2\vec{\Omega} + \vec{\zeta}) \wedge \vec{v} - b \hat{r} + \vec{\nabla} B = \vec{\nabla} \cdot \vec{\bf \tau}$
which describe motions in any orthogonal curvilinear coordinate system. Here, $$B$$ is the Bernoulli function and $$\vec{\zeta}=\nabla \wedge \vec{v}$$ is the vorticity vector. We can take advantage of the elegance of these equations when discretizing them and use the discrete definitions of the grad, curl and divergence operators to satisfy constraints. We can also consider the analogy to forming derived equations, such as the vorticity equation, and examine how the discretization can be adjusted to give suitable vorticity advection among other things.
The underlying algorithm is the same as for the flux form equations. All that has changed is the contents of the “G’s”. For the time-being, only the hydrostatic terms have been coded but we will indicate the points where non-hydrostatic contributions will enter:
(2.123)$G_u = G_u^{fv} + G_u^{\zeta_3 v} + G_u^{\zeta_2 w} + G_u^{\partial_x B} + G_u^{\partial_z \tau^x} + G_u^{h-dissip} + G_u^{v-dissip}$
(2.124)$G_v = G_v^{fu} + G_v^{\zeta_3 u} + G_v^{\zeta_1 w} + G_v^{\partial_y B} + G_v^{\partial_z \tau^y} + G_v^{h-dissip} + G_v^{v-dissip}$
(2.125)$G_w = G_w^{fu} + G_w^{\zeta_1 v} + G_w^{\zeta_2 u} + G_w^{\partial_z B} + G_w^{h-dissip} + G_w^{v-dissip}$
S/R MOM_VECINV
$$G_u$$ : gU ( DYNVARS.h )
$$G_v$$ : gV ( DYNVARS.h )
$$G_w$$ : gW ( NH_VARS.h )
### 2.15.1. Relative vorticity¶
The vertical component of relative vorticity is explicitly calculated and used in the discretization. The particular form is crucial for numerical stability; alternative definitions break the conservation properties of the discrete equations.
Relative vorticity is defined:
(2.126)$\zeta_3 = \frac{\Gamma}{{\cal A}_\zeta} = \frac{1}{{\cal A}_\zeta} ( \delta_i \Delta y_c v - \delta_j \Delta x_c u )$
where $${\cal A}_\zeta$$ is the area of the vorticity cell presented in the vertical and $$\Gamma$$ is the circulation about that cell.
$$\zeta_3$$ : vort3 ( local to MOM_VECINV.F )
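Concretely, (2.126) is the circulation around each vorticity cell divided by its area, as in the following NumPy sketch (doubly periodic grid, backward differences placing the result at cell corners; names are ours).

```python
import numpy as np

def relative_vorticity(u, v, dxc, dyc, area_zeta):
    """zeta_3 from (2.126): circulation around the vorticity cell divided
    by its area.  Illustrative only."""
    fy = dyc * v
    fx = dxc * u
    circulation = (fy - np.roll(fy, 1, axis=0)) - (fx - np.roll(fx, 1, axis=1))
    return circulation / area_zeta
```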
### 2.15.2. Kinetic energy¶
The kinetic energy, denoted $$KE$$, is defined:
(2.127)$KE = \frac{1}{2} ( \overline{ u^2 }^i + \overline{ v^2 }^j + \epsilon_{nh} \overline{ w^2 }^k )$
S/R MOM_CALC_KE
$$KE$$ : KE ( local to MOM_VECINV.F )
### 2.15.3. Coriolis terms¶
The potential enstrophy conserving form of the linear Coriolis terms are written:
(2.128)$G_u^{fv} = \frac{1}{\Delta x_c} \overline{ \frac{f}{h_\zeta} }^j \overline{ \overline{ \Delta x_g h_s v }^j }^i$
(2.129)$G_v^{fu} = - \frac{1}{\Delta y_c} \overline{ \frac{f}{h_\zeta} }^i \overline{ \overline{ \Delta y_g h_w u }^i }^j$
Here, the Coriolis parameter $$f$$ is defined at vorticity (corner) points.
The potential enstrophy conserving form of the non-linear Coriolis terms are written:
(2.130)$G_u^{\zeta_3 v} = \frac{1}{\Delta x_c} \overline{ \frac{\zeta_3}{h_\zeta} }^j \overline{ \overline{ \Delta x_g h_s v }^j }^i$
(2.131)$G_v^{\zeta_3 u} = - \frac{1}{\Delta y_c} \overline{ \frac{\zeta_3}{h_\zeta} }^i \overline{ \overline{ \Delta y_g h_w u }^i }^j$
The Coriolis terms can also be evaluated together and expressed in terms of absolute vorticity $$f+\zeta_3$$. The potential enstrophy conserving form using the absolute vorticity is written:
(2.132)$G_u^{fv} + G_u^{\zeta_3 v} = \frac{1}{\Delta x_c} \overline{ \frac{f + \zeta_3}{h_\zeta} }^j \overline{ \overline{ \Delta x_g h_s v }^j }^i$
(2.133)$G_v^{fu} + G_v^{\zeta_3 u} = - \frac{1}{\Delta y_c} \overline{ \frac{f + \zeta_3}{h_\zeta} }^i \overline{ \overline{ \Delta y_g h_w u }^i }^j$
The distinction between using absolute vorticity or relative vorticity is useful when constructing higher order advection schemes; monotone advection of relative vorticity behaves differently to monotone advection of absolute vorticity. Currently the choice of relative/absolute vorticity, centered/upwind/high order advection is available only through commented subroutine calls.
$$G_u^{fv} , G_u^{\zeta_3 v}$$ : uCf ( local to MOM_VECINV.F )
$$G_v^{fu} , G_v^{\zeta_3 u}$$ : vCf ( local to MOM_VECINV.F )
### 2.15.4. Shear terms¶
The shear terms ($$\zeta_2w$$ and $$\zeta_1w$$) are discretized to guarantee that no spurious generation of kinetic energy is possible; the horizontal gradient of the Bernoulli function has to be consistent with the vertical advection of shear:
(2.134)$G_u^{\zeta_2 w} = \frac{1}{ {\cal A}_w \Delta r_f h_w } \overline{ \overline{ {\cal A}_c w }^i ( \delta_k u - \epsilon_{nh} \delta_i w ) }^k$
(2.135)$G_v^{\zeta_1 w} = \frac{1}{ {\cal A}_s \Delta r_f h_s } \overline{ \overline{ {\cal A}_c w }^j ( \delta_k v - \epsilon_{nh} \delta_j w ) }^k$
$$G_u^{\zeta_2 w}$$ : uCf ( local to MOM_VECINV.F )
$$G_v^{\zeta_1 w}$$ : vCf ( local to MOM_VECINV.F )
### 2.15.5. Gradient of Bernoulli function¶
(2.136)$G_u^{\partial_x B} = \frac{1}{\Delta x_c} \delta_i ( \phi' + KE )$
(2.137)$G_v^{\partial_y B} = \frac{1}{\Delta y_c} \delta_j ( \phi' + KE )$
$$G_u^{\partial_x KE}$$ : uCf ( local to MOM_VECINV.F )
$$G_v^{\partial_y KE}$$ : vCf ( local to MOM_VECINV.F )
### 2.15.6. Horizontal divergence¶
The horizontal divergence, a complementary quantity to relative vorticity, is used in parameterizing the Reynolds stresses and is discretized:
(2.138)$D = \frac{1}{{\cal A}_c h_c} ( \delta_i \Delta y_g h_w u + \delta_j \Delta x_g h_s v )$
S/R MOM_CALC_KE
$$D$$ : hDiv ( local to MOM_VECINV.F )
### 2.15.7. Horizontal dissipation¶
The following discretization of horizontal dissipation conserves potential vorticity (thickness weighted relative vorticity) and divergence and dissipates energy, enstrophy and divergence squared:
(2.139)$G_u^{h-dissip} = \frac{1}{\Delta x_c} \delta_i ( A_D D - A_{D4} D^*) - \frac{1}{\Delta y_u h_w} \delta_j h_\zeta ( A_\zeta \zeta - A_{\zeta4} \zeta^* )$
(2.140)$G_v^{h-dissip} = \frac{1}{\Delta x_v h_s} \delta_i h_\zeta ( A_\zeta \zeta - A_{\zeta4} \zeta^* ) + \frac{1}{\Delta y_c} \delta_j ( A_D D - A_{D4} D^* )$
where
\begin{split}\begin{aligned} D^* & = & \frac{1}{{\cal A}_c h_c} ( \delta_i \Delta y_g h_w \nabla^2 u + \delta_j \Delta x_g h_s \nabla^2 v ) \\ \zeta^* & = & \frac{1}{{\cal A}_\zeta} ( \delta_i \Delta y_c \nabla^2 v - \delta_j \Delta x_c \nabla^2 u )\end{aligned}\end{split}
$$G_u^{h-dissip}$$ : uDissip ( local to MOM_VI_HDISSIP.F )
$$G_v^{h-dissip}$$ : vDissip ( local to MOM_VI_HDISSIP.F )
### 2.15.8. Vertical dissipation¶
Currently, this is exactly the same code as the flux form equations.
(2.141)$G_u^{v-diss} = \frac{1}{\Delta r_f h_w} \delta_k \tau_{13}$
(2.142)$G_v^{v-diss} = \frac{1}{\Delta r_f h_s} \delta_k \tau_{23}$
represents the general discrete form of the vertical dissipation terms.
In the interior the vertical stresses are discretized:
\begin{split}\begin{aligned} \tau_{13} & = & A_v \frac{1}{\Delta r_c} \delta_k u \\ \tau_{23} & = & A_v \frac{1}{\Delta r_c} \delta_k v\end{aligned}\end{split}
$$\tau_{13}, \tau_{23}$$ : vrf ( local to MOM_VECINV.F )
## 2.16. Tracer equations¶
The basic discretization used for the tracer equations is the second order piece-wise constant finite volume form of the forced advection-diffusion equations. There are many alternatives to the second-order method for advection, as well as alternative parameterizations for the sub-grid scale processes. The Gent-McWilliams eddy parameterization, KPP mixing scheme and PV flux parameterization are all dealt with in separate sections. The basic discretization of the advection-diffusion part of the tracer equations and the various advection schemes will be described here.
### 2.16.1. Time-stepping of tracers: ABII¶
The default advection scheme is the centered second order method which requires a second order or quasi-second order time-stepping scheme to be stable. Historically this has been the quasi-second order Adams-Bashforth method (ABII), applied to all terms. For an arbitrary tracer, $$\tau$$, the forced advection-diffusion equation reads:
(2.143)$\partial_t \tau + G_{adv}^\tau = G_{diff}^\tau + G_{forc}^\tau$
where $$G_{adv}^\tau$$, $$G_{diff}^\tau$$ and $$G_{forc}^\tau$$ are the tendencies due to advection, diffusion and forcing, respectively, namely:
(2.144)$G_{adv}^\tau = \partial_x u \tau + \partial_y v \tau + \partial_r w \tau - \tau \nabla \cdot {\bf v}$
(2.145)$G_{diff}^\tau = \nabla \cdot {\bf K} \nabla \tau$
and the forcing can be some arbitrary function of state, time and space.
The term, $$\tau \nabla \cdot {\bf v}$$, is required to retain local conservation in conjunction with the linear implicit free-surface. It only affects the surface layer since the flow is non-divergent everywhere else. This term is therefore referred to as the surface correction term. Global conservation is not possible using the flux-form (as here) and a linearized free-surface (Griffies and Hallberg (2000) [griffies:00] , Campin et al. (2004) [cam:04]).
The continuity equation can be recovered by setting $$G_{diff}=G_{forc}=0$$ and $$\tau=1$$.
The driver routines that call the routines to calculate tendencies are CALC_GT and CALC_GS for temperature and salt (moisture), respectively. These in turn call a generic advection-diffusion routine GAD_CALC_RHS that is called with the flow field and relevant tracer as arguments and returns the collective tendency due to advection and diffusion. Forcing is added subsequently in CALC_GT or CALC_GS to the same tendency array.
$$\tau$$ : tau ( argument )
$$G^{(n)}$$ : gTracer ( argument )
$$F_r$$ : fVerT ( argument )
The space and time discretization are treated separately (method of lines). Tendencies are calculated at time levels $$n$$ and $$n-1$$ and extrapolated to $$n+1/2$$ using the Adams-Bashforth method:
(2.146)$G^{(n+1/2)} = (\frac{3}{2} + \epsilon) G^{(n)} - (\frac{1}{2} + \epsilon) G^{(n-1)}$
where $$G^{(n)} = G_{adv}^\tau + G_{diff}^\tau + G_{src}^\tau$$ at time step $$n$$. The tendency at $$n-1$$ is not re-calculated but rather the tendency at $$n$$ is stored in a global array for later re-use.
$$G^{(n+1/2)}$$ : gTracer ( argument on exit )
$$G^{(n)}$$ : gTracer ( argument on entry )
$$G^{(n-1)}$$ : gTrNm1 ( argument )
$$\epsilon$$ : ABeps ( PARAMS.h )
The tracers are stepped forward in time using the extrapolated tendency:
(2.147)$\tau^{(n+1)} = \tau^{(n)} + \Delta t G^{(n+1/2)}$
$$\tau^{(n+1)}$$ : gTracer ( argument on exit )
$$\tau^{(n)}$$ : tracer ( argument on entry )
$$G^{(n+1/2)}$$ : gTracer ( argument )
$$\Delta t$$ : deltaTtracer ( PARAMS.h )
Strictly speaking the ABII scheme should be applied only to the advection terms. However, this scheme is only used in conjunction with the standard second, third and fourth order advection schemes. Selection of any other advection scheme disables Adams-Bashforth for tracers so that explicit diffusion and forcing use the forward method.
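The extrapolation and step (2.146)–(2.147) amount to a few lines, sketched below in NumPy; the function is a stand-in for what ADAMS_BASHFORTH2 and TIMESTEP_TRACER do together, with abEps playing the role of ABeps.

```python
import numpy as np

def ab2_tracer_step(tau, G_now, G_prev, dt, abEps=0.01):
    """Quasi-second-order Adams-Bashforth tracer update, (2.146)-(2.147).
    Returns the new tracer and the tendency to be stored for the next call."""
    G_half = (1.5 + abEps) * G_now - (0.5 + abEps) * G_prev
    tau_new = tau + dt * G_half
    return tau_new, G_now   # G_now becomes G^(n-1) at the next time step
```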
## 2.18. Shapiro Filter¶
The Shapiro filter (Shapiro 1970) [shapiro:70] is a high order horizontal filter that efficiently removes small scale grid noise without affecting the physical structures of a field. It is applied at the end of the time step on both velocity and tracer fields.
Three different space operators are considered here (S1, S2 and S4). They differ essentially by the sequence of derivatives in the X and Y directions. Consequently they show different damping response functions, especially in the diagonal directions X+Y and X-Y.
Space derivatives can be computed in real space, taking into account the grid spacing. Alternatively, a pure computational filter can be defined, using pure numerical differences and ignoring the grid spacing. This latter form is stable whatever the grid is, and is therefore especially useful for highly anisotropic grids such as spherical coordinate grids. A damping time-scale parameter $$\tau_{shap}$$ defines the strength of the filter damping.
The three computational filter operators are :
$\mathrm{S1c:}\hspace{2cm} [1 - 1/2 \frac{\Delta t}{\tau_{shap}} \{ (\frac{1}{4}\delta_{ii})^n + (\frac{1}{4}\delta_{jj})^n \} ]$
$\mathrm{S2c:}\hspace{2cm} [1 - \frac{\Delta t}{\tau_{shap}} \{ \frac{1}{8} (\delta_{ii} + \delta_{jj}) \}^n]$
$\mathrm{S4c:}\hspace{2cm} [1 - \frac{\Delta t}{\tau_{shap}} (\frac{1}{4}\delta_{ii})^n] [1 - \frac{\Delta t}{\tau_{shap}} (\frac{1}{4}\delta_{jj})^n]$
In addition, the S2 operator can easily be extended to a physical space filter:
$\mathrm{S2g:}\hspace{2cm} [1 - \frac{\Delta t}{\tau_{shap}} \{ \frac{L_{shap}^2}{8} \overline{\nabla}^2 \}^n]$
with the Laplacian operator $$\overline{\nabla}^2$$ and a length scale parameter $$L_{shap}$$. The stability of this S2g filter requires $$L_{shap} < \mathrm{Min}^{(Global)}(\Delta x,\Delta y)$$.
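As an illustration, the S2 computational form can be applied to a 2-D field as in the NumPy sketch below. The boundary treatment (periodic wrap here) and the sign convention, chosen so that grid-scale noise is damped, are assumptions of this sketch rather than the model's implementation.

```python
import numpy as np

def shapiro_s2(field, n=2, dt=900.0, tau_shap=3600.0):
    """S2 computational Shapiro filter: apply the combined second difference
    (delta_ii + delta_jj)/8 n times and subtract the result, scaled by
    dt/tau_shap.  Illustrative only."""
    lap8 = lambda f: (np.roll(f, 1, 0) - 2 * f + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) - 2 * f + np.roll(f, -1, 1)) / 8.0
    correction = field.copy()
    for _ in range(n):
        correction = -lap8(correction)   # sign chosen so grid-scale noise is damped
    return field - (dt / tau_shap) * correction
```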
### 2.18.1. SHAP Diagnostics¶
--------------------------------------------------------------
<-Name->|Levs|parsing code|<-Units->|<- Tile (max=80c)
--------------------------------------------------------------
SHAP_dT | 5 |SM MR |K/s |Temperature Tendency due to Shapiro Filter
SHAP_dS | 5 |SM MR |g/kg/s |Specific Humidity Tendency due to Shapiro Filter
SHAP_dU | 5 |UU 148MR |m/s^2 |Zonal Wind Tendency due to Shapiro Filter
SHAP_dV | 5 |VV 147MR |m/s^2 |Meridional Wind Tendency due to Shapiro Filter
## 2.19. Nonlinear Viscosities for Large Eddy Simulation¶
In Large Eddy Simulations (LES), a turbulent closure needs to be provided that accounts for the effects of subgridscale motions on the large scale. With sufficiently powerful computers, we could resolve the entire flow down to the molecular viscosity scales ($$L_{\nu}\approx 1 \rm cm$$). Current computation allows perhaps four decades to be resolved, so the largest problem computationally feasible would be about 10m. Most oceanographic problems are much larger in scale, so some form of LES is required, where only the largest scales of motion are resolved, and the subgridscale effects on the large-scale are parameterized.
To formalize this process, we can introduce a filter over the subgridscale $$L$$: $$u_\alpha\rightarrow \overline{u_\alpha}$$ and $$b\rightarrow \overline{b}$$. This filter has some intrinsic length and time scales, and we assume that the flow at that scale can be characterized with a single velocity scale ($$V$$) and vertical buoyancy gradient ($$N^2$$). The filtered equations of motion in a local Mercator projection about the gridpoint in question (see Appendix for notation and details of approximation) are:
(2.153)${\frac{{ \overline{D} {{\tilde {\overline{u}}}}}} {{\overline{Dt}}}} - \frac{{{\tilde {\overline{v}}}} \sin\theta}{{\rm Ro}\sin\theta_0} + \frac{{M_{Ro}}}{{\rm Ro}} \frac{\partial{\overline{\pi}}}{\partial{x}} = -\left({\overline{\frac{D{\tilde u}}{Dt} }} - {\frac{{\overline{D} {{\tilde {\overline{u}}}}}}{{\overline{Dt}}} }\right) +\frac{\nabla^2{{\tilde {\overline{u}}}}}{{\rm Re}}$
(2.154)${\frac{{ \overline{D} {{\tilde {\overline{v}}}}}} {{\overline{Dt}}}} - \frac{{{\tilde {\overline{u}}}} \sin\theta}{{\rm Ro}\sin\theta_0} + \frac{{M_{Ro}}}{{\rm Ro}} \frac{\partial{\overline{\pi}}}{\partial{y}} = -\left({\overline{\frac{D{\tilde v}}{Dt} }} - {\frac{{\overline{D} {{\tilde {\overline{v}}}}}}{{\overline{Dt}}} }\right) +\frac{\nabla^2{{\tilde {\overline{v}}}}}{{\rm Re}}$
(2.155)$\frac{{\overline{D} \overline w}}{{\overline{Dt}}} + \frac{ \frac{\partial{\overline{\pi}}}{\partial{z}} - \overline b}{{\rm Fr}^2\lambda^2} = -\left(\overline{\frac{D{w}}{Dt}} - \frac{{\overline{D} \overline w}}{{\overline{Dt}}}\right) +\frac{\nabla^2 \overline w}{{\rm Re}}\nonumber$
(2.156)$\frac{{\overline{D} \bar b}}{{\overline{Dt}}} + \overline w = -\left(\overline{\frac{D{b}}{Dt}} - \frac{{\overline{D} \bar b}}{{\overline{Dt}}}\right) +\frac{\nabla^2 \overline b}{\Pr{\rm Re}}\nonumber$
(2.157)$\mu^2\left({\frac{\partial{\tilde {\overline{u}}}}{\partial{x}}} + {\frac{\partial{\tilde {\overline{v}}}}{\partial{y}}} \right) + {\frac{\partial{\overline w}}{\partial{z}}} = 0$
Tildes denote multiplication by $$\cos\theta/\cos\theta_0$$ to account for converging meridians.
The ocean is usually turbulent, and an operational definition of turbulence is that the terms in parentheses (the ’eddy’ terms) on the right of (2.153)–(2.156) are of comparable magnitude to the terms on the left-hand side. The terms proportional to the inverse of the Reynolds number, instead, are many orders of magnitude smaller than all of the other terms in virtually every oceanic application.
### 2.19.1. Eddy Viscosity¶
A turbulent closure provides an approximation to the ’eddy’ terms on the right of the preceding equations. The simplest form of LES is just to increase the viscosity and diffusivity until the viscous and diffusive scales are resolved. That is, we approximate (2.153) - (2.156):
(2.158)$\left({\overline{\frac{D{\tilde u}}{Dt} }} - {\frac{{\overline{D} {{\tilde {\overline{u}}}}}}{{\overline{Dt}}} }\right) \approx\frac{\nabla^2_h{{\tilde {\overline{u}}}}}{{\rm Re}_h} +\frac{{\frac{\partial^2{{\tilde {\overline{u}}}}}{{\partial{z}}^2}}}{{\rm Re}_v}$
(2.159)$\left({\overline{\frac{D{\tilde v}}{Dt} }} - {\frac{{\overline{D} {{\tilde {\overline{v}}}}}}{{\overline{Dt}}} }\right) \approx\frac{\nabla^2_h{{\tilde {\overline{v}}}}}{{\rm Re}_h} +\frac{{\frac{\partial^2{{\tilde {\overline{v}}}}}{{\partial{z}}^2}}}{{\rm Re}_v}$
(2.160)$\left(\overline{\frac{D{w}}{Dt}} - \frac{{\overline{D} \overline w}}{{\overline{Dt}}}\right) \approx\frac{\nabla^2_h \overline w}{{\rm Re}_h} +\frac{{\frac{\partial^2{\overline w}}{{\partial{z}}^2}}}{{\rm Re}_v}$
(2.161)$\left(\overline{\frac{D{b}}{Dt}} - \frac{{\overline{D} \bar b}}{{\overline{Dt}}}\right) \approx\frac{\nabla^2_h \overline b}{\Pr{\rm Re}_h} +\frac{{\frac{\partial^2{\overline b}}{{\partial{z}}^2}}}{\Pr{\rm Re}_v}\nonumber$
#### 2.19.1.1. Reynolds-Number Limited Eddy Viscosity¶
One way of ensuring that the gridscale is sufficiently viscous (i.e., resolved) is to choose the eddy viscosity $$A_h$$ so that the gridscale horizontal Reynolds number based on this eddy viscosity, $${\rm Re}_h$$, is O(1). That is, if the gridscale is to be viscous, then the viscosity should be chosen to make the viscous terms as large as the advective ones. Bryan et al. (1975) [bryan:75] notes that a computational mode is squelched by using $${\rm Re}_h<$$2.
MITgcm users can select horizontal eddy viscosities based on $${\rm Re}_h$$ using two methods. 1) The user may estimate the velocity scale expected from the calculation and the grid spacing, and set viscAh to satisfy $${\rm Re}_h<2$$. 2) The user may use viscAhReMax, which ensures that the viscosity is always chosen so that $${\rm Re}_h<$$ viscAhReMax. This last option should be used with caution, however, since it effectively implies that viscous terms are fixed in magnitude relative to advective terms. While it may be a useful method for specifying a minimum viscosity with little effort, tests (Bryan et al. 1975 [bryan:75]) have shown that setting viscAhReMax = 2 often tends to increase the viscosity substantially over other more ’physical’ parameterizations below, especially in regions where gradients of velocity are small (and thus turbulence may be weak), so perhaps a more liberal value should be used, e.g. viscAhReMax = 10.
While it is certainly necessary that viscosity be active at the gridscale, the wavelength where dissipation of energy or enstrophy occurs is not necessarily $$L=A_h/U$$. In fact, it is by ensuring that either the dissipation of energy in a 3-d turbulent cascade (Smagorinsky) or dissipation of enstrophy in a 2-d turbulent cascade (Leith) is resolved that these parameterizations derive their physical meaning.
#### 2.19.1.2. Vertical Eddy Viscosities¶
Vertical eddy viscosities are often chosen in a more subjective way, as model stability is not usually as sensitive to vertical viscosity. Usually the ’observed’ value from finescale measurements is used (e.g. viscAr$$\approx1\times10^{-4} m^2/s$$). However, Smagorinsky (1993) [smag:93] notes that the Smagorinsky parameterization of isotropic turbulence implies a value of the vertical viscosity as well as the horizontal viscosity (see below).
#### 2.19.1.3. Smagorinsky Viscosity¶
Some suggest (see Smagorinsky 1963 [smag:63]; Smagorinsky 1993 [smag:93]) choosing a viscosity that depends on the resolved motions. Thus, the overall viscous operator has a nonlinear dependence on velocity. Smagorinsky chose his form of viscosity by considering Kolmogorov’s ideas about the energy spectrum of 3-d isotropic turbulence.
Kolmogorov supposed that energy is injected into the flow at large scales (small $$k$$) and is ’cascaded’ or transferred conservatively by nonlinear processes to smaller and smaller scales until it is dissipated near the viscous scale. By setting the energy flux through a particular wavenumber $$k$$, $$\epsilon$$, to be a constant in $$k$$, there is only one combination of viscosity and energy flux that has the units of length, the Kolmogorov wavelength. It is $$L_\epsilon(\nu)\propto\pi\epsilon^{-1/4}\nu^{3/4}$$ (the $$\pi$$ stems from conversion from wavenumber to wavelength). To ensure that this viscous scale is resolved in a numerical model, the gridscale should be decreased until $$L_\epsilon(\nu)>L$$ (so-called Direct Numerical Simulation, or DNS). Alternatively, an eddy viscosity can be used and the corresponding Kolmogorov length can be made larger than the gridscale, $$L_\epsilon(A_h)\propto\pi\epsilon^{-1/4}A_h^{3/4}$$ (for Large Eddy Simulation or LES).
There are two methods of ensuring that the Kolmogorov length is resolved in MITgcm. 1) The user can estimate the flux of energy through spectral space for a given simulation and adjust grid spacing or viscAh to ensure that $$L_\epsilon(A_h)>L$$; 2) The user may use the approach of Smagorinsky with viscC2Smag, which estimates the energy flux at every grid point, and adjusts the viscosity accordingly.
Smagorinsky formed the energy equation from the momentum equations by dotting them with velocity. There are some complications when using the hydrostatic approximation as described by Smagorinsky (1993) [smag:93]. The positive definite energy dissipation by horizontal viscosity in a hydrostatic flow is $$\nu D^2$$, where $$D$$ is the deformation rate at the viscous scale. According to Kolmogorov’s theory, this should be a good approximation to the energy flux at any wavenumber $$\epsilon\approx\nu D^2$$. Kolmogorov and Smagorinsky noted that using an eddy viscosity that exceeds the molecular value $$\nu$$ should ensure that the energy flux through the viscous scale set by the eddy viscosity is the same as it would have been had we resolved all the way to the true viscous scale. That is, $$\epsilon\approx A_{hSmag} \overline D^2$$. If we use this approximation to estimate the Kolmogorov viscous length, then
(2.162)$L_\epsilon(A_{hSmag})\propto\pi\epsilon^{-1/4}A_{hSmag}^{3/4}\approx\pi(A_{hSmag} \overline D^2)^{-1/4}A_{hSmag}^{3/4} = \pi A_{hSmag}^{1/2}\overline D^{-1/2}$
To make $$L_\epsilon(A_{hSmag})$$ scale with the gridscale, then
(2.163)$A_{hSmag} = \left(\frac{{\sf viscC2Smag}}{\pi}\right)^2L^2|\overline D|$
Where the deformation rate appropriate for hydrostatic flows with shallow-water scaling is
(2.164)$|\overline D|=\sqrt{\left({\frac{\partial{\overline {\tilde u}}}{\partial{x}}} - {\frac{\partial{\overline {\tilde v}}}{\partial{y}}}\right)^2 + \left({\frac{\partial{\overline {\tilde u}}}{\partial{y}}} + {\frac{\partial{\overline {\tilde v}}}{\partial{x}}}\right)^2}$
The coefficient viscC2Smag is what an MITgcm user sets, and it replaces the proportionality in the Kolmogorov length with an equality. Others (Griffies and Hallberg, 2000 [griffies:00]) suggest values of viscC2Smag from 2.2 to 4 for oceanic problems. Smagorinsky (1993) [smag:93] shows that values from 0.2 to 0.9 have been used in atmospheric modeling.
Smagorinsky (1993) [smag:93] shows that a corresponding vertical viscosity should be used:
(2.165)$A_{vSmag} = \left(\frac{{\sf viscC2Smag}}{\pi}\right)^2H^2 \sqrt{\left({\frac{\partial{\overline {\tilde u}}}{\partial{z}}}\right)^2 + \left({\frac{\partial{\overline {\tilde v}}}{\partial{z}}}\right)^2}$
This vertical viscosity is currently not implemented in MITgcm.
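For orientation, a NumPy sketch of (2.163)–(2.164) on a uniform, doubly periodic grid follows; the centred differences, the definition of the grid length $$L$$ and the choice viscC2Smag = 3 are assumptions of the sketch, not model defaults.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, dy, viscC2Smag=3.0):
    """Horizontal Smagorinsky viscosity (2.163) from the deformation rate
    (2.164) of the resolved flow.  Illustrative only."""
    ddx = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    ddy = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dy)
    tension = ddx(u) - ddy(v)
    strain = ddy(u) + ddx(v)
    deform = np.sqrt(tension**2 + strain**2)
    return (viscC2Smag / np.pi) ** 2 * (dx * dy) * deform
```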
#### 2.19.1.4. Leith Viscosity¶
Leith (1968, 1996) [leith:68] [leith:96] notes that 2-d turbulence is quite different from 3-d. In two-dimensional turbulence, energy cascades to larger scales, so there is no concern about resolving the scales of energy dissipation. Instead, another quantity, enstrophy (which is the vertical component of vorticity squared), is conserved in 2-d turbulence, and it cascades to smaller scales where it is dissipated.
Following a similar argument to that above about energy flux, the enstrophy flux is estimated to be equal to the positive-definite gridscale dissipation rate of enstrophy $$\eta\approx A_{hLeith} |\nabla\overline \omega_3|^2$$. By dimensional analysis, the enstrophy-dissipation scale is $$L_\eta(A_{hLeith})\propto\pi A_{hLeith}^{1/2}\eta^{-1/6}$$. Thus, the Leith-estimated length scale of enstrophy-dissipation and the resulting eddy viscosity are
(2.166)$L_\eta(A_{hLeith})\propto\pi A_{hLeith}^{1/2}\eta^{-1/6} = \pi A_{hLeith}^{1/3}|\nabla \overline \omega_3|^{-1/3}$
(2.167)$A_{hLeith} = \left(\frac{{\sf viscC2Leith}}{\pi}\right)^3L^3|\nabla \overline\omega_3|$
(2.168)$|\nabla\omega_3| \equiv \sqrt{\left[{\frac{\partial{\ }}{\partial{x}}} \left({\frac{\partial{\overline {\tilde v}}}{\partial{x}}} - {\frac{\partial{\overline {\tilde u}}}{\partial{y}}}\right)\right]^2 + \left[{\frac{\partial{\ }}{\partial{y}}}\left({\frac{\partial{\overline {\tilde v}}}{\partial{x}}} - {\frac{\partial{\overline {\tilde u}}}{\partial{y}}}\right)\right]^2}$
The runtime flag useFullLeith controls whether or not to calculate the full gradients for the Leith viscosity (.TRUE.) or to use an approximation (.FALSE.). The only reason to set useFullLeith = .FALSE. is if your simulation fails when computing the gradients. This can occur when using the cubed sphere and other complex grids.
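A corresponding NumPy sketch of (2.167)–(2.168), using full centred gradients on a uniform doubly periodic grid, is given below; the grid-length definition and the value of viscC2Leith are assumptions of the sketch.

```python
import numpy as np

def leith_viscosity(u, v, dx, dy, viscC2Leith=1.5):
    """Leith viscosity (2.167) from the gradient of vertical vorticity
    (2.168).  Illustrative only."""
    ddx = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    ddy = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dy)
    omega3 = ddx(v) - ddy(u)
    grad_omega = np.sqrt(ddx(omega3) ** 2 + ddy(omega3) ** 2)
    L3 = np.sqrt(dx * dy) ** 3
    return (viscC2Leith / np.pi) ** 3 * L3 * grad_omega
```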
#### 2.19.1.5. Modified Leith Viscosity¶
The argument above for the Leith viscosity parameterization uses concepts from purely 2-dimensional turbulence, where the horizontal flow field is assumed to be non-divergent. However, oceanic flows are only quasi-two dimensional. While the barotropic flow, or the flow within isopycnal layers may behave nearly as two-dimensional turbulence, there is a possibility that these flows will be divergent. In a high-resolution numerical model, these flows may be substantially divergent near the grid scale, and in fact, numerical instabilities exist which are only horizontally divergent and have little vertical vorticity. This causes a difficulty with the Leith viscosity, which can only respond to buildup of vorticity at the grid scale.
MITgcm offers two options for dealing with this problem. 1) The Smagorinsky viscosity can be used instead of Leith, or in conjunction with Leith – a purely divergent flow does cause an increase in Smagorinsky viscosity; 2) The viscC2LeithD parameter can be set. This is a damping specifically targeting purely divergent instabilities near the gridscale. The combined viscosity has the form:
(2.169)$A_{hLeith} = L^3\sqrt{\left(\frac{{\sf viscC2Leith}}{\pi}\right)^6 |\nabla \overline \omega_3|^2 + \left(\frac{{\sf viscC2LeithD}}{\pi}\right)^6 |\nabla \nabla\cdot \overline {\tilde u}_h|^2}$
(2.170)$|\nabla \nabla\cdot \overline {\tilde u}_h| \equiv \sqrt{\left[{\frac{\partial{\ }}{\partial{x}}}\left({\frac{\partial{\overline {\tilde u}}}{\partial{x}}} + {\frac{\partial{\overline {\tilde v}}}{\partial{y}}}\right)\right]^2 + \left[{\frac{\partial{\ }}{\partial{y}}}\left({\frac{\partial{\overline {\tilde u}}}{\partial{x}}} + {\frac{\partial{\overline {\tilde v}}}{\partial{y}}}\right)\right]^2}$
Whether there is any physical rationale for this correction is unclear, but the numerical consequences are good. The divergence in flows with the grid scale larger or comparable to the Rossby radius is typically much smaller than the vorticity, so this adjustment only rarely adjusts the viscosity if viscC2LeithD = viscC2Leith. However, the rare regions where this viscosity acts are often the locations for the largest values of vertical velocity in the domain. Since the CFL condition on vertical velocity is often what sets the maximum timestep, this viscosity may substantially increase the allowable timestep without severely compromising the verity of the simulation. Tests have shown that in some calculations, a timestep three times larger was allowed when viscC2LeithD = viscC2Leith.
#### 2.19.1.6. Quasi-Geostrophic Leith Viscosity¶
A variant of Leith viscosity can be derived for quasi-geostrophic dynamics. This leads to a slightly different equation for the viscosity that includes a contribution from quasigeostrophic vortex stretching (Bachman et al. 2017 [bachman:17]). The viscosity is given by
(2.171)$\nu_{*} = \left(\frac{\Lambda \Delta s}{\pi}\right)^{3} | \nabla_{h}(f\mathbf{\hat{z}}) + \nabla_{h}(\nabla \times \mathbf{v}_{h*}) + \partial_{z}\frac{f}{N^{2}} \nabla_{h} b|$
where $$\Lambda$$ is a tunable parameter of $$\mathcal{O}(1)$$, $$\Delta s = \sqrt{\Delta x \Delta y}$$ is the grid scale, $$f\mathbf{\hat{z}}$$ is the vertical component of the Coriolis parameter, $$\mathbf{v}_{h*}$$ is the horizontal velocity, $$N^{2}$$ is the Brunt-Väisälä frequency, and $$b$$ is the buoyancy.
However, the viscosity given by (2.171) does not constrain purely divergent motions. As such, a small $$\mathcal{O}(\epsilon)$$ correction is added
(2.172)$\nu_{*} = \left(\frac{\Lambda \Delta s}{\pi}\right)^{3} \sqrt{|\nabla_{h}(f\mathbf{\hat{z}}) + \nabla_{h}(\nabla \times \mathbf{v}_{h*}) + \partial_{z} \frac{f}{N^{2}} \nabla_{h} b|^{2} + | \nabla[\nabla \cdot \mathbf{v}_{h}]|^{2}}$
This form is, however, numerically awkward; as the Brunt-Väisälä Frequency becomes very small in regions of weak or vanishing stratification, the vortex stretching term becomes very large. The resulting large viscosities can lead to numerical instabilities. Bachman et al. (2017) [bachman:17] present two limiting forms for the viscosity based on flow parameters such as $$Fr_{*}$$, the Froude number, and $$Ro_{*}$$, the Rossby number. The second of which,
(2.173)\begin{split}\begin{aligned} \nu_{*} = & \left(\frac{\Lambda \Delta s}{\pi}\right)^{3} \\ & \sqrt{min\left(|\nabla_{h}q_{2*} + \partial_{z} \frac{f^{2}}{N^{2}} \nabla_{h} b |, \left( 1 + \frac{Fr_{*}^{2}}{Ro_{*}^{2}} + Fr_{*}^{4}\right) |\nabla_{h}q_{2*}|\right)^{2} + | \nabla[\nabla \cdot \mathbf{v}_{h}]|^{2}}, \end{aligned}\end{split}
has been implemented and is active when #define ALLOW_LEITH_QG is included in a copy of MOM_COMMON_OPTIONS.h in a code mods directory (specified through -mods command line option in genmake2).
LeithQG viscosity is designed to work best in simulations that resolve some mesoscale features. In simulations that are too coarse to permit eddies or fine enough to resolve submesoscale features, it should fail gracefully. The non-dimensional parameter viscC2LeithQG corresponds to $$\Lambda$$ in the above equations and scales the viscosity; the recommended value is 1.
There is no reason to use the quasi-geostrophic form of Leith at the same time as either standard Leith or modified Leith. Therefore, the model will not run if non-zero values have been set for these coefficients; the model will stop during the configuration check. LeithQG can be used regardless of the setting for useFullLeith. Just as for the other forms of Leith viscosity, this flag determines whether or not the full gradients are used. The simplified gradients were originally intended for use on complex grids, but have been shown to produce better kinetic energy spectra even on very straightforward grids.
To add the LeithQG viscosity to the GMRedi coefficient, as was done in some of the simulations in Bachman et al. (2017) [bachman:17], #define ALLOW_LEITH_QG must be specified, as described above. In addition to this, the compile-time flag ALLOW_GM_LEITH_QG must also be defined in a (-mods) copy of GMREDI_OPTIONS.h when the model is compiled, and the runtime parameter GM_useLeithQG set to .TRUE. in data.gmredi. This will use the value of viscC2LeithQG specified in the data input file to compute the coefficient.
#### 2.19.1.7. Courant–Friedrichs–Lewy Constraint on Viscosity¶
Whatever viscosities are used in the model, the choice is constrained by gridscale and timestep by the Courant–Friedrichs–Lewy (CFL) constraint on stability:
\begin{split}\begin{aligned} A_h & < \frac{L^2}{4\Delta t} \\ A_4 & \le \frac{L^4}{32\Delta t}\end{aligned}\end{split}
The viscosities may be automatically limited to be no greater than these values in MITgcm by specifying viscAhGridMax $$<1$$ and viscA4GridMax $$<1$$. Similarly-scaled minimum values of viscosities are provided by viscAhGridMin and viscA4GridMin, which if used, should be set to values $$\ll 1$$. $$L$$ is roughly the gridscale (see below).
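As a quick illustration of these limits (a sketch only, not MITgcm code; the grid scale and timestep are made-up values), the bounds can be evaluated directly:

def viscosity_limits(L, dt):
    # CFL-type stability bounds quoted above:
    #   A_h  <  L**2 / (4*dt)     (harmonic)
    #   A_4 <=  L**4 / (32*dt)    (biharmonic)
    A_h_max = L**2 / (4.0 * dt)
    A_4_max = L**4 / (32.0 * dt)
    return A_h_max, A_4_max

# Hypothetical 10 km grid with a 20-minute timestep:
print(viscosity_limits(10e3, 1200.0))   # (~2.1e4 m^2/s, ~2.6e11 m^4/s)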
Following Griffies and Hallberg (2000) [griffies:00], we note that there is a factor of $$\Delta x^2/8$$ difference between the harmonic and biharmonic viscosities. Thus, whenever a non-dimensional harmonic coefficient is used in the MITgcm (e.g. viscAhGridMax $$<1$$), the biharmonic equivalent is scaled so that the same non-dimensional value can be used (e.g. viscA4GridMax $$<1$$).
#### 2.19.1.8. Biharmonic Viscosity¶
Holland (1978) [holland:78] suggested that eddy viscosities ought to be focused on the dynamics at the grid scale, as larger motions would be ’resolved’. To enhance the scale selectivity of the viscous operator, he suggested a biharmonic eddy viscosity instead of a harmonic (or Laplacian) viscosity:
(2.174)$\left({\overline{\frac{D{\tilde u}}{Dt} }} - {\frac{{\overline{D} {{\tilde {\overline{u}}}}}}{{\overline{Dt}}} }\right) \approx \frac{-\nabla^4_h{{\tilde {\overline{u}}}}}{{\rm Re}_4} + \frac{{\frac{\partial^2{{\tilde {\overline{u}}}}}{{\partial{z}}^2}}}{{\rm Re}_v}$
(2.175)$\left({\overline{\frac{D{\tilde v}}{Dt} }} - {\frac{{\overline{D} {{\tilde {\overline{v}}}}}}{{\overline{Dt}}} }\right) \approx \frac{-\nabla^4_h{{\tilde {\overline{v}}}}}{{\rm Re}_4} + \frac{{\frac{\partial^2{{\tilde {\overline{v}}}}}{{\partial{z}}^2}}}{{\rm Re}_v}\nonumber$
(2.176)$\left(\overline{\frac{D{w}}{Dt}} - \frac{{\overline{D} \overline w}}{{\overline{Dt}}}\right) \approx\frac{-\nabla^4_h\overline w}{{\rm Re}_4} + \frac{{\frac{\partial^2{\overline w}}{{\partial{z}}^2}}}{{\rm Re}_v}\nonumber$
(2.177)$\left(\overline{\frac{D{b}}{Dt}} - \frac{{\overline{D} \bar b}}{{\overline{Dt}}}\right) \approx \frac{-\nabla^4_h \overline b}{\Pr{\rm Re}_4} +\frac{{\frac{\partial^2{\overline b}}{{\partial{z}}^2}}}{\Pr{\rm Re}_v}\nonumber$
Griffies and Hallberg (2000) [griffies:00] propose that if one scales the biharmonic viscosity by stability considerations, then the biharmonic viscous terms will be similarly active to harmonic viscous terms at the gridscale of the model, but much less active on larger scale motions. Similarly, a biharmonic diffusivity can be used for less diffusive flows.
In practice, biharmonic viscosity and diffusivity allow a less viscous, yet numerically stable, simulation than harmonic viscosity and diffusivity. However, there is no physical rationale for such operators being of leading order, and more boundary conditions must be specified than for the harmonic operators. If one considers the approximations of (2.158) - (2.161) and (2.174) - (2.177) to be terms in the Taylor series expansions of the eddy terms as functions of the large-scale gradient, then one can argue that both harmonic and biharmonic terms would occur in the series, and the only question is the choice of coefficients. Using biharmonic viscosity alone implies that one zeros the first non-vanishing term in the Taylor series, which is unsupported by any fluid theory or observation.
Nonetheless, MITgcm supports a plethora of biharmonic viscosities and diffusivities, which are controlled with parameters named similarly to the harmonic viscosities and diffusivities with the substitution h $$\rightarrow 4$$ in the MITgcm parameter name. MITgcm also supports biharmonic Leith and Smagorinsky viscosities:
(2.178)$A_{4Smag} = \left(\frac{{\sf viscC4Smag}}{\pi}\right)^2\frac{L^4}{8}|D|$
(2.179)$A_{4Leith} = \frac{L^5}{8}\sqrt{\left(\frac{{\sf viscC4Leith}}{\pi}\right)^6 |\nabla \overline \omega_3|^2 + \left(\frac{{\sf viscC4LeithD}}{\pi}\right)^6 |\nabla \nabla\cdot \overline {\bf {\tilde u}}_h|^2}$
However, it should be noted that unlike the harmonic forms, the biharmonic scaling does not easily relate to whether energy-dissipation or enstrophy-dissipation scales are resolved. If similar arguments are used to estimate these scales and scale them to the gridscale, the resulting biharmonic viscosities should be:
(2.180)$A_{4Smag} = \left(\frac{{\sf viscC4Smag}}{\pi}\right)^5L^5 |\nabla^2\overline {\bf {\tilde u}}_h|$
(2.181)$A_{4Leith} = L^6\sqrt{\left(\frac{{\sf viscC4Leith}}{\pi}\right)^{12} |\nabla^2 \overline \omega_3|^2 + \left(\frac{{\sf viscC4LeithD}}{\pi}\right)^{12} |\nabla^2 \nabla\cdot \overline {\bf {\tilde u}}_h|^2}$
Thus, the biharmonic scaling suggested by Griffies and Hallberg (2000) [griffies:00] implies:
\begin{split}\begin{aligned} |D| & \propto L|\nabla^2\overline {\bf {\tilde u}}_h|\\ |\nabla \overline \omega_3| & \propto L|\nabla^2 \overline \omega_3|\end{aligned}\end{split}
It is not at all clear that these assumptions ought to hold. Only the Griffies and Hallberg (2000) [griffies:00] forms are currently implemented in MITgcm.
#### 2.19.1.9. Selection of Length Scale¶
Above, the length scale of the grid has been denoted $$L$$. However, in strongly anisotropic grids, $$L_x$$ and $$L_y$$ will be quite different in some locations. In that case, the CFL condition suggests that the minimum of $$L_x$$ and $$L_y$$ be used. On the other hand, other viscosities which involve whether a particular wavelength is ’resolved’ might be better suited to use the maximum of $$L_x$$ and $$L_y$$. Currently, MITgcm uses useAreaViscLength to select between two options. If false, the square root of the harmonic mean of $$L^2_x$$ and $$L^2_y$$ is used for all viscosities, which is closer to the minimum and occurs naturally in the CFL constraint. If useAreaViscLength is true, then the square root of the area of the grid cell is used.
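As a small sketch (illustrative Python, not MITgcm source; the grid spacings are hypothetical), the two candidate length scales described above can be compared directly:

import math

def visc_length_scales(Lx, Ly):
    # useAreaViscLength = FALSE: square root of the harmonic mean of Lx^2 and Ly^2,
    # which stays close to min(Lx, Ly), as the CFL argument suggests.
    L_harmonic = math.sqrt(2.0 / (1.0 / Lx**2 + 1.0 / Ly**2))
    # useAreaViscLength = TRUE: square root of the grid-cell area.
    L_area = math.sqrt(Lx * Ly)
    return L_harmonic, L_area

print(visc_length_scales(1e3, 9e3))   # (~1406 m, 3000 m) for a strongly anisotropic cell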
### 2.19.2. Mercator, Nondimensional Equations¶
The rotating, incompressible, Boussinesq equations of motion (Gill, 1982) [gill:82] on a sphere can be written in Mercator projection about a latitude $$\theta_0$$ and geopotential height $$z=r-r_0$$. The nondimensional form of these equations is:
(2.182)${\rm Ro} \frac{D{\tilde u}}{Dt} - \frac{{\tilde v} \sin\theta}{\sin\theta_0}+M_{Ro}{\frac{\partial{\pi}}{\partial{x}}} + \frac{\lambda{\rm Fr}^2 M_{Ro}\cos \theta}{\mu\sin\theta_0} w = -\frac{{\rm Fr}^2 M_{Ro} {\tilde u} w}{r/H} + \frac{{\rm Ro} {\bf \hat x}\cdot\nabla^2{\bf u}}{{\rm Re}}$
(2.183)${\rm Ro} \frac{D{\tilde v}}{Dt} + \frac{{\tilde u}\sin\theta}{\sin\theta_0} + M_{Ro}{\frac{\partial{\pi}}{\partial{y}}} = -\frac{\mu{\rm Ro} \tan\theta({\tilde u}^2 + {\tilde v}^2)}{r/L} - \frac{{\rm Fr}^2M_{Ro} {\tilde v} w}{r/H} + \frac{{\rm Ro} {\bf \hat y}\cdot\nabla^2{\bf u}}{{\rm Re}}$
(2.184)${\rm Fr}^2\lambda^2\frac{D{w}}{Dt} - b + {\frac{\partial{\pi}}{\partial{z}}} -\frac{\lambda\cot \theta_0 {\tilde u}}{M_{Ro}} = \frac{\lambda\mu^2({\tilde u}^2+{\tilde v}^2)}{M_{Ro}(r/L)} + \frac{{\rm Fr}^2\lambda^2{\bf \hat z}\cdot\nabla^2{\bf u}}{{\rm Re}}$
(2.185)$\frac{D{b}}{Dt} + w = \frac{\nabla^2 b}{\Pr{\rm Re}}\nonumber$
(2.186)$\mu^2\left({\frac{\partial{\tilde u}}{\partial{x}}} + {\frac{\partial{\tilde v}}{\partial{y}}} \right)+{\frac{\partial{w}}{\partial{z}}} = 0$
Where
$\mu\equiv\frac{\cos\theta_0}{\cos\theta},\ \ \ {\tilde u}=\frac{u^*}{V\mu},\ \ \ {\tilde v}=\frac{v^*}{V\mu}$
$f_0\equiv2\Omega\sin\theta_0,\ \ \ \frac{D}{Dt} \equiv \mu^2\left({\tilde u}\frac{\partial}{\partial x} +{\tilde v} \frac{\partial}{\partial y} \right) +\frac{{\rm Fr}^2 M_{Ro}}{\rm Ro} w\frac{\partial}{\partial z}$
$x\equiv \frac{r}{L} \phi \cos \theta_0, \ \ \ y\equiv \frac{r}{L} \int_{\theta_0}^\theta \frac{\cos \theta_0 {\,\rm d\theta}'}{\cos\theta'}, \ \ \ z\equiv \lambda\frac{r-r_0}{L}$
$t^*=t \frac{L}{V},\ \ \ b^*= b\frac{V f_0M_{Ro}}{\lambda}$
$\pi^* = \pi V f_0 LM_{Ro},\ \ \ w^* = w V \frac{{\rm Fr}^2 \lambda M_{Ro}}{\rm Ro}$
${\rm Ro} \equiv \frac{V}{f_0 L},\ \ \ M_{Ro}\equiv \max[1,\rm Ro]$
${\rm Fr} \equiv \frac{V}{N \lambda L}, \ \ \ {\rm Re} \equiv \frac{VL}{\nu}, \ \ \ {\rm Pr} \equiv \frac{\nu}{\kappa}$
Dimensional variables are denoted by an asterisk where necessary. If we filter over a grid scale typical for ocean models:
1 m $$< L <$$ 100 km
0.0001 $$< \lambda <$$ 1
0.001 m/s $$< V <$$ 1 m/s
$$f_0 <$$ 0.0001 s$$^{-1}$$
0.0001 s$$^{-1}$$ $$< N <$$ 0.01 s$$^{-1}$$
these equations are very well approximated by
(2.187)$\begin{split}{\rm Ro}\frac{D{\tilde u}}{Dt} - \frac{{\tilde v} \sin\theta}{\sin\theta_0}+M_{Ro}{\frac{\partial{\pi}}{\partial{x}}} = -\frac{\lambda{\rm Fr}^2M_{Ro}\cos \theta}{\mu\sin\theta_0} w + \frac{{\rm Ro}\nabla^2{{\tilde u}}}{{\rm Re}} \\\end{split}$
(2.188)$\begin{split}{\rm Ro}\frac{D{\tilde v}}{Dt} + \frac{{\tilde u}\sin\theta}{\sin\theta_0}+M_{Ro}{\frac{\partial{\pi}}{\partial{y}}} = \frac{{\rm Ro}\nabla^2{{\tilde v}}}{{\rm Re}} \\\end{split}$
(2.189)${\rm Fr}^2\lambda^2\frac{D{w}}{Dt} - b + {\frac{\partial{\pi}}{\partial{z}}} = \frac{\lambda\cot \theta_0 {\tilde u}}{M_{Ro}} +\frac{{\rm Fr}^2\lambda^2\nabla^2w}{{\rm Re}}$
(2.190)$\frac{D{b}}{Dt} + w = \frac{\nabla^2 b}{\Pr{\rm Re}}$
(2.191)$\mu^2\left({\frac{\partial{\tilde u}}{\partial{x}}} + {\frac{\partial{\tilde v}}{\partial{y}}} \right)+{\frac{\partial{w}}{\partial{z}}} = 0$
$\nabla^2 \approx \left(\frac{\partial^2}{\partial x^2} +\frac{\partial^2}{\partial y^2} +\frac{\partial^2}{\lambda^2\partial z^2}\right)$
Neglecting the non-frictional terms on the right-hand side is usually called the ’traditional’ approximation. It is appropriate, with either large aspect ratio or far from the tropics. This approximation is used here, as it does not affect the form of the eddy stresses which is the main topic. The frictional terms are preserved in this approximate form for later comparison with eddy stresses.
Jul 1, 2014
# static dimension checking
The hmatrix library includes an experimental strongly-typed interface (based on GHC's type literals extension) for automatic inference and compile-time checking of dimensions in matrix and vector operations. This prevents structural errors like trying to multiply two matrices with inconsistent sizes or adding vectors of different dimensions. The interface uses existential types to safely work with computations which produce data-dependent result types.
We import the library as follows (other language extensions may also be required):
{-# LANGUAGE DataKinds #-}
import GHC.TypeLits
import Numeric.LinearAlgebra.Static
## data types
Vectors and matrices are defined using safe constructors:
a = row (vec4 1 2 3 4)
===
row (2 & 0 & 7 & 1)
u = vec4 10 20 30 40
v = vec2 5 0 & 0 & 3 & 7
λ> a
(matrix
[ 1.0, 2.0, 3.0, 4.0
, 2.0, 0.0, 7.0, 1.0 ] :: L 2 4)
λ> u
(vector [10.0,20.0,30.0,40.0] :: R 4)
λ> v
(vector [5.0,0.0,0.0,3.0,7.0] :: R 5)
We can promote the traditional hmatrix arrays to the new dimension-typed ones:
import qualified Numeric.LinearAlgebra as LA
λ> create (LA.vector [1..10]) :: Maybe (R 5)
Nothing
λ> create (LA.matrix 2 [1..4]) :: Maybe (L 2 2)
Just (matrix
[ 1.0, 2.0
, 3.0, 4.0 ] :: L 2 2)
For convenience, data can also be created from lists of elements and explicit type annotations:
b = matrix
[ 2, 0,-1
, 1, 1, 7
, 5, 3, 1
, 2, 8, 0 ] :: L 4 3
w = vector [1..10] :: R 10
λ> b
(matrix
[ 2.0, 0.0, -1.0
, 1.0, 1.0, 7.0
, 5.0, 3.0, 1.0
, 2.0, 8.0, 0.0 ] :: L 4 3)
λ> w
(vector [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0] :: R 10)
If the length of the input list does not match the declared size we get a run time error:
λ> vector [1..5] :: R 3
*** Exception: R 3 can't be created from elements [1.0,2.0,3.0,4.0, ... ]
λ> matrix [1..5] :: L 2 2
*** Exception: L 2 2 can't be created from elements [1.0,2.0,3.0,4.0,5.0]
(This kind of input data error is relatively unimportant. The key feature of a strongly typed interface is that the computations are proven to be structurally correct and when they receive well defined inputs will not produce run-time dimension consistency errors.)
Most of the functions that in the traditional hmatrix interface require a size argument are now generic. For instance:
c :: (KnownNat m, KnownNat n) => L m n
c = build (\r c -> r**2-c/2)
λ> c :: L 3 4
(matrix
[ 0.0, -0.5, -1.0, -1.5
, 1.0, 0.5, 0.0, -0.5
, 4.0, 3.5, 3.0, 2.5 ] :: L 3 4)
λ> c :: Sq 5
(matrix
[ 0.0, -0.5, -1.0, -1.5, -2.0
, 1.0, 0.5, 0.0, -0.5, -1.0
, 4.0, 3.5, 3.0, 2.5, 2.0
, 9.0, 8.5, 8.0, 7.5, 7.0
, 16.0, 15.5, 15.0, 14.5, 14.0 ] :: L 5 5)
The type signature is needed to make the definition size-polymorphic.
The function disp pretty-prints the arrays with a given number of decimal digits:
λ> disp 3 (c :: L 3 5)
L 3 5
0.000 -0.500 -1.000 -1.500 -2.000
1.000 0.500 0.000 -0.500 -1.000
4.000 3.500 3.000 2.500 2.000
(For clarity we show the output of some examples below using hidden disp commands.)
Constant and diagonal arrays are compactly stored and many operations are efficiently performed without expansion:
λ> 7 :: L 3 5
(7.0 :: L 3 5)
λ> diag u
(diag 0.0 [10.0,20.0,30.0,40.0] :: L 4 4)
λ> eye + 2 :: Sq 4
(diag 2.0 [3.0,3.0,3.0,3.0] :: L 4 4)
The disp function shows the "expanded" logical structure:
λ> disp 5 (eye + 2 :: Sq 4)
L 4 4
3 2 2 2
2 3 2 2
2 2 3 2
2 2 2 3
λ> disp 5 (3 :: R 5)
R 5
3 3 3 3 3
## safe matrix computations
The type system statically prevents structural errors:
λ> u
(vector [10.0,20.0,30.0,40.0] :: R 4)
λ> v
(vector [5.0,0.0,0.0,3.0,7.0] :: R 5)
λ> u + v
Couldn't match type ‘5’ with ‘4’
Expected type: R 4
Actual type: R 5
In the second argument of ‘(+)’, namely ‘v’
In the first argument of ‘print’, namely ‘(u + v)’
λ> (u & 1) + v
(vector [15.0,20.0,30.0,43.0,8.0] :: R 5)
λ> (vec2 1 2 & 3 & 4) <.> u
300.0
λ> cross (vec2 1 2) (vec3 1 2 3)
Couldn't match type ‘2’ with ‘3’
Expected type: R 3
Actual type: R 2
In the first argument of ‘cross’, namely ‘(vec2 1 2)’
In the first argument of ‘print’, namely
‘(cross (vec2 1 2) (vec3 1 2 3))’
λ> a
(matrix
[ 1.0, 2.0, 3.0, 4.0
, 2.0, 0.0, 7.0, 1.0 ] :: L 2 4)
λ> b
(matrix
[ 2.0, 0.0, -1.0
, 1.0, 1.0, 7.0
, 5.0, 3.0, 1.0
, 2.0, 8.0, 0.0 ] :: L 4 3)
λ> b <> a
Couldn't match type ‘2’ with ‘3’
Expected type: L 3 4
Actual type: L 2 4
In the second argument of ‘(<>)’, namely ‘a’
In the first argument of ‘print’, namely ‘(b <> a)’
λ> a <> b
(matrix
[ 27.0, 43.0, 16.0
, 41.0, 29.0, 5.0 ] :: L 2 3)
λ> a #> v
Couldn't match type ‘5’ with ‘4’
Expected type: R 4
Actual type: R 5
In the second argument of ‘(#>)’, namely ‘v’
In the first argument of ‘print’, namely ‘(a #> v)’
λ> a #> u
(vector [300.0,270.0] :: R 2)
Block matrices can be safely created and unspecified dimensions are automatically inferred:
λ> a ||| b
Couldn't match type ‘4’ with ‘2’
Expected type: L 2 3
Actual type: L 4 3
In the second argument of ‘(|||)’, namely ‘b’
In the first argument of ‘print’, namely ‘(a ||| b)’
d = b ||| 10*b ||| col 1
===
row range
λ> d
L 5 7
2 0 -1 20 0 -10 1
1 1 7 10 10 70 1
5 3 1 50 30 10 1
2 8 0 20 80 0 1
1 2 3 4 5 6 7
e = (a ||| a ||| 5)
===
(7 :: L 3 12)
λ> e
L 5 12
1 2 3 4 1 2 3 4 5 5 5 5
2 0 7 1 2 0 7 1 5 5 5 5
7 7 7 7 7 7 7 7 7 7 7 7
7 7 7 7 7 7 7 7 7 7 7 7
7 7 7 7 7 7 7 7 7 7 7 7
mhomog m = m ||| col 0
===
row 0 ||| (1 :: L 1 1)
λ> mhomog a
L 3 5
1 2 3 4 0
2 0 7 1 0
0 0 0 0 1
Array destructuring and element access is also safe:
λ> u
(vector [10.0,20.0,30.0,40.0] :: R 4)
λ> snd $ split u :: R 7
Couldn't match type ‘4 - p0’ with ‘7’
The type variable ‘p0’ is ambiguous
Expected type: (R p0, R 7)
Actual type: (R p0, R (4 - p0))
In the second argument of ‘($)’, namely ‘split u’
In the first argument of ‘print’, namely ‘(snd $ split u :: R 7)’
λ> snd $ split u :: R 3
(vector [20.0,30.0,40.0] :: R 3)
λ> fst . headTail . snd . headTail $ (range :: R 4)
2.0
λ> let x = col range <> row range :: L 4 5
λ> x
L 4 5
1   2   3   4   5
2   4   6   8  10
3   6   9  12  15
4   8  12  16  20
λ> snd . splitCols . fst . splitRows $ x :: Sq 2
L 2 2
4 5
8 10
Automatic inference of dimensions allows elegant definitions:
sumV v = v <.> 1
λ> sumV (vec2 1 2 & 3 & 100)
106.0
average :: (KnownNat n, 1<=n) => R n -> ℝ
average = (<.> (1/dim))
λ> average (vector [1..100] :: R 100)
50.5
The vector dim is a helpful generic constant which contains its own dimension:
λ> dim :: R 7
(7.0 :: R 7)
The type system statically prevents the attempt to take the average of an empty vector:
λ> let (u1 :: R 5, u2) = split (range :: R 5)
λ> sumV u1
15.0
λ> sumV u2
0.0
λ> average u1
3.0
λ> average u2
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
Actual type: 1 <=? 0
In the first argument of ‘print’, namely ‘(average u2)’
The function konst promotes a scalar type to a generic array:
unitary v = v / konst (sqrt (v <.> v))
λ> unitary (vec4 1 1 0 1 & 1)
(vector [0.5,0.5,0.0,0.5,0.5] :: R 5)
## warning
We must be careful with the Num instances and the append operator "&", since the numeric literals stand for constants of any dimension. Excess elements are always detected:
λ> cross (1 & 2 & 3) (1 & 2 & 3 & 4 & 5)
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
Actual type: 1 <=? 0
In the first argument of ‘(&)’, namely ‘1 & 2’
In the first argument of ‘(&)’, namely ‘1 & 2 & 3’
In the first argument of ‘(&)’, namely ‘1 & 2 & 3 & 4’
but the first literal may accidentally not be interpreted as a singleton:
λ> cross (1 & 2 & 3) (1 & 2)
(vector [1.0,1.0,-1.0] :: R 3)
λ> 1 & 2 :: R 5
(vector [1.0,1.0,1.0,1.0,2.0] :: R 5)
## linear algebra
The library provides a safe interface to the main linear algebra functions. For instance, linSolve only admits square coefficient matrices and returns Nothing if the system is singular:
λ> let m = matrix [1,2,3,5] :: L 2 2
λ> m #> (2 & 3)
(vector [8.0,21.0] :: R 2)
λ> linSolve m (col (8 & 21))
Just (matrix
[ 2.000000000000002
, 2.9999999999999987 ] :: L 2 1)
The function linSolve admits several right-hand sides. As an example we define an operator to work with a single right-hand side:
λ> let (|\|) m = fmap uncol . linSolve m . col
λ> m |\| (8 & 21)
Just (vector [2.000000000000002,2.9999999999999987] :: R 2)
λ> let m = matrix [1,2,2,4] :: L 2 2
λ> m |\| (8 & 21)
Nothing
If we don't mind the unhelpful error message produced by a singular input, the matrix inverse can be defined as
inv :: KnownNat n => Sq n -> Sq n
inv = fromJust . flip linSolve eye
λ> inv (diag (vec3 1 2 4))
(matrix
[ 1.0, 0.0, 0.0
, 0.0, 0.5, 0.0
, 0.0, 0.0, 0.25 ] :: L 3 3)
The operator (<\>) (equivalent to \ in Matlab/Octave) solves a general linear system in the least squares sense:
λ> let m = matrix [1,2,2,4,3,3] :: L 3 2
λ> m
(matrix
[ 1.0, 2.0
, 2.0, 4.0
, 3.0, 3.0 ] :: L 3 2)
λ> let x = m <\> col (3 & 7 & 6)
λ> x
(matrix
[ 0.6000000000000002
, 1.4000000000000001 ] :: L 2 1)
λ> m <> x
(matrix
[ 3.4000000000000004
, 6.800000000000001
, 6.000000000000001 ] :: L 3 1)
The thin variants of the SVD take into account the shape of the matrix:
λ> p
L 3 5
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
λ> svdTall p
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
Actual type: 5 <=? 3
In the first argument of ‘print’, namely ‘(svdTall p)’
In a stmt of a 'do' block: print (svdTall p)
λ> let (u,s,v) = svdFlat p
λ> u
L 3 3
-0.20 0.89 0.41
-0.52 0.26 -0.82
-0.83 -0.38 0.41
λ> s
R 3
35.13 2.47 0.00
λ> v
L 5 3
-0.35 -0.69 0.57
-0.40 -0.38 -0.75
-0.44 -0.06 -0.17
-0.49 0.25 0.30
-0.53 0.56 0.05
λ> u <> diag s <> tr v
L 3 5
1.00 2.00 3.00 4.00 5.00
6.00 7.00 8.00 9.00 10.00
11.00 12.00 13.00 14.00 15.00
The eigensystem functions take into account whether the matrix is symmetric (or Hermitian) for the output type. In general, the result is complex:
λ> q
(matrix
[ 3.0, -2.0
, 4.0, -1.0 ] :: L 2 2)
λ> eigenvalues q
C 2
1+2.00i 1-2.00i
λ> snd $ eigensystem q
M 2 2
0.41+0.41i  0.41-0.41i
      0.82        0.82
However, if we take the symmetric part of the matrix the result is real:
λ> sym q
Sym (matrix
[ 3.0,  1.0
,  1.0, -1.0 ] :: L 2 2)
λ> eigenvalues (sym q)
R 2
3.24  -1.24
λ> snd $ eigensystem (sym q)
L 2 2
-0.97 0.23
-0.23 -0.97
(and the eigenvectors are orthogonal, but this is not encoded in the type).
## existential dimensions
The size of the result of certain computations can only be known at run time. For instance, the dimension of the nullspace of a matrix depends on its rank, which is a nontrivial property of its elements:
t1 = matrix [ 1, 2, 3
, 4, 5, 6] :: L 2 3
t2 = matrix [ 1, 2, 3
, 2, 4, 6] :: L 2 3
λ> LA.nullspace (extract t1)
(3><1)
[ 0.40824829046386285
, -0.816496580927726
, 0.40824829046386313 ]
λ> LA.nullspace (extract t2)
(3><2)
[ 0.9561828874675147, 0.11952286093343913
, -4.390192218731975e-2, -0.8440132318358361
, -0.2894596810309586, 0.5228345342460776 ]
A possibility is that the function returns a list of typed vectors (e.g. nullspace :: L m n -> [R n]). This can be the right choice in some applications, but if this result is required as a matrix in subsequent computations we may introduce an unsafe step in the algorithm.
By hiding the unknown dimension in an existential type we can still compute safely with a strongly typed nullspace. For instance, we can easily check that it actually produces an approximate zero matrix:
checkNull x n = norm_Frob (x <> n) < 1E-10
λ> withNullspace t1 (\x -> checkNull t1 x)
True
λ> withNullspace t2 (\x -> checkNull t2 x)
True
If by mistake we define checkNull as follows we get a compilation error:
λ> checkNull x n = norm_Frob (n <> x) < 1E-10
Could not deduce (k ~ 2)
from the context (KnownNat k)
bound by a type expected by the context:
KnownNat k => L 3 k -> Bool
at :2:1-39
‘k’ is a rigid type variable bound by
a type expected by the context: KnownNat k => L 3 k -> Bool
at :2:1
Expected type: L 3 2
Actual type: L 3 k
Relevant bindings include n :: L 3 k (bound at :2:20)
In the second argument of ‘checkNull’, namely ‘n’
In the expression: checkNull t2 n
As another example, if we try to compute the following function we find that the existential dimension would escape its scope:
λ> withNullspace t2 (\x -> tr x <> x)
Couldn't match expected type ‘a0’ with actual type ‘L k k’
because type variable ‘k’ would escape its scope
This (rigid, skolem) type variable is bound by
a type expected by the context: KnownNat k => L 3 k -> a0
at static.aux.hs:495:39-72
Relevant bindings include
x :: L 3 k (bound at static.aux.hs:495:58)
In the expression: tr x <> x
In the second argument of ‘withNullspace’, namely
‘(\ x -> tr x <> x)’
In the first argument of ‘print’, namely
‘(withNullspace t2 (\ x -> tr x <> x))’
We probably meant the (probably not very meaningful) opposite operation:
λ> withNullspace t2 (\x -> x <> tr x)
L 3 3
0.929 -0.143 -0.214
-0.143 0.714 -0.429
-0.214 -0.429 0.357
λ> withNullspace t1 (\x -> x <> tr x)
L 3 3
0.167 -0.333 0.167
-0.333 0.667 -0.333
0.167 -0.333 0.167
The result is of the same size for the two inputs and known at compile time.
Note that from the withNullspace function we can trivially define the list-based version:
λ> let ker x = withNullspace x toColumns
λ> ker t1
[(vector [0.40824829046386285,-0.816496580927726,0.40824829046386313] :: R 3)]
λ> length (ker t2)
2
Another interesting example of the existential approach is the withCompactSVD factorization, which produces a triplet with a common dimension which depends on the rank of the matrix, which again cannot be known at compile time.
checkSVD m (u,s,v) = norm_Frob (m - u <> diag s <> tr v)
λ> withCompactSVD t1 $ checkSVD t1
3.789423922623494e-15
By mistake we could write
λ> checkSVD m (u,s,v) = norm_2 (m - u <> diag s <> v)
This is something that makes sense for appropriately consistent dimensions of m, u, s, and v, but when we try to use it with a "true" compact SVD decomposition we get a compilation error:
λ> withCompactSVD t2 (checkSVD t2)
Could not deduce (k ~ 3)
from the context (KnownNat k)
bound by a type expected by the context:
KnownNat k => (L 2 k, R k, L 3 k) -> ℝ
at :4:1-31
‘k’ is a rigid type variable bound by
a type expected by the context: KnownNat k => (L 2 k, R k, L 3 k) -> ℝ
at :4:1
Expected type: (L 2 k, R k, L 3 k) -> ℝ
Actual type: (L 2 k, R k, L k k) -> ℝ
In the second argument of ‘withCompactSVD’, namely ‘(checkSVD t2)’
Finally, the functions withVector and withMatrix wrap the traditional hmatrix vectors and matrices with existential dimensions to be safely used by other strongly typed computations. As an example we can check the above SVD decomposition on a big matrix read from a file:
λ> let datafile = "path/to/data/mnist.txt"
λ> mnist <- LA.loadMatrix datafile
λ> mnist
5000x785
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  5.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  0.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  4.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  1.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  9.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  2.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  1.0
 :    :    :    :    :    :    :   ::   :    :
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  1.0
0.0  0.0  0.0  0.0  0.0  0.0  0.0  ..  0.0  2.0
λ> withMatrix mnist $ \x -> withCompactSVD x (checkSVD x)
6.062944964780807e-10
## type inference
As a final example of a computation with strongly typed dimensions, we solve an overconstrained homogeneous system $Ax=0$. The solution is the right singular vector of minimum singular value, or equivalently, and for more interesting type checking, the eigenvector of minimum eigenvalue of $A^TA$.
We consider the following coefficient matrices:
a1 = matrix [ 0, 0, 1
, 1, 2, 0
, -2, -4, 0 ] :: L 3 3
a2 = matrix [ 0, 0, 1
, 1, 2, 0
, -2, -4, 0
, 3, 6.1, 0] :: L 4 3
the first case has an exact solution:
λ> withNullspace a1 toColumns
[(vector [-0.894427190999916,0.447213595499958,0.0] :: R 3)]
but the second one is overconstrained ($\text{rank}(A) > n-1$):
λ> withNullspace a2 toColumns
[]
The desired least squares nullspace function null1 solves both cases:
λ> null1 a1
(vector [-0.8944271909999159,0.4472135954999579,0.0] :: R 3)
λ> null1 a2
(vector [-0.8963282782819243,0.4433910436084173,0.0] :: R 3)
It can be defined very easily:
null1 :: (Dim m, Dim n) => L m (1+n) -> R (1+n)
null1 = uncol . snd . splitCols . snd . eigensystem . mTm
The function mTm x = tr x <> x returns a type checked symmetric matrix, so snd . eigensystem is guaranteed to be a real vector. Then, the final uncol . snd determines the size of the partition produced by splitCols on the eigenvectors.
If alternatively we had defined the function as the last element of a list of eigenvectors, we should check that the list is not empty (or that the matrix $A$ has at least one column).
This condition is statically enforced by the type system in the above definition. For clarity we have created an auxiliary class Dim for the somewhat redundant constraints required:
class (KnownNat (1 + p), KnownNat p, ((1 + p) - p) ~ 1, (p <= (1 + p))) => Dim (p :: Nat)
instance (KnownNat (1 + p), KnownNat p, ((1 + p) - p) ~ 1, (p <= (1 + p))) => Dim p
(Those constraints are inferred by ghci, and perhaps they say that the dimension must be finite...)
## safe indexing
Although this library is oriented to global array processing sometimes we must read a particular element of a vector or matrix. If range checking must be done at compile time the index cannot be a value but a type. A possible approach to generic type safe indexing is as follows:
data Coord (n :: Nat) = Coord
atV :: forall n k . (Dim n, Dim k, k+1<=n) => R n -> Coord k -> ℝ
atV v _ = extract v LA.! pos
where
pos = fromIntegral . natVal $ (undefined :: Proxy k)
atM :: forall m n i j . (Dim m, Dim n, Dim i, Dim j, i+1<=m, j+1<=n)
    => L m n -> Coord i -> Coord j -> ℝ
atM m _ _ = extract m `LA.atIndex` (pi,pj)
  where
    pi = fromIntegral . natVal $ (undefined :: Proxy i)
    pj = fromIntegral . natVal $ (undefined :: Proxy j)
cx = Coord :: Coord 0
cy = Coord :: Coord 1
cz = Coord :: Coord 2
λ> let v = vector [0..4] :: R 5
λ> atV v (Coord :: Coord 3)
3.0
λ> atV v (Coord :: Coord 7)
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
Actual type: (7 + 1) <=? 5
In the expression: atV v (Coord :: Coord 7)
In an equation for ‘it’: it = atV v (Coord :: Coord 7)
λ> atV v cz
2.0
λ> atV (vec3 10 20 30) cz
30.0
λ> atV (vector [] :: R 0) cx
Couldn't match type ‘'False’ with ‘'True’
Expected type: 'True
Actual type: (0 + 1) <=? 0
In the expression: atV (vector [] :: R 0) cx
In an equation for ‘it’: it = atV (vector [] :: R 0) cx
λ> atM (matrix [1..12] :: L 3 4) cz cy
10.0
λ> atM (matrix [1..12] :: L 3 4) cx cx
1.0
Aasheesh Can Paint His Doll in 20 Minutes and His Sister Chinki Can Do So in 25 Minutes. They Paint the Doll Together for Five Minutes. - Mathematics
Concept: Time and Work
Question
Aasheesh can paint his doll in 20 minutes and his sister Chinki can do so in 25 minutes. They paint the doll together for five minutes. At this juncture they have a quarrel and Chinki withdraws from painting. In how many minutes will Aasheesh finish the painting of the remaining doll?
Solution
$\text{ Aasheesh can paint a doll in 20 minutes, and Chinki can do the same in 25 minutes } .$
$\therefore \text{ Work done by Aasheesh in 1 minute } = \frac{1}{20}$
$\therefore \text{ Work done by Chinki in 1 minute } = \frac{1}{25}$
$\therefore \text{ Work done by them together } = \frac{1}{20} + \frac{1}{25}$
$= \frac{5 + 4}{100} = \frac{9}{100}$
$\therefore \text{ Work done by them in 5 minutes } = 5 \times \frac{9}{100} = \frac{9}{20}$
$\text{ Remaining work } = 1 - \frac{9}{20} = \frac{11}{20}$
$\text{ It is given that the remaining work is done by Aasheesh } .$
$\text{ The complete work is done by Aasheesh in 20 minutes } .$
$\therefore \frac{11}{20}th \text{ work will be done by Aasheesh in } \left( 20 \times \frac{11}{20} \right) \text{ minutes or 11 minutes } .$
$\text{ Thus, the remaining work is done by Aasheesh in 11 minutes } .$
### On nice equivalence relations on ${}^\lambda 2$
by Shelah. [Sh:724]
Archive for Math Logic, 2004
The main question here is the possible generalization of the following theorem on "simple" equivalence relations on ${}^\omega 2$ to higher cardinals. Theorem: (1) Assume that (a) $E$ is a Borel 2-place relation on ${}^\omega 2$, (b) $E$ is an equivalence relation, (c) if $\eta, \nu \in {}^\omega 2$ and $(\exists! n)(\eta(n) \neq \nu(n))$, then $\eta, \nu$ are not $E$-equivalent. Then there is a perfect subset of ${}^\omega 2$ of pairwise non-$E$-equivalent members. (2) Instead of "$E$ is Borel", "$E$ is analytic (or even a Borel combination of analytic relations)" is enough. (3) If $E$ is a $\Pi^1_2$ relation which is an equivalence relation satisfying clauses (b)+(c) in $V^{\mathrm{Cohen}}$, then the conclusion of (1) holds.
# Ticket #69: 60aca94c7cb5.patch
File 60aca94c7cb5.patch, 4.1 KB (added by Balazs Dezso, 8 years ago)
Doc fixes
• ## lemon/concepts/bpgraph.h
```# HG changeset patch
# User Balazs Dezso <[email protected]>
# Date 1326319085 -3600
# Node ID 60aca94c7cb52d5ebfc9b0632d565ee8487f6b55
# Parent 37d179a450778255e09c55e8758b09668a10ffef
Doc fix
diff -r 37d179a45077 -r 60aca94c7cb5 lemon/concepts/bpgraph.h```
      /// \brief Converts the node to red node object.
      ///
-     /// This class is converts unsafely the node to red node
+     /// This function converts unsafely the node to red node
      /// object. It should be called only if the node is from the red
      /// partition or INVALID.
      RedNode asRedNodeUnsafe(const Node&) const { return RedNode(); }

      /// \brief Converts the node to blue node object.
      ///
-     /// This class is converts unsafely the node to blue node
+     /// This function converts unsafely the node to blue node
      /// object. It should be called only if the node is from the red
      /// partition or INVALID.
      BlueNode asBlueNodeUnsafe(const Node&) const { return BlueNode(); }

      /// \brief Converts the node to red node object.
      ///
-     /// This class is converts safely the node to red node
+     /// This function converts safely the node to red node
      /// object. If the node is not from the red partition, then it
      /// returns INVALID.
      RedNode asRedNode(const Node&) const { return RedNode(); }

      /// \brief Converts the node to blue node object.
      ///
-     /// This class is converts unsafely the node to blue node
+     /// This function converts unsafely the node to blue node
      /// object. If the node is not from the blue partition, then it
      /// returns INVALID.
      BlueNode asBlueNode(const Node&) const { return BlueNode(); }
• ## lemon/concepts/graph_components.h
`diff -r 37d179a45077 -r 60aca94c7cb5 lemon/concepts/graph_components.h`
      /// \brief Class to represent red nodes.
      ///
      /// This class represents the red nodes of the graph. The red
-     /// nodes can be used also as normal nodes.
+     /// nodes can also be used as normal nodes.
      class RedNode : public Node {
        typedef Node Parent;

      /// \brief Class to represent blue nodes.
      ///
      /// This class represents the blue nodes of the graph. The blue
-     /// nodes can be used also as normal nodes.
+     /// nodes can also be used as normal nodes.
      class BlueNode : public Node {
        typedef Node Parent;

      /// \brief Converts the node to red node object.
      ///
-     /// This class is converts unsafely the node to red node
+     /// This function converts unsafely the node to red node
      /// object. It should be called only if the node is from the red
      /// partition or INVALID.
      RedNode asRedNodeUnsafe(const Node&) const { return RedNode(); }

      /// \brief Converts the node to blue node object.
      ///
-     /// This class is converts unsafely the node to blue node
+     /// This function converts unsafely the node to blue node
      /// object. It should be called only if the node is from the red
      /// partition or INVALID.
      BlueNode asBlueNodeUnsafe(const Node&) const { return BlueNode(); }

      /// \brief Converts the node to red node object.
      ///
-     /// This class is converts safely the node to red node
+     /// This function converts safely the node to red node
      /// object. If the node is not from the red partition, then it
      /// returns INVALID.
      RedNode asRedNode(const Node&) const { return RedNode(); }

      /// \brief Converts the node to blue node object.
      ///
-     /// This class is converts unsafely the node to blue node
+     /// This function converts unsafely the node to blue node
      /// object. If the node is not from the blue partition, then it
      /// returns INVALID.
      BlueNode asBlueNode(const Node&) const { return BlueNode(); }
Newton's law of universal gravitation states that every particle attracts every other particle in the universe with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers. Gravitational interaction exists not only between the earth and other objects; it exists between all objects with mass. The force acts in the direction of the line connecting the centers of the masses, the two objects exert equal and opposite attraction on each other, and the force decreases as the distance between the objects increases. Gravity is long range, but it is a weak force on the human scale.

In the usual notation, F is the gravitational force of attraction (N) and G is Newton's gravitational constant, G = 6.67 × 10⁻¹¹ N m²/kg². For example, two objects of mass m1 = 40 kg and m2 = 30 kg separated by r = 2 m attract each other with force F = G m1 m2 / r².

The same law that pulls an apple toward the earth keeps the planets revolving around the sun; it explains the tides, helps scientists study planetary orbits, and the constant G was later measured in Cavendish's experiment.

Download the Show Notes: http://www.mindset.co.za/learn/sites/files/LXL2013/LXL_Gr11PhysicalSciences_05_Newton's%20Law%20of%20Gravitation_26Feb.pdf
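A quick numerical check of the quoted example (an illustrative sketch; the masses and separation are the values given above):

G = 6.67e-11                  # N m^2 / kg^2, Newton's gravitational constant
m1, m2, r = 40.0, 30.0, 2.0   # kg, kg, m

F = G * m1 * m2 / r**2        # law of universal gravitation
print(F)                      # about 2.0e-8 N -- a very weak force on the human scale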
## Basic College Mathematics (10th Edition)
The compound amount is the sum of the original principal and the total accumulated interest. The interest is the difference between the compound amount and the original principal.

For example, if a principal of $\$1000$ has an interest rate of 10% and is compounded over two years, then the compound amount is found thus: we use the interest formula (P = 1000, r = 0.1, t = 1), $I=Prt$, and apply it to each of the 2 years, adding the previous year's interest to the principal each time.

Year 1: $I=1000\times0.10\times1=100$
Year 2: $I=(1000+100)\times0.10\times1=110$

Thus the total compound amount is $1000+100+110=\$1210$, and the total interest is $100+110=\$210$.
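The same year-by-year calculation can be written as a short sketch (illustrative only; the principal, rate, and term are those of the example above):

def compound_amount(principal, rate, years):
    # Apply I = P*r*t with t = 1 for each year, adding the interest to the principal.
    amount, total_interest = principal, 0.0
    for _ in range(years):
        interest = amount * rate
        total_interest += interest
        amount += interest
    return amount, total_interest

print(compound_amount(1000, 0.10, 2))   # approximately (1210.0, 210.0)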
## Purpose¶
Computes descriptive statistics on a selection of columns from a matrix located in a GAUSS Data Archive.
## Format¶
dout = gdaDStatMat(dc0, filename, gmat, colind, vnamevar)
Parameters:
• dc0 (struct) –
an instance of a dstatmtControl structure with the following members:
dc0.altnames Kx1 string array of alternate variable names for the output. Default = "". If set, it must have the same number of rows as colind.
dc0.maxbytes scalar, the maximum number of bytes to be read per iteration of the read loop. Default = 1e9.
dc0.maxvec scalar, the largest number of elements allowed in any one matrix. Default = 20000.
dc0.miss scalar, one of the following:
0: There are no missing values (fastest). 1: Listwise deletion, drop a row if any missings occur in it. 2: Pairwise deletion.
Default = 0.
dc0.output scalar, one of the following:
0: Do not print output table. 1: Print output table.
Default = 1.
dc0.row scalar, the number of rows of vnamevar to be read per iteration of the read loop. If 0, (default) the number of rows will be calculated using dc0.maxbytes and dc0.maxvec.
• filename (string) – name of data file.
• gmat (string or scalar) – name of matrix or index of matrix.
• colind (Kx1 vector) – indices of columns in variable to use.
• vnamevar (string or scalar) – name of the string containing the variable names in the matrix or index of the string containing the variable names in the matrix.
Returns:
dout (struct) –
instance of dstatmtOut struct with the following members:
dout.vnames Kx1 string array, the names of the variables used in the statistics.
dout.mean Kx1 vector, means.
dout.var Kx1 vector, variance.
dout.std Kx1 vector, standard deviation.
dout.min Kx1 vector, minima.
dout.max Kx1 vector, maxima.
dout.valid Kx1 vector, the number of valid cases.
dout.missing Kx1 vector, the number of missing cases.
dout.errcode
scalar, error code, 0 if successful, otherwise one of the following:
1: No GDA indicated. 2: Variable must be Nx1. 3: Not implemented for complex data. 4: Variable must be type matrix. 5: Too many missings, no data left after packing. 6: altnames member of dstatmtControl structure wrong size. 7: Data read error.
## Examples¶
In order to create a real, working example that you can use, you must first create a sample GAUSS Data Archive with the code below.
// Create an example GAUSS Data Archive
ret = gdaCreate("myfile.gda", 1);
// Add a variable 'A' which is a 10x5 random normal matrix
ret = gdaWrite("myfile.gda", rndn(10, 5), "A");
// Add a variable 'COLS' which is a 5x1 string array
string vnames = { "X1", "X2", "X3", "X4", "X5" };
ret = gdaWrite("myfile.gda", vnames, "COLS");
This code above will create a GAUSS Data Archive containing two variables, the GAUSS matrix A containing the data and COLS which contains the names for the columns of the matrix A which are the model variables (X1, X2,...).
The code below computes the statistics on each of the columns of the matrix A.
/*
** Declare instance of the
** dstatmtControl structure
*/
struct dstatmtControl dc0;
dc0 = dstatmtControlCreate;
// Indices of variables to evaluate
colind = { 1, 2, 3, 4, 5 };
// Declare output structure
struct dstatmtout dout;
dout = gdaDStatMat(dc0, "myfile.gda", "A", colind, "COLS" );
The final input to gdaDStatMat above tells the function the names to use for the columns of A. In this example, you can reference the COLS variable by name as you see in the example below. Alternatively, you can access this variable by index. Since COLS is the second variable in the GAUSS Data Archive created at the start of this example, the following is equivalent to the last line above:
dout = gdaDStatMat(dc0, "myfile.gda", "A", colind, 2 );
If you wanted to calculate the statistics on just the first, third and fifth columns of A:
colind = { 1, 3, 5 };
dout = gdaDStatMat(dc0, "myfile.gda", "A", colind, "COLS" );
Notice in these lines above that COLS still contains all of the variable names i.e. X1, X2, X3, X4, X5. COLS should always contain the full list of all variables in the matrix A.
## Remarks¶
Set colind to a scalar 0 to use all of the columns in vnamevar.
vnamevar must either reference an Mx1 string array variable containing variable names, where M is the number of columns in the dataset variable, or be set to a scalar 0. If vnamevar references an Mx1 string array variable, then only the elements indicated by colind will be used. Otherwise, if vnamevar is set to a scalar 0, then the variable names "X1, X2, ..., XK" for the output will be generated automatically, unless the alternate variable names are set explicitly in the dc0.altnames member of the dstatmtControl structure.
If pairwise deletion is used, the minima and maxima will be the true values for the valid data. The means and standard deviations will be computed using the correct number of valid observations for each variable.
# Kinetic Path Summation, Multi–Sheeted Extension of Master Equation, and Evaluation of Ergodicity Coefficient
A. N. Gorban University of Leicester, UK
###### Abstract
We study the Master equation with time–dependent coefficients, a linear kinetic equation for the Markov chains or for the monomolecular chemical kinetics. For the solution of this equation a path summation formula is proved. This formula represents the solution as a sum of solutions for simple kinetic schemes (kinetic paths), which are available in explicit analytical form. The relaxation rate is studied and a family of estimates for the relaxation time and the ergodicity coefficient is developed. To calculate the estimates we introduce the multi–sheeted extensions of the initial kinetics. This approach allows us to exploit the internal (“micro”)structure of the extended kinetics without perturbation of the base kinetics.
###### keywords:
Path summation, Master Equation, ergodicity coefficient, transition graph, reaction network, kinetics, relaxation time, replica
###### Pacs:
Physica A 390 (2011) 1009–1025
Corresponding author: University of Leicester, LE1 7RH, UK
## 1 Introduction
### 1.1 The problem
First-order kinetics form the simplest and well-studied class of kinetic systems. It includes the continuous-time Markov chains [1, 2] (the Master Equation [3]), kinetics of monomolecular and pseudomonomolecular reactions [4], provides a natural language for description of fluxes in networks and has many other applications, from physics and chemistry to biology, engineering, sociology, and even political science.
At the same time, the first-order kinetics are very fundamental and provide the background for kinetic description of most of nonlinear systems: we almost always start from the Master Equation (it may be very high-dimensional) and then reduce the description to a lower level but with nonlinear kinetics.
For the description of the first order kinetics we select the species–concentration language of chemical kinetics, which is completely equivalent to the states–probabilities language of the Markov chains theory and is a bit more flexible in the normalization choice: the sum of concentration could be any positive number, while for the Markov chains we have to introduce special “incomplete states”.
The first-order kinetic system is weakly ergodic if it allows the only conservation law: the sum of concentrations. Such a system forgets its initial condition: the distance between any two trajectories with the same value of the conservation law tends to zero when time goes to infinity. Among all possible distances, the $l_1$ distance plays a special role: it decreases monotonically in time for any first order kinetic system. Further in this paper, we use the $l_1$ norm on the space of concentrations.
Straightforward analysis of the relaxation rate for a linear system includes computation of the spectrum of the operator of the shift in time. For an autonomous system, we have to find the “slowest” nonzero eigenvalue of the kinetic (generator) matrix. For a system with time–dependent coefficients, we have to solve the linear differential equations for the fundamental operator (the shift in time). After that, we have to analyze the spectrum of this operator. Beyond the simplest particular cases there exist no analytical formulas for such calculations.
Nevertheless, there exists the method for evaluation of the contraction rate for the first order kinetics, based on the analysis of transition graph. For this evaluation, we need to solve kinetic equations for some irreversible acyclic subsystems which we call the kinetic paths (10). These kinetic paths are combined from simple fragments of the initial kinetic systems. For such systems, it is trivial to solve the kinetic equations in quadratures even if the coefficients are time–dependent. The explicit recurrent formulas for these solutions are given (12).
We construct the explicit formula for the solution of the kinetic equation for an arbitrary system with time–dependent coefficients by the summation of solutions of an infinite number of kinetic paths (15).
On the basis of this summation formula we produce a representation of the contraction rate for weakly ergodic systems (23). Because of monotonicity, any partial sum of this formula gives an estimate for this contraction.
To calculate the estimates we introduce the multi–sheeted extensions of the initial kinetics. Such a multi–sheeted extension is a larger Markov chain together with a projection of its (the larger) state space on the initial state space and the following property: the projection of the extended random walk is a random walk for the initial chain (Section 4.2).
This approach allows us to exploit the internal (“micro”)structure of the extended kinetics without perturbation of the base kinetics.
It is difficult to find who invented the kinetic path approach. We have used it in the 1980s [5], but consider this idea as a scientific “folklore”.
In this paper we study the backgrounds of the kinetic path methods. This return to backgrounds is inspired, in particular, by the series of work [6, 7], where the kinetic path summation formula was introduced (independently, on another material and with different argumentation) and applied to analysis of large stochastic systems. The method was compared to the kinetic Gillespie algorithm [8] and on model systems it was demonstrated [7] that for ensembles of rare trajectories far from equilibrium, the path sampling method performs better.
For the linear chains of reversible semi-Markovian processes with nearest neighbors hopping, the path summation formula was developed with counting all possible trajectories in Laplace space [9]. Higher order propagators and the first passage time were also evaluated. This problem statement was inspired, in particular, by the evolving field of single molecules (for more detail see [10]).
The idea of kinetic path with selection of the dominant paths gives an effective generalization of the limiting step approximation in chemical kinetics [11, 12].
## 2 Basic Notions
Let us recall the basic facts about the first-order kinetics. We consider a general network of linear reactions. This network is represented as a directed graph (digraph) ([13, 14]): vertices correspond to components $A_i$ and edges correspond to reactions ($A_i \to A_j$). For the set of vertices we use the notation $\mathcal{A}$, and for the set of edges the notation $E$. For each vertex $A_i$, a positive real variable $c_i$ (concentration) is defined. Each reaction is represented by a pair of numbers $(i,j)$, $i \neq j$. For each reaction a nonnegative continuous bounded function, the reaction rate coefficient (the variable “rate constant”) $k_{ij}(t)$, is given. To follow the standard notation of the matrix multiplication, the order of indexes in $k_{ij}$ is always inverse with respect to the reaction: it is $k_{ij}$ for the reaction $A_j \to A_i$, where the arrow shows the direction of the reaction. The kinetic equations have the form
$$\frac{dc_i}{dt}=\sum_{j,\ j\neq i}\bigl(k_{ij}(t)\,c_j-k_{ji}(t)\,c_i\bigr), \qquad (1)$$
or in the vector form: $\dot{c}=K(t)\,c$. The quantities $c_i$ are the concentrations of $A_i$ and $c$ is the vector of concentrations. We don't assume any special relation between the constants, and consider them as independent quantities.
For each $t$, the matrix $K(t)=(k_{ij})$ of kinetic coefficients has the following properties:
• non-diagonal elements of $K$ are non-negative;
• diagonal elements of $K$ are non-positive;
• elements in each column of $K$ have zero sum.
This family of matrices coincides with the family of generators of finite Markov chains in continuous time ([1, 2]).
A linear conservation law is a linear function defined on the concentrations, $b(c)=\sum_i b_i c_i$, whose value is preserved by the dynamics (1). Equation (1) always has a linear conservation law: $b^0(c)=\sum_i c_i$.
Another important and simple property of this equation is the preservation of positivity for the solution of (1): if $c_i(t_0)\geq 0$ for all $i$ then $c_i(t)\geq 0$ for $t\geq t_0$.
For many technical reasons it is useful to discuss not only positive solutions to (1) and further we do not automatically assume that $c_i\geq 0$.
The time shift operator which transforms $c(t_0)$ into $c(t)$ is $U(t,t_0)$. This is a column-stochastic matrix:
$$u_{ij}(t,t_0)\geq 0, \qquad \sum_i u_{ij}(t,t_0)=1 \quad (t\geq t_0).$$
This matrix satisfies the equation:
$$\frac{dU(t,t_0)}{dt}=K\,U(t,t_0) \quad\text{or}\quad \frac{du_{il}}{dt}=\sum_j\bigl(k_{ij}(t)\,u_{jl}-k_{ji}(t)\,u_{il}\bigr) \qquad (2)$$
with initial conditions $U(t_0,t_0)=\mathbf{1}$, where $\mathbf{1}$ is the unit operator ($u_{ij}(t_0,t_0)=\delta_{ij}$).
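As a side illustration (not part of the paper), the fundamental operator can be obtained numerically by integrating equation (2); the sketch below, in Python, uses a hypothetical 3-component network with one time-dependent coefficient and assumes SciPy is available.

import numpy as np
from scipy.integrate import solve_ivp

def K(t):
    # column j holds the coefficients k_ij(t) of the reactions A_j -> A_i;
    # the diagonal is filled so that every column sums to zero
    k = np.array([[0.0, 1.0, 0.5],
                  [2.0, 0.0, 1.0 + 0.5 * np.sin(t)],
                  [1.0, 0.3, 0.0]])
    np.fill_diagonal(k, -k.sum(axis=0))
    return k

def fundamental_operator(t, t0, n=3):
    # U(t, t0): solve dU/dt = K(t) U with U(t0, t0) = I, equation (2)
    rhs = lambda s, u: (K(s) @ u.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (t0, t), np.eye(n).ravel(), rtol=1e-9, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

U = fundamental_operator(2.0, 0.0)
print(U.sum(axis=0))  # every column sums to 1: U is stochastic in column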
Every stochastic in column operator $U$ is a contraction in the $l_1$ norm on the invariant hyperplanes $\sum_i x_i=\mathrm{const}$. It is sufficient to study the restriction of $U$ on the invariant subspace $\sum_i x_i=0$:
$$\|Ux\|\leq\delta\,\|x\| \quad\text{if}\quad \sum_i x_i=0$$
for some $\delta\leq 1$. The minimum of such $\delta$ is $\delta_U$, the norm of the operator $U$ restricted to its invariant subspace $\sum_i x_i=0$. One of the definitions of weak ergodicity is $\delta_U<1$ [15]. The unit ball of the $l_1$ norm restricted to the subspace $\sum_i x_i=0$ is a polyhedron with vertices
$$g_{ij}=\tfrac{1}{2}\,(e_i-e_j),\quad i\neq j, \qquad (3)$$
where $e_i$ are the standard basis vectors in $\mathbb{R}^n$: $(e_i)_j=\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta. For a norm with a polyhedral unit ball, the norm of the operator $U$ is
$$\max_{v\in V}\|U(v)\|,$$
where $V$ is the set of vertices of the unit ball. Therefore, for a ball with vertices (3)
$$\delta_U=\|U\|=\tfrac{1}{2}\max_{i,j}\sum_k\,|u_{ki}-u_{kj}|\leq 1. \qquad (4)$$
This is half of the maximum of the $l_1$ distances between the columns of $U$. The ergodicity coefficient is zero for a matrix with unit norm and one if $U$ transforms any two vectors with the same sum of coordinates into one vector ($\delta_U=0$).
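For a concrete matrix, formula (4) is a one-liner; the following sketch (a hypothetical example, not from the paper) computes $\delta_U$ as half of the maximal $l_1$ distance between columns.

import numpy as np

def contraction_coefficient(U):
    # delta_U = (1/2) * max_{i,j} sum_k |u_ki - u_kj|, formula (4)
    n = U.shape[1]
    return 0.5 * max(np.abs(U[:, i] - U[:, j]).sum()
                     for i in range(n) for j in range(n))

U = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])   # columns sum to one
print(contraction_coefficient(U))  # a number between 0 and 1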
The contraction coefficient (4) is a norm of the operator and therefore has a “submultiplicative” property: for two stochastic in column operators $U$ and $W$ the coefficient could be estimated through a product of the coefficients:
$$\delta_{UW}\leq\delta_U\,\delta_W. \qquad (5)$$
We will systematically use this property in the following way. In many estimates we find an upper border $\delta(t)$, $\delta_{U(t,t_0)}\leq\delta(t)$. If $\delta(t)\to 0$, then $\delta_{U(t,t_0)}\to 0$ exponentially with $t$. Nevertheless, the estimate $\delta(t)$ may originally have a positive limit when $t\to\infty$. In this situation we can use $\delta(t)$ for bounded $t\leq\tau_1$ and for larger times exploit the multiplicative estimate (5). The moment $\tau_1$ may be defined, for example, by maximization of the negative Lyapunov exponent:
$$\tau_1=\operatorname{arg\,max}_{\tau>0}\left\{\frac{-\ln\delta(\tau)}{\tau}\right\}. \qquad (6)$$
For a system with external fluxes the kinetic equation has the form
$$\frac{dc_i}{dt}=\sum_j\bigl(k_{ij}(t)\,c_j-k_{ji}(t)\,c_i\bigr)+\Pi_i(t). \qquad (7)$$
The Duhamel integral gives for this system with initial condition $c(t_0)$:
$$c(t)=U(t,t_0)\,c(t_0)+\int_{t_0}^{t}U(t,\tau)\,\Pi(\tau)\,d\tau,$$
where $\Pi(\tau)$ is the vector of fluxes with components $\Pi_i(\tau)$.
In particular, for stochastic in column operators this formula gives: an identity for the linear conservation law
$$\sum_i c_i(t)=\sum_i c_i(t_0)+\int_{t_0}^{t}\sum_i\Pi_i(\tau)\,d\tau, \qquad (8)$$
and an inequality for the norm
$$\|c(t)\|\leq\|U(t,t_0)\,c(t_0)\|+\int_{t_0}^{t}\|\Pi(\tau)\|\,d\tau\leq\|c(t_0)\|+\int_{t_0}^{t}\|\Pi(\tau)\|\,d\tau. \qquad (9)$$
We need the last formula for the estimation of contraction coefficients when the vector is not positive.
## 3 Kinetic Paths
Two vertices are called adjacent if they share a common edge. A directed path is a sequence of adjacent vertices where each step goes in the direction of an edge. A vertex $A_j$ is reachable from a vertex $A_i$ if there exists a directed path from $A_i$ to $A_j$.
Formally, a path in a reaction graph is any finite sequence of indexes (a multiindex) $I=(i_1,i_2,\dots,i_q)$ ($q\geq 1$) such that $(i_r,i_{r+1})\in E$ for all $r<q$ (i.e. there exists a reaction $A_{i_r}\to A_{i_{r+1}}$). The number $q$ of the vertices in the path may be any natural number (including 1), and any vertex can be included in the path several times. If $q=1$ then we call the one-vertex path degenerated. There is a natural order on the set of paths: $I\leq J$ if $J$ is a continuation of $I$, i.e. the sequence $J$ begins with the sequence $I$. In this order, the antecedent element (or the parent) for each $I$ is $I^-$, the path which we produce from $I$ by deletion of the last step. With this definition of parents, the set of the paths with a given start point is a rooted tree.
###### Definition 1
For each path $I$ we define an auxiliary set of reactions, the kinetic path:
$$B_{I_1}(i_1)\xrightarrow{\ k_{i_2i_1}\ }B_{I_2}(i_2)\xrightarrow{\ k_{i_3i_2}\ }\dots\xrightarrow{\ k_{i_qi_{q-1}}\ }B_{I_q}(i_q), \qquad (10)$$
where every auxiliary vertex also has an outgoing (“vertical”) reaction: with coefficient $\kappa_{i_1\bar{i}_2}$ for $B_{I_1}(i_1)$, $\kappa_{i_2\bar{i}_3}$ for $B_{I_2}(i_2)$, …, and $\kappa_{i_q}$ for the terminal vertex $B_{I_q}(i_q)$.
The vertices of the kinetic path (10) are auxiliary components. Each of them is determined by the path multiindex $I$ and the position $r$ in the path. There is a correspondence between the auxiliary component $B_{I_r}(i_r)$ and the component $A_{i_r}$ of the original network. The coefficient $\kappa_i$ is the sum of the reaction rate coefficients for all outgoing reactions from the vertex $A_i$ of the original network, and the coefficient $\kappa_{i\bar{j}}$ is this sum without the term which corresponds to the reaction $A_i\to A_j$:
$$\kappa_i=\sum_{l,\ l\neq i}k_{li}, \qquad \kappa_{i\bar{j}}=\sum_{l,\ l\neq i,j}k_{li}.$$
A quantity, the concentration $b_{I_r}(i_r)$, corresponds to any vertex of the kinetic path and a kinetic equation of the standard form can be written for this path. The end vertex, $B_{I_q}(i_q)$, plays a special role in the further consideration and we use the special notations: $\varsigma_I=b_{I_q}(i_q)$, $i_I=i_q$, $\kappa_I=\kappa_{i_q}$ is the reaction rate coefficient of the last outgoing reactions in (10) (the last vertical arrow) and $k_I=k_{i_qi_{q-1}}$ is the reaction rate coefficient of the last incoming reaction in (10) (the last horizontal arrow).
We use $P^+_I$ for the incoming flux for the terminal vertex of the kinetic path (10) and $P^-_I$ for the outgoing flux for this vertex.
Let us consider the set of all paths with the same start point and the solutions of all the correspondent kinetic equations with initial conditions:
$$b_{I_1}(i_1)=1, \qquad b_{I_l}(i_l)=0 \quad\text{for } l>1.$$
For the concentrations of the terminal vertices this self-consistent set of initial conditions gives the infinite chain (or, to be more precise, the tree) of simple kinetic equations for the set of variables $\varsigma_I$, $I\in\mathcal{I}_1$:
$$\dot{\varsigma}_1=-\kappa_1(t)\,\varsigma_1, \qquad \dot{\varsigma}_I=-\kappa_I(t)\,\varsigma_I+k_I(t)\,\varsigma_{I^-}, \qquad (11)$$
where index 1 corresponds to the degenerated path which consists of one vertex (the start point only) and $\kappa_1$ corresponds to $\kappa_{i_1}$.
This simple chain of equations with the initial conditions $\varsigma_1(t_0)=1$ and $\varsigma_I(t_0)=0$ for longer paths has a recurrent representation of the solution:
$$\varsigma_1(t)=\exp\Bigl(-\int_{t_0}^{t}\kappa_1(\tau)\,d\tau\Bigr), \qquad \varsigma_I(t)=\int_{t_0}^{t}\exp\Bigl(-\int_{\theta}^{t}\kappa_I(\tau)\,d\tau\Bigr)\,k_I(\theta)\,\varsigma_{I^-}(\theta)\,d\theta. \qquad (12)$$
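A minimal numerical sketch of the recurrence (12), not from the paper: the path, its coefficients and the time grid are hypothetical, the coefficients are taken constant for simplicity, and the trapezoid rule stands in for the quadratures.

import numpy as np

def path_solution(kappas, ks, t_grid):
    # kappas[r]: total outgoing coefficient at step r of the path;
    # ks[r]: incoming coefficient k for step r (ks[0] is unused);
    # returns varsigma(t) on t_grid for the degenerated path and all its continuations
    t0 = t_grid[0]
    sols = [np.exp(-kappas[0] * (t_grid - t0))]
    for r in range(1, len(kappas)):
        prev, cur = sols[-1], np.zeros_like(t_grid)
        for m, t in enumerate(t_grid):
            tau = t_grid[:m + 1]
            integrand = np.exp(-kappas[r] * (t - tau)) * ks[r] * prev[:m + 1]
            cur[m] = np.trapz(integrand, tau)   # the quadrature in (12)
        sols.append(cur)
    return sols

t = np.linspace(0.0, 5.0, 501)
sols = path_solution(kappas=[1.2, 0.8, 1.5], ks=[None, 0.7, 0.4], t_grid=t)
print([round(s[-1], 4) for s in sols])  # terminal values varsigma_I(5) along the path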
The analogues of the Kirchhoff rules from the theory of electric or hydraulic circuits are useful for the outgoing flux of a path $J$ and for the incoming fluxes of the paths which are the one-step continuations of this path (i.e. the paths $I$ with $I^-=J$):
$$\kappa_J\,\varsigma_J=\sum_{I,\ I^-=J}k_I\,\varsigma_{I^-}. \qquad (13)$$
Let us rewrite this formula as a relation between the outgoing flux from the last vertex of $J$ and the incoming fluxes for the last vertices of the paths $I$ ($I^-=J$):
$$P^-_J=\sum_{I,\ I^-=J}P^+_I. \qquad (14)$$
The Kirchhoff rule (14) together with the kinetic equation for given initial conditions immediately implies the following summation formula.
###### Theorem 1
Let us consider the solution to the initial kinetic equations (1) with the initial conditions $c_j(t_0)=\delta_{j i_1}$. Then
$$c_j(t)=\sum_{I\in\mathcal{I}_1,\ i_I=j}\varsigma_I(t) \qquad (15)$$
Proof. To prove this formula let us prove that the sum from the right hand side (i) exists (ii) satisfies the initial kinetic equations (1) and (iii) satisfies the selected initial conditions.
Convergence of the series with positive terms follows from the boundedness of the set of the partial sums, which follows from the Kirchhoff rules. According to them,
$$\sum_{I\in\mathcal{I}_1}\varsigma_I(t)\equiv 1$$
because $\mathcal{I}_1$ consists of the paths with the selected initial point only.
The sum
$$C_j=\sum_{I\in\mathcal{I}_1,\ i_I=j}\varsigma_I$$
satisfies the kinetic equation (1). Indeed, let $\mathcal{I}_{1j}$ be the set of all paths from $A_{i_1}$ to $A_j$. Let us find the set of all paths of the form $I^-$, $I\in\mathcal{I}_{1j}$. This set (we call it $\mathcal{I}^-_{1j}$) consists of all paths to all points which are connected to $A_j$ by a reaction:
$$\mathcal{I}^-_{1j}=\bigcup_{(l,j)\in E}\mathcal{I}_{1l}.$$
From this identity and the chain of the kinetic equations (11) we get immediately that
$$\frac{dC_i}{dt}=\sum_{j,\ j\neq i}\bigl(k_{ij}(t)\,C_j-k_{ji}(t)\,C_i\bigr), \qquad (16)$$
The coincidence of the initial conditions for $C_j$ and $c_j$ is obvious. Hence, because of the uniqueness theorem for equations (1) we proved that $C_j(t)\equiv c_j(t)$.
It is convenient to reformulate Theorem 1 in terms of the fundamental operator $U(t,t_0)$. The $i$th column of $U(t,t_0)$ is a solution of (1) with initial conditions $c_j(t_0)=\delta_{ji}$. Therefore, we have proved the following theorem. Let $\mathcal{I}_{ij}$ be the set of all paths with the initial vertex $A_i$ and the end vertex $A_j$ and let $\varsigma_I$ be the solutions of the chain (11) with the initial conditions $\varsigma_I(t_0)=1$ for the degenerated path $I=(i)$ and $\varsigma_I(t_0)=0$ for all longer paths.
###### Theorem 2
$$u_{ji}(t,t_0)=\sum_{I\in\mathcal{I}_{ij}}\varsigma_I(t). \qquad\square \qquad (17)$$
Remark 1. It is important that all the terms in the sum (17) are non-negative, and any partial sum gives an approximation to $u_{ji}(t,t_0)$ from below.
Remark 2. If the kinetic coefficients are constant then the Laplace transform gives a very simple representation for the solution to the chain (11) (see also computations in [9, 6]). The kinetic path (10) is a sequence of elementary links
$$\dots\xrightarrow{\ k_{i_ri_{r-1}}\ }B_{I_r}(i_r)\xrightarrow{\ k_{i_{r+1}i_r}\ }\dots \qquad (18)$$
with the outgoing (“vertical”) reaction of $B_{I_r}(i_r)$ having the coefficient $\kappa_{i_r\bar{i}_{r+1}}$.
The transfer function for the link (18) is the ratio of the output Laplace transform to the input Laplace transform for the equation. Let the input be a function $X_{i_r}(t)$ and the output be $b_{i_r}(t)$, where $b_{i_r}$ is the solution to the equation
$$\dot{b}_{i_1}=-\kappa_{i_1}\,b_{i_1}+X_{i_1}(t); \qquad \dot{b}_{i_r}=-\kappa_{i_r}\,b_{i_r}+k_{i_ri_{r-1}}\,X_{i_r}(t)\quad(r>1)$$
with zero initial conditions. The Laplace transform gives
$$W_{i_1}=\frac{1}{p+\kappa_{i_1}}, \qquad W_{i_r}=\frac{k_{i_ri_{r-1}}}{p+\kappa_{i_r}}\quad(r>1)$$
for a link (18) and for the whole path (10) we get
$$W_I=\frac{1}{p+\kappa_{i_1}}\prod_{r=2}^{q}\frac{k_{i_ri_{r-1}}}{p+\kappa_{i_r}}. \qquad (19)$$
(compare, for example, to formula (9) in [6]). It is worth mentioning the commutativity of this product: it does not change after a permutation of the internal links. For the infinite chain (11) with the initial conditions $\varsigma_1(t_0)=1$ and $\varsigma_I(t_0)=0$ for longer paths, the Laplace transform of the solutions is
$$\mathcal{L}\varsigma_I=W_I \qquad (20)$$
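For constant coefficients, (19)–(20) can be evaluated symbolically; a small SymPy sketch with hypothetical coefficient values (not taken from the paper):

import sympy as sp

p, t = sp.symbols('p t', positive=True)

def path_transfer_function(kappas, ks):
    # W_I(p) from (19): 1/(p + kappa_1) times the product of k_r/(p + kappa_r)
    W = 1 / (p + kappas[0])
    for kappa_r, k_r in zip(kappas[1:], ks[1:]):
        W *= k_r / (p + kappa_r)
    return sp.simplify(W)

W = path_transfer_function(kappas=[sp.Rational(6, 5), sp.Rational(4, 5), sp.Rational(3, 2)],
                           ks=[None, sp.Rational(7, 10), sp.Rational(2, 5)])
varsigma = sp.inverse_laplace_transform(W, p, t)   # back to the time domain, by (20)
print(sp.simplify(varsigma))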
## 4 Evaluation of Ergodicity Coefficient
### 4.1 Preliminaries: Weak Ergodicity and Annihilation Formula
#### 4.1.1 Geometric Criterion of Weak Ergodicity
In this Subsection, let us consider a reaction kinetic system (1) with constant coefficients $k_{ij}\geq 0$.
A set $D$ is positively invariant with respect to the kinetic equations (1) if any solution $c(t)$ that starts in $D$ at time $t_0$ ($c(t_0)\in D$) belongs to $D$ for $t>t_0$ ($c(t)\in D$ if $t>t_0$). It is straightforward to check that the standard simplex $\Sigma_n=\{c \mid c_i\geq 0,\ \sum_i c_i=1\}$ is a positively invariant set for kinetic equation (1): just check that if $c_i=0$ for some $i$, and all $c_j\geq 0$, then $\dot{c}_i\geq 0$. This simple fact immediately implies the following properties of $K$:
• All eigenvalues $\lambda$ of $K$ have non-positive real parts, $\mathrm{Re}\,\lambda\leq 0$, because solutions cannot leave $\Sigma_n$ in positive time;
• If $\mathrm{Re}\,\lambda=0$ then $\lambda=0$, because the intersection of $\Sigma_n$ with any plane is a polygon, and a polygon cannot be invariant with respect to rotations through sufficiently small angles;
• The Jordan cell of $K$ that corresponds to the zero eigenvalue is diagonal – because all solutions should be bounded in $\Sigma_n$ for positive time.
• The shift in time operator $\exp(Kt)$ is a contraction in the $l_1$ norm for $t>0$: there exists such a monotonically decreasing (non-increasing) function $\delta(t)$ ($t>0$, $0<\delta(t)\leq 1$) that for any two solutions of (1)
$$\sum_i|c_i(t)-c'_i(t)|\leq\delta(t)\sum_i|c_i(0)-c'_i(0)|. \qquad (21)$$
Moreover, if for $c(0)$ and $c'(0)$ the values of all linear conservation laws coincide then $\sum_i|c_i(t)-c'_i(t)|\to 0$ monotonically when $t\to\infty$.
The first-order kinetic system is weakly ergodic if it admits only one conservation law: the sum of concentrations. Such a system forgets its initial condition: the distance between any two trajectories with the same value of the conservation law tends to zero when time goes to infinity.
The difference between weakly ergodic and ergodic systems is in the obligatory existence of a strictly positive stationary distribution: for an ergodic system, in addition, a strictly positive steady state exists: $Kc^*=0$ and all $c^*_i>0$. Examples of weakly ergodic but not ergodic systems: a chain of reactions $A_1\to A_2\to\dots\to A_n$ and the symmetric random walk on an infinite lattice.
The weak ergodicity of the network follows from its topological properties.
###### Theorem 3
The following properties are equivalent (and each one of them can be used as an alternative definition of weak ergodicity):
1. There exists a unique independent linear conservation law for the kinetic equations (this is $b^0(c)=\sum_i c_i$).
2. For any normalized initial state $c(0)$ ($\sum_i c_i(0)=1$) there exists a limit state
$$c^*=\lim_{t\to\infty}\exp(Kt)\,c(0)$$
that is the same for all normalized initial conditions: for all $c$,
$$\lim_{t\to\infty}\exp(Kt)\,c=b^0(c)\,c^*.$$
3. For each two vertices $A_i$, $A_j$ we can find such a vertex $A_k$ that is reachable both from $A_i$ and from $A_j$. This means that the following structure exists:
$$A_i\to\dots\to A_k\leftarrow\dots\leftarrow A_j. \qquad (22)$$
One of the paths can be degenerated: it may be $A_i=A_k$ or $A_j=A_k$.
4. For $t>0$ the operator $\exp(Kt)$ is a strong contraction in the invariant subspace $\sum_i x_i=0$ in the $l_1$ norm: $\delta(t)<1$, the function $\delta(t)$ is strictly monotonic and $\delta(t)\to 0$ when $t\to\infty$.
The proof of this theorem could be extracted from detailed books about Markov chains and networks ([1, 17]). In its present form it was published in [5] with explicit estimations of the ergodicity coefficients.
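Item 3 of Theorem 3 is a purely graph-theoretic condition and is easy to test computationally; a small sketch (hypothetical edge list, not from the paper):

from itertools import combinations

def reachable_from(v, edges):
    # vertices reachable from v by directed paths (v itself included)
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def is_weakly_ergodic(edges, n):
    # item 3 of Theorem 3: every pair i, j has a common reachable vertex
    reach = [reachable_from(v, edges) for v in range(n)]
    return all(reach[i] & reach[j] for i, j in combinations(range(n), 2))

print(is_weakly_ergodic([(0, 1), (1, 2), (3, 2)], n=4))  # True: A_2 is reachable from every vertex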
Let us demonstrate how to prove the geometric criterion of weak ergodicity, the equivalence $1\Leftrightarrow 3$.
Let us assume that there are several linearly independent conservation laws, linear functionals , . The linear transform maps the standard simplex in (, ) onto a polyhedron . Because of linear independence of the system , , this has nonempty interior. Hence, it has no less than vertices , .
The preimage of every point in is a positively invariant subset with respect to kinetic equations because the standard simplex is positively invariant and the functionals are the conservation laws. In particular, preimage of every vertex is a positively invariant face of , ; if .
Each vertex of the standard simplex corresponds to a component : at this vertex and other there. Let the vertices from correspond to the components which form a set ; if .
For any and every reaction the component also belongs to because is positively invariant and a solution to kinetic equations cannot leave this face. Therefore, if , and then there is no such vertex that is reachable both from and from . We proved the implication .
Now, let us assume that the statement 3 is wrong and there exist two such components and that no components are reachable both from and . Let and be the sets of components reachable from and (including themselves), respectively; .
For every concentration vector a limit exists (because all eigenvalues of have non-positive real part and the Jordan cell of that corresponds to the zero eigenvalue is diagonal). The operator is linear operator in . Let us define two linear conservation laws:
bi(c)=∑Ar∈Sic∗r(c), bj(c)=∑Ar∈Sjc∗r(c) .
These functionals are linearly independent because for a vector with coordinates we get , and for a vector with coordinates we get , . Hence, the system has at least two linearly independent linear conservation laws. Therefore, .
#### 4.1.2 Annihilation Formula
In this Section, we find an exact expression for the contraction coefficient for the time evolution operator in the $l_1$ norm on the invariant subspace $\sum_i x_i=0$. The unit $l_1$-ball in this subspace is a polyhedron with vertices $g_{ij}=\frac12(e_i-e_j)$, where $e_i$ are the standard basis vectors (3). The contraction coefficient of an operator is its norm on that subspace (4); this is half of the maximum of the $l_1$ distances between the columns of $U$.
The kinetic path summation formula (17) estimates the matrix elements of $U$ from below, but this does not give the possibility to evaluate the difference between these elements. To use the summation formula efficiently, we need another expression for the contraction coefficient.
The $i$th column of $U(t,t_0)$ is a solution of the kinetic equations (1) with initial conditions $c_j(t_0)=\delta_{ji}$. For each vertex $A_j$ let us introduce the incoming flux in this solution:
$$\Pi^{i}_{j}(t)=\sum_q k_{jq}(t)\,c^{i}_q(t)$$
(the upper index $i$ indicates the number of the column in $U$, the lower index corresponds to the number of the vertex $A_j$).
Formula (4) for the contraction coefficient gives
$$\delta(t,t_0)=\tfrac12\max_{i,j}\|U(t,t_0)\,(e_i-e_j)\|.$$
$U(t,t_0)(e_i-e_j)$ is a solution to the kinetic equation (1) with initial conditions $c_i(t_0)=1$, $c_j(t_0)=-1$, and $c_l(t_0)=0$ for $l\neq i,j$. This is the difference between two solutions, $c^+$ and $c^-$. Let us use the notation
$$G^{ij}(t)=\tfrac12\,U(t,t_0)\,(e_i-e_j).$$
For each vertex $A_q$ we define
$$\Pi^+_q=\sum_{l,\ c^+_l>c^-_l}k_{ql}\,(c^+_l-c^-_l), \qquad \Pi^-_q=\sum_{l,\ c^+_l<c^-_l}k_{ql}\,(c^-_l-c^+_l).$$
The decrease in the norm of $G^{ij}$ can be represented as an annihilation of a flux with an equal amount of concentration at the vertex $A_q$ by the following rules:
1. If $c^+_q>c^-_q$ then the flux $\Pi^-_q$ annihilates with an equal amount of positive concentration stored at vertex $A_q$ (Fig. 1(a));
2. If $c^+_q<c^-_q$ then the flux $\Pi^+_q$ annihilates with an equal amount of negative concentration stored at vertex $A_q$ (Fig. 1(b));
3. If $c^+_q=c^-_q$ then the flux annihilates with the equal amount from the opposite flux (Fig. 1(c)).
Let us summarize these rules in one formula:
###### Proposition 1
$$\frac{d}{dt}\bigl\|G^{ij}(t)\bigr\|_{l_1}=-\sum_{q,\ c^+_q>c^-_q}\Pi^-_q(t)\;-\sum_{q,\ c^+_q<c^-_q}\Pi^+_q(t)\;-\sum_{q,\ c^+_q=c^-_q}\min\bigl\{\Pi^+_q(t),\Pi^-_q(t)\bigr\}. \qquad (23)$$
Immediately from (23) we obtain the following integral formula
$$1-\bigl\|G^{ij}(t)\bigr\|_{l_1}=\int_{t_0}^{t}\Bigl(\sum_{q,\ c^+_q>c^-_q}\Pi^-_q(\tau)\;+\sum_{q,\ c^+_q<c^-_q}\Pi^+_q(\tau)\;+\sum_{q,\ c^+_q=c^-_q}\min\bigl\{\Pi^+_q(\tau),\Pi^-_q(\tau)\bigr\}\Bigr)\,d\tau. \qquad (24)$$
The annihilation formula gives us a better understanding of the nature of contraction but is not fully constructive because it uses fluxes from solutions to the initial kinetic equation (1).
### 4.2 Multi–Sheeted Extensions of Kinetic System
Let us introduce a multi–sheeted extension of a kinetic system.
###### Definition 2
The vertices of a multi–sheeted extension of the system (1) are $(A_i,l)$, $l\in L$, where $L$ is a finite or countable set (the set of sheets). An individual vertex is $(i,l)$ ($i$ is a vertex of the initial network, $l\in L$). The corresponding concentration is $c_{(i,l)}$. The reaction rate constant for the reaction $(i,l)\to(j,r)$ is $k_{(j,r)(i,l)}$. This system is a multi–sheeted extension of the initial system if the identity holds:
$$\sum_r k_{(j,r)(i,l)}=k_{ji} \quad\text{for all } l. \qquad (25)$$
This means that the flux from each vertex is distributed between sheets, but the sum through sheets is the same as for the initial system. We call the kinetic behavior of the sum the base kinetics.
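A small sketch (hypothetical two-sheet example, not from the paper) of how the defining identity (25) can be checked: for every base reaction and every source sheet, the extended coefficients summed over the target sheets must reproduce the base coefficient.

import numpy as np

def is_multisheeted_extension(k_ext, k_base):
    # k_ext[j, r, i, l]: coefficient of (A_i, sheet l) -> (A_j, sheet r);
    # property (25): the sum over target sheets r equals k_base[j, i] for every l
    return np.allclose(k_ext.sum(axis=1), k_base[:, :, None])

n, sheets = 2, 2
k_base = np.array([[0.0, 1.0],
                   [3.0, 0.0]])
k_ext = np.zeros((n, sheets, n, sheets))
k_ext[1, 0, 0, 0] = 2.0   # A_0 (sheet 0) -> A_1, split between the two sheets
k_ext[1, 1, 0, 0] = 1.0
k_ext[1, 0, 0, 1] = 3.0   # A_0 (sheet 1) -> A_1, all directed to sheet 0
k_ext[0, 0, 1, 0] = 1.0   # A_1 -> A_0 kept on sheet 0 for both source sheets
k_ext[0, 0, 1, 1] = 1.0
print(is_multisheeted_extension(k_ext, k_base))  # True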
A simple proposition is important for further consideration.
###### Proposition 2
If $c_{(i,l)}(t)$ is a solution to the extended multi–sheeted system then
$$c_i(t)=\sum_l c_{(i,l)}(t) \qquad (26)$$
is a solution to the initial system and
$$\sum_{i,l}\bigl|c_{(i,l)}(t)\bigr|\geq\sum_i|c_i(t)|. \qquad (27)$$
(Here we do not assume positivity of all $c_{(i,l)}$.)
Formula (25) allows us to redirect reactions from one sheet to another (Fig. 2) without any change of the base kinetics. In the next section we show how to use this possibility for effective calculations.
Formula (26) means that the kinetics of the extended system in projection on the initial space is the base kinetics: the components $(A_i,l)$ are projected onto $A_i$, the projected vector of concentrations is $c$ with $c_i=\sum_l c_{(i,l)}$, and the projected kinetics is given by the initial Master equation with the projected coefficients $k_{ji}$. “Recharging” is any change of the non-negative extended coefficients which does not change the projected coefficients.
Formula (27) plays the key role in the further estimates. We will apply this formula to the solutions with zero sums of coordinates; they are differences between the normalized positive solutions.
### 4.3 Fluxes and Mixers
In this Subsection, we present the system of estimates for the contraction coefficient. The main idea is based on the following property which can be used as an alternative definition of weak ergodicity (Theorem 3): for each two vertices $A_i$, $A_j$ we can find a vertex $A_q$ that is reachable both from $A_i$ and from $A_j$. This means that the following structure exists:
$$A_i\to\dots\to A_q\leftarrow\dots\leftarrow A_j.$$
One of the paths can be degenerated: it may be $A_i=A_q$ or $A_j=A_q$. The positive flux from $A_i$ meets the negative flux from $A_j$ at the point $A_q$ and one of them annihilates with the equal amount of the concentration of the opposite sign.
Let us generalize this construction. Let us fix three different vertices: $A_i$ (the “positive source”), $A_j$ (the “negative source”) and $A_q$ (the “mixing point”). The degenerated case $A_q=A_i$ or $A_q=A_j$ we discuss separately. Let $S^+$ be such a system of vertices that $A_i\in S^+$, $A_q\notin S^+$, and for every vertex of $S^+$ there exists an oriented path in $S^+$ from $A_i$ to this vertex. Analogously, let $S^-$ be such a system of vertices that $A_j\in S^-$, $A_q\notin S^-$, and for every vertex of $S^-$ there exists an oriented path in $S^-$ from $A_j$ to this vertex. We assume that $S^+\cap S^-=\emptyset$.
With each subset of vertices $S$ we associate a kinetic system (subsystem): for $A_r\in S$
$$\dot{c}_r=\sum_{l,\ A_l\in S,\ r\neq l}k_{rl}\,c_l-\sum_{p=1}^{n}k_{pr}\,c_r. \qquad (28)$$
In this subsystem, we retain all the outgoing reactions for the vertices from $S$ and delete the reactions which lead to vertices in $S$ from “abroad”.
The flux from $S^+$ to $A_q$ is
$$\Pi^+_S=\sum_{r,\ A_r\in S^+}k_{qr}\,c_r(t),$$
where $c_r(t)$ is a component of the solution of (28) for $S=S^+$ with initial conditions $c_i(t_0)=1$, $c_r(t_0)=0$ for $r\neq i$. Analogously, we define the flux
$$\Pi^-_S=\sum_{r,\ A_r\in S^-}k_{qr}\,c_r(t),$$
where $c_r(t)$ is a component of the solution of (28) for $S=S^-$ with initial conditions $c_j(t_0)=1$, $c_r(t_0)=0$ for $r\neq j$. The decrease of the norm is estimated by the following theorem.
The system $S^+\cup S^-\cup\{A_q\}$ we call a mixer, that is, a device for mixing. An elementary mixer consists of two kinetic paths (22) with the correspondent outgoing reactions:
$$A_{i_1}\xrightarrow{\ k_{i_2i_1}\ }\dots\xrightarrow{\ k_{i_ri_{r-1}}\ }A_{i_r}\xleftarrow{\ k_{i_ri_{r+1}}\ }\dots\xleftarrow{\ k_{i_{r+l-1}i_{r+l}}\ }A_{i_{r+l}} \qquad (29)$$
(with the outgoing “vertical” reactions $\kappa_{i_1\bar{i}_2},\dots,\kappa_{i_r},\dots,\kappa_{i_{r+l}\bar{i}_{r+l-1}}$), where $i_1=i$, $i_r=q$, $i_{r+l}=j$.
The degenerated elementary mixer consists of one kinetic path:
Ai1< |
See the whole table
Select some columns
Filtering rows
Logic
Text patterns
To be or NULL to be
A little bit of mathematics
Let's practice
## Instruction
As you can see, the table car has 5 columns:
1. vin (short for vehicle identification number),
2. brand,
3. model,
4. price and
5. production_year.
The names of the columns are at the top of the result.
There are 8 cars in our table: two Ford cars, one Toyota, three Volkswagens, one Fiat and one Opel. You can see that the price of a Toyota is 11 300 and the prices for Fords are 8 000 and 12 500. Note that the price for Opel is not specified - we'll explain that later.
## Exercise
Examine the result.
When you're done, click to continue. |
## Long-term macrobioerosion in the Mediterranean Sea assessed by micro-computed tomography
• Biological erosion is a key process for the recycling of carbonate and the formation of calcareous sediments in the oceans. Experimental studies showed that bioerosion is subject to distinct temporal variability, but previous long-term studies were restricted to tropical waters. Here, we present results from a 14-year bioerosion experiment that was carried out along the rocky limestone coast of the island of Rhodes, Greece, in the Eastern Mediterranean Sea, in order to monitor the pace at which bioerosion affects carbonate substrate and the sequence of colonisation by bioeroding organisms. Internal macrobioerosion was visualised and quantified by micro-computed tomography and computer-algorithm-based segmentation procedures. Analysis of internal macrobioerosion traces revealed a dominance of bioeroding sponges producing eight types of characteristic Entobia cavity networks, which were matched to five different clionaid sponges by spicule identification in extracted tissue. The morphology of the entobians strongly varied depending on the species of the producing sponge, its ontogenetic stage, available space, and competition by other bioeroders. An early community developed during the first 5 years of exposure with initially very low macrobioerosion rates and was followed by an intermediate stage when sponges formed large and more diverse entobians and bioerosion rates increased. After 14 years, 30 % of the block volumes were occupied by boring sponges, yielding maximum bioerosion rates of 900 g m^−2 yr^−1. A high spatial variability in macrobioerosion prohibited clear conclusions about the onset of macrobioerosion equilibrium conditions. This highlights the necessity of even longer experimental exposures and higher replication at various factor levels in order to better understand and quantify temporal patterns of macrobioerosion in marine carbonate environments.
# Knowledge base: Warsaw University of Technology
Back
## The analysis of the "Regular" class unmanned airplane for SAE AeroDesign 2008 competition
### Dawid Mleczko
#### Abstract
This B.Sc. thesis describes the design process of an aircraft taking part in the SAE AeroDesign 2008 competitions, which take place in the U.S. and Brazil every year. The competitions are divided into two parts. In the first part, the team has to "sell" its product during a 10-minute presentation. The second part of the competition is more practical: teams have to prove that their designed aircraft really deserves to be called the winner. In the practical part, the aircraft has to lift as heavy a payload as possible. This is a very difficult task, because the airplanes have to be designed within structural constraints imposed by the rules, which change from year to year. The first part of this B.Sc. thesis is devoted to the analysis of the competition rules and of the aircraft which took part in past competitions. The most important goal of this chapter was to choose the optimal configuration (monoplane or biplane). The next step was to determine the main dimensions of the aircraft (length, span and height). The wing area was set based on an analysis of wings with different areas using the AVL program, which is based on a vortex lattice method. In the second part of this thesis, a method of aerodynamic optimization of the wing is outlined. At the beginning, it is described how the planform of the wing was chosen. One hundred seventy three planforms with the same area were analyzed, and the planform which generates the least induced drag was selected. The next step was to optimize the wing airfoils. Optimization was performed using the package for analysis and optimization of two-dimensional airfoils, MSES. The XFoil program was used for the final inverse design and analysis of two-dimensional aerodynamic characteristics. The next step was winglet design. The optimum load distribution on the winglet was easily found by applying Munk's theorem. In the next step the winglet airfoils were designed. In order to avoid a negative interference effect between the wing and winglets, three-dimensional numerical optimization was applied using the KK-AERO software package. The next part of this B.Sc. thesis deals with the method of propeller design. In addition, wind tunnel tests of the manufactured propeller are described. In the following part of the thesis, the calculations of flight mechanics are presented: the method of designing the horizontal and vertical tail, the check of spiral stability, the analysis of takeoff, the longitudinal stability and controllability of the aircraft, and the analysis of turn and phugoid motions. Toward the end of that chapter the envelope of load factors is described. The load distribution on the horizontal tail was calculated using the PANUKL program, which analyzes three-dimensional solids using lower-order panel methods (Hess method). Another part of the B.Sc. thesis deals with the methodology of designing the beam and torque box, based on the publication by Wiesław Stafiej "Obliczenia stosowane przy projektowaniu szybowców". In the next part, the technological process of making the wing, winglets, horizontal tail and propellers is described. The last part of this thesis concerns the flight tests performed with a data logger and the interpretation of the results.
Diploma type
Engineer's / Bachelor of Science
Diploma type
Engineer's thesis
Author
Dawid Mleczko, Faculty of Power and Aeronautical Engineering (FPAE)
Title in Polish
Studium samolotu bezzałogowego klasy "Regular" do udziału w zawodach SAE AeroDesign 2008
Supervisor
Cezary Galiński, The Institute of Aeronautics and Applied Mechanics (FPAE/IAAM), Faculty of Power and Aeronautical Engineering (FPAE)
Certifying unit
Faculty of Power and Aeronautical Engineering (FPAE)
Affiliation unit
The Institute of Aeronautics and Applied Mechanics (FPAE/IAAM)
Study subject / specialization
Lotnictwo i Kosmonautyka
Language
(pl) Polish
Status
Finished
Defense Date
26-02-2009
Issue date (year)
2009
Pages
152
Internal identifier
MEL; PD-756
Reviewers
Cezary Galiński, The Institute of Aeronautics and Applied Mechanics (FPAE/IAAM), Faculty of Power and Aeronautical Engineering (FPAE); Tomasz Goetzendorf-Grabowski, The Institute of Aeronautics and Applied Mechanics (FPAE/IAAM), Faculty of Power and Aeronautical Engineering (FPAE)
Keywords in Polish
samoloty bezzałogowe, regulatory, mechanika lotu, optymalizacja, profile lotnicze, winglet, śmigło, zawody AeroDesign
Keywords in English
xxx
Abstract in Polish
urn:pw-repo:WUT2fcaca8ea23040e1a811d34f033e97fc |
Hi David (aka Morpheus2485),
Welcome to Appropedia, thank you for your excellent contributions so far. Your mission is fantastic and very in sync with Appropedia. I am not sure if you know any of the other Appropedia people, but the admin most dedicated to the pursuit of public domain works is Chriswaterguy. Please consider dropping a message on his discussion page to discuss some of your ideas.
In addition, if you are looking for places to go do development work as an engineering student, feel free to ask me some questions and let me know what you are looking for. I consult and have contacts with many projects that may fit your passion.
Thank you, --Lonny 00:16, 26 January 2008 (PST)
## Welcome templates
Hi David,
Thanks for taking part in welcoming newcomers. I'm not sure if you're aware, but you can use the {{Welcome}} template - it's best to "substitute" rather than transclude, though: just enter '''{{subst:Welcome|yourusername}}'''
I'm planning to work more on those porting pages within the next few days, and will let you know. --Chriswaterguy · talk 01:45, 28 January 2008 (PST)
## Welcome!
Hi David.reber,
Welcome to the Appropedia wiki. Please make yourself at home! If you need a general wiki-tutorial, Wikieducator has some excellent ones.
If you have a particular interest or project in mind, go ahead and start it! Feel free to leave me a note on my talk page if you have further questions, need help finding your way around, have a cool idea for a project, or just want to chat. You can also call, text, or email me anytime; contact information is on my user page.
Hi, david, brian white (gaiatechnician) here Could you check my user gaiatechnician page on utube? I
have a compound parabolic solar cooker there and it is working really well and a new way to do optics experiments. really cheap! sorry for bad ettiquette brian
## Requesting permission
Did I point this page out already? Appropedia:Requesting permission. I think Curt has edited it since I last looked, and I think it's much improved - clear and to the point.
Making progress on the porting from PDF stuff now - it's looking very promising! --Chriswaterguy · talk 05:03, 29 January 2008 (PST)
Hi David - you wrote:
"Am I supposed to reply here? or on my talk page? does it matter?"
You can do either - but if you reply on your own page, it's best to tell the person on their own talk page.
"I am working through my list of authors, trying to contact them, but they all published in the 60's and 70's and they are very difficult to find."
That's a real challenge - do your best, keep track of what you've done (e.g. use labels or folders to keep track of emails you've sent and received) and if there's no response, we'll see if we can think of other approaches. Maybe we could make a list of those we're trying to find, and send emails out to our contacts and relevant university departments and organizations.
Keep up the good work! --Chriswaterguy · talk 18:55, 30 January 2008 (PST)
## What's the awareness wiki?
Responded to your question at Talk:Interwiki_map#awareness_wiki_dead. --Chriswaterguy · talk 15:14, 30 January 2008 (PST)
## Test Category?
Hi David,
Good to see you here again. Should I delete Category:Test? I noticed that Appropedia groups is categorized under that category. Thank you for your great input here. --Lonny 14:18, 29 March 2008 (PDT)
PS We miss you on the welcoming committee, where we have a short back log of unwelcomed new users.
Hi, Thank you for your welcome message and suggestions. Thanks for offering to help.
## Deletion & AAG starting point
I assumed you wanted this whole thing deleted (and not just the talk page):
[[User:Chriswaterguy Chriswaterguy] ([User_talk:Chriswaterguy Talk]] | Special:Contributions/Chriswaterguy Contributions | Special:Blockip/Chriswaterguy block) deleted "Heat powered LED lighting"
{{delete}} is another way of tagging for deletion. Thanks.
I've seen your recent work, and I'm wordering where I should direct people who are interested in similar things - will we use the Category:Appropedia Action Groups page as the "meeting place" and starting point, or is there somewhere else? --Chriswaterguy 19:59, 12 April 2008 (PDT)
Thanks for deleting that page. I have been using Appropedia Action Groups as a start page. Do you think moving it to a category page would be beneficial? Also, the links on that page sketch out how we might organize the information. I would seriously like to hear your advice on how to better organize the content. David.reber 20:15, 12 April 2008 (PDT)
Re moving it to a category page: We've been tossing around the Topic categories idea for a while, and occasionally coming back to it. It's so long that I've almost forgotten what my concerns were, but I think we should get this resolved on a policy level, so I'll do some more work on finding a solution to this.
Re organize the content on AAG... will look at it shortly. --Chriswaterguy 02:59, 17 April 2008 (PDT)
Category talk:Suggested projects #Content requests --Chriswaterguy 01:47, 17 April 2008 (PDT)
## AAG available project
I like {{AAG available project}}. Do we want to make that just for volunteer positions/projects? Anyway no rush - I'm sure we'll evolve a set of more specific templates as more pages use them, but this is a great start. --Chriswaterguy 06:55, 22 April 2008 (PDT)
yeah, it was just just a first start template, I think there is a lot that could improve it and possibilities for different templates and we'll move in those directions when we have the time and the need. --David.reber 08:52, 22 April 2008 (PDT)
## Event details
Re: event details at Engineers Without Borders San Francisco State University - without looking hard, the reader won't know which Friday is referred to - whether the notice is new or old.
Now that the topic is raised, this is a good thing to think about. I've posted my thoughts at Appropedia talk:Village pump#Event details. --Chriswaterguy 09:33, 24 April 2008 (PDT)
## Steps Towards and Through Graduation
Hi David,
Responding to your note on my talk page. Sounds like an exciting time for you. We should talk via the phone for this. Try me on my office phone (707) 826-3649. In the mean time, you should also pursue a summer internship through your department. Do you speak languages besides English?
-Lonny 09:31, 9 May 2008 (PDT)
## OSN Con
David
I registered on everb - thanks -- fyi it has your email as [email protected] -- also no link back to the conf website....Cheers --Joshua 09:38, 2 September 2008 (PDT)
Hi Joshua,
Thanks. I fixed these issues after you left this note. Just saw the note again and thought I'd mention it. --Lonny 23:26, 14 September 2008 (PDT)
## Nicaragua
Hi David, thanks for the welcome and I would love to learn more about the summer project in Nicaragua. I have looked over some of the material and it seems like a great humanitarian project. --Steven M. 16:25, 18 March 2009 (UTC)
Hi David, I read through the project page and the other tabs. What really stuck me was the whole idea of BRIDGE. I subscribed to the announcements and discussion groups but they seem to be the same. Anyhow, I'm going to go over the project again and again. I would love to contribute in any way I can. There is a lot of interesting information, especially One Brick, and I hope to learn as much as possible on how this whole prossess has worked. Thank you ! --Steven M. 17:20, 20 March 2009 (UTC)
## Medical waste incinerator
Hi David, perhaps you're intrested in my post at Appropedia_talk:Village_pump#AT_incinerator Having an all-purpose incinerator would make things allot easier to do, and reduce costs. All the best, KVDP 14:54, 31 March 2010 (UTC)
KVDP 07:13, 8 May 2012 (PDT)
## Kilele Junior school, Kenya
Hi David My Name is John Keya Founder of Kilele Junior school. Under the NGO,in Kenya. we have a school, Kilele Junior School.
Most of our children in the school come from less privileged families with a bigger population coming from Southern Sudan Currently we have 100 children I write to inquire whether through this forum you can help us get international volunteers/interns who can add value to our school. Many thanks and regards john Keya
kilelejfoundationgmailcom —The preceding unsigned comment was added by Kilele, 13 April 2011
## Your 2021 impact stats are right here!
Hi David! We thought you may want know that your top performing pages so far are:
1. Improved solid biofuel stoves (12 384 page views) Update!
2. Help:Editing (9 131 page views) Update!
3. Mass Transit Folding bike seat (1 628 page views) Update!
4. Emergency water quality field testing (1 623 page views) Update!
5. Agroinnovations Bolivia (887 page views) Update!
Overall, your impact has been of 23,918 page views, woozaa!
Also, your user page has received 831 visits! People are interested in knowing more about you, edit your user page to tell the world what you've been up to.
Thanks for your contributions and for making Appropedia great, have a merry green Christmas!!
## Algebra 1: Common Core (15th Edition)
Published by Prentice Hall
# Chapter 3 - Solving Inequalities - 3-3 Solving Inequalities Using Multiplication or Division - Practice and Problem-Solving Exercises - Page 183: 70
#### Answer
$q \lt 5$
#### Work Step by Step
In order to cancel out subtraction, we add 5 to both sides and obtain that $q \lt 5$.
# Converting continuous signal to digital signal
Posted 10 months ago | 1476 Views | 3 Replies | 2 Total Likes
Hi, I am trying to create a neuron model that fires action potentials (spikes) when a certain threshold is reached. The spikes are slowly reduced and eventually reaches noise level in the refractory period and again fires after the refractory period. The initial input is a continuous signal. Thanks
3 Replies
Sort By:
Posted 10 months ago
Hi Lakshmi, You can use events (if, when statements) to convert your continuous signal to discrete signals. A simple model that mimics the first part of your case is as follows:

    model Model
      Real contSignal;
      Real outputSignal(start = 1);
      parameter Real slope = 2;
      parameter Real threshold = 5;
    equation
      der(contSignal) = slope;
      if contSignal < threshold then
        der(outputSignal) = 0;
      else
        der(outputSignal) = -outputSignal;
      end if;
    end Model;
I think you want to do this. 'Slope' was just a parameter I used to define the continuous signal.

    model EventGen
      Modelica.Blocks.Sources.ExpSine source(amplitude = 10, freqHz = 1, damping = 1);
      parameter Real threshold = 1;
      Real outputSignal(start = 0);
    equation
      der(outputSignal) = -outputSignal;
      when source.y < threshold then
        reinit(outputSignal, 1);
      end when;
    end EventGen;
# Compile small piece of code without \begin{document} with modern tools like vscode
I would like to compile piece of LaTeX code without \begin{document} and \end{document} (the preamble is, of course, fixed). The reason is that sometimes I only write small pieces of text and don't want to include the basic lines every time (not even with \input, etc.). For now the only way I know is by running shell code like the following one.
DIR="$( cd "$( dirname "$0" )" && pwd )"
cd $DIR
xelatex -interaction=batchmode \\documentclass{einfart}\\usepackage{ProjLib}\\begin{document}\\input{<filename>}\\end{document}
Is there a better way to do so? For example, I don't know if there is a compatible way with latexmk. Also, is it possible to achieve a similar effect with Visual Studio Code's extension "LaTeX Workshop"?
(I do realize that omitting the \begin{document} etc. is probably not a good idea. I'm just exploring the possibilities for the workflow.)
• you can't omit the begin documnt from the latex run but your edior should be able to do that for you, auctex in emacs has been able to run latex on any selected region since the 1980's. In the background it constructs a temporary file with the document preamble and the selected region wrapprd in begin document end document. Mar 22, 2022 at 0:03
• I don't really see what's wrong with using a simple shell script for this. Mar 22, 2022 at 1:03
Just for reference, below is the shell script I'm currently using for compile files with extension .piece.tex, where .piece is a mark to tell that this is not a complete LaTeX code file.
DIR="$( cd "$( dirname "$0" )" && pwd )"
cd $DIR
mkdir -p .aux
filename=$(ls -t *.piece.tex | head -n1)
echo "\\documentclass[use boldface, theorem in new line, simple name, theorem numbering = *]{einfart}" > .aux/${filename%.piece.tex}.temp.tex
echo "\\usepackage{ProjLib}" >> .aux/${filename%.piece.tex}.temp.tex
echo "\\usepackage{tikz-cd}" >> .aux/${filename%.piece.tex}.temp.tex
echo "\\\\begin{document}" >> .aux/${filename%.piece.tex}.temp.tex
echo "\\input{$filename}" >> .aux/${filename%.piece.tex}.temp.tex
echo "\\end{document}" >> .aux/${filename%.piece.tex}.temp.tex
latexmk -xelatex -silent -output-directory=.aux -jobname=${filename%.piece.tex} .aux/${filename%.piece.tex}.temp.tex
mv .aux/${filename%.piece.tex}.pdf ${filename%.piece.tex}.pdf
For example, if you have a Document.piece.tex with the content:
This is a test document.
\begin{theorem}\label{thm1}
\blindtext
\end{theorem}
\begin{theorem}\label{thm2}
\blindtext
\end{theorem}
\begin{definition}\label{def1}
\blindtext
\end{definition}
\cref{thm1,def1,thm2}
\dnf
Then after running the script, you shall get a Document.pdf looks like:
Some explanations:
The first two lines change the directory to the current one containing the shell script:
DIR="$( cd "$( dirname "$0" )" && pwd )"
cd $DIR
Then make a .aux folder for storing auxiliary files.
mkdir -p .aux
The script shall now find the latest .piece.tex file for compilation.
filename=$(ls -t *.piece.tex | head -n1)
It will produce a same-named .temp.tex file in the .aux folder for latexmk to work on.
echo "\\documentclass[use boldface, theorem in new line, simple name, theorem numbering = *]{einfart}" > .aux/${filename%.piece.tex}.temp.tex
echo "\\usepackage{ProjLib}" >> .aux/${filename%.piece.tex}.temp.tex
echo "\\usepackage{tikz-cd}" >> .aux/${filename%.piece.tex}.temp.tex
echo "\\\\begin{document}" >> .aux/${filename%.piece.tex}.temp.tex
echo "\\input{$filename}" >> .aux/${filename%.piece.tex}.temp.tex
echo "\\end{document}" >> .aux/${filename%.piece.tex}.temp.tex
And finally, compile with latexmk and move the .pdf file to the current folder.
latexmk -xelatex -silent -output-directory=.aux -jobname=${filename%.piece.tex} .aux/${filename%.piece.tex}.temp.tex
mv .aux/${filename%.piece.tex}.pdf ${filename%.piece.tex}.pdf
## Solution
#### Approach #1 (Loop and Flip) [Accepted]
Algorithm
The solution is straight-forward. We check each of the bits of the number. If the bit is , we add one to the number of -bits.
We can check the bit of a number using a bit mask. We start with a mask , because the binary representation of is,
Clearly, a logical AND between any number and the mask gives us the least significant bit of this number. To check the next bit, we shift the mask to the left by one.
And so on.
Java
public int hammingWeight(int n) {
    int bits = 0;
    int mask = 1;
    for (int i = 0; i < 32; i++) {
        if ((n & mask) != 0) {
            bits++;
        }
        mask <<= 1;   // move the mask to the next bit
    }
    return bits;
}
Complexity Analysis
The run time depends on the number of bits in n. Because n in this piece of code is a 32-bit integer, the loop runs 32 times and the time complexity is O(1).
The space complexity is O(1), since no additional space is allocated.
#### Approach #2 (Bit Manipulation Trick) [Accepted]
Algorithm
We can make the previous algorithm simpler and a little faster. Instead of checking every bit of the number, we repeatedly flip the least-significant 1-bit of the number to 0, and add 1 to the sum. As soon as the number becomes 0, we know that it does not have any more 1-bits, and we return the sum.
The key idea here is to realize that for any number n, doing a bit-wise AND of n and n-1 flips the least-significant 1-bit in n to 0. Why? Consider the binary representations of n and n-1.
Figure 1. AND-ing n and n-1 flips the least-significant 1-bit to 0.
In the binary representation, the least significant 1-bit in n always corresponds to a 0-bit in n-1. Therefore, AND-ing the two numbers n and n-1 always flips the least significant 1-bit in n to 0, and keeps all other bits the same.
Using this trick, the code becomes very simple.
Java
public int hammingWeight(int n) {
    int sum = 0;
    while (n != 0) {
        sum++;
        n &= (n - 1);   // clear the least-significant 1-bit
    }
    return sum;
}
Complexity Analysis
The run time depends on the number of 1-bits in n. In the worst case, all bits in n are 1-bits. In case of a 32-bit integer, the run time is O(1).
The space complexity is O(1), since no additional space is allocated.
Analysis written by: @noran. |
# Prove that a function is bijective
So, the problem sounds like this. You have two bijective functions $f:\mathbb{N} \to A$, $g:\mathbb{N} \to B$. We define the function $h:\mathbb{N} \to A \cup B$, defined as: $$h(n) = \begin{cases} f(n), & \text{if n is even} \\ g(n), & \text{if n is odd} \\ \end{cases}$$
Is $h$ bijective? How do you prove this? I know that you need to prove that $h$ is 1-1 and onto. How do you do that? If I attempt to write somtehing I lose myself on the way. Can somebody show me how it's done?
• Try splitting it into cases for two integers $m$ and $n$: Both even, $m$ even and $n$ odd, both odd. Then check your definitions of 1-1 (injective) and onto (surjective) for each of the cases. – Zach Jun 23 '14 at 15:48
• Another good reason why your function $h$ may not be bijective is that you define $h$ as follows: $g(1),f(2),g(3),f(4),g(5),\ldots$ - in other words, you skip many values for both $g$ and $f$. – mathse Jun 23 '14 at 18:20
The OP asked another question, namely, how to construct a bijective function $h:\mathbb{N}\rightarrow A\cup B$ from two bijective functions $f:\mathbb{N}\rightarrow A$ and $g:\mathbb{N}\rightarrow B$. To do so, let $h(1)=f(1)$ and let
$$h(n+1)= g(k)\text{ for smallest k such that } g(k) \notin \{h(1),\ldots,h(n)\}$$
if $h(n)=f(m)$ for some $m$ and
$$h(n+1)=f(k)\text{ for smallest f(k) such that } f(k) \notin \{h(1),\ldots,h(n)\}$$
if $h(n)=g(m)$ for some $m$. Then $h$ is injective and surjective.
• You mean $h(n+1) = g(k)$, where $k$ is the smallest natural number such that $g(k) \not\in \{h(1),\ldots,h(n)\}$ (if $h(n) = f(m)$ for some $m$)? After all, the sets $A$ and $B$ need not be well-ordered themselves. – Hugh Denoncourt Jun 24 '14 at 0:05
• Absolutely, thanks for catching that. – mathse Jun 24 '14 at 6:18
• I looked over what you did and it seems pretty good for me. Thanks! – Bardo Jun 24 '14 at 9:46
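For what it's worth, here is a small computational sketch of this zig-zag construction (finite prefixes only, with made-up enumerations $f$ and $g$); it alternates between the two enumerations and skips values that already appeared:

def interleave(f, g, n_terms):
    # alternate between the enumerations f and g, always taking the
    # not-yet-used value with the smallest index
    h, used = [], set()
    take_from_g, idx_f, idx_g = False, 1, 1   # h(1) = f(1), then alternate
    for _ in range(n_terms):
        if take_from_g:
            while g(idx_g) in used:
                idx_g += 1
            val = g(idx_g)
        else:
            while f(idx_f) in used:
                idx_f += 1
            val = f(idx_f)
        h.append(val)
        used.add(val)
        take_from_g = not take_from_g
    return h

print(interleave(lambda n: 2 * n, lambda n: 3 * n, 10))  # A = even numbers, B = multiples of 3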
$h$ is in general not bijective. As a counterexample, let $f:\mathbb{N}\rightarrow\mathbb{N}$ with $f(x)=x$ (identity function) and let $g:\mathbb{N}\rightarrow\mathbb{N}$ with $g(x)=x\pm 1$. Let $g(x)=x+1$ if $x$ is odd and let $g(x)=x-1$ if $x$ is even. Then $g$ looks as follows $(2,1,4,3,6,5,8,7,...)$ for $(1,2,3,4,5,6,7,8,...)$. Clearly, both $f$ and $g$ are bijective (why?). But if you let $m=1$ and $n=2$ then $h(1)=g(1)=2=f(2)=h(2)$, so $h$ is not injective.
But I would believe that you forgot to say that $A$ and $B$ are supposed to be disjoint.
• Well A and B are not supposed to be disjoint. Thank you for your explanation. Then, how would you find a bijective function $h:\mathbb{N} \to A \cup B$ ? This is what I was trying to do and I thougt that the function h defiend above would be a good candidate.. – Bardo Jun 23 '14 at 16:46
• Well, for example, if $B\supseteq A$ (or vice versa), why not let $h(x)=g(x)$? – mathse Jun 23 '14 at 17:39
• @Bardo Below, I have given a more general solution to your question. I think it should be correct, but if you have doubts, why not open another thread? – mathse Jun 23 '14 at 17:52 |
17.6 Suppose that the spread between the yield on a three-year riskless zero-coupon bond and a...
17.6 Suppose that the spread between the yield on a three-year riskless zero-coupon bond and a three-year zero-coupon bond issued by a corporation is 120 basis points. By how much do standard option pricing models such as Black–Scholes–Merton overstate the value of a three-year option sold by the corporation? Assume there is only this one transaction between the corporation and its counterparty and no collateral is posted.
# Describing span(x) geometrically
1. Mar 24, 2013
### Cottontails
1. The problem statement, all variables and given/known data
There is a vector space with real entries, in ℝ3 with the subset X = $$\begin{pmatrix} 2\\ -1\\ -3 \end{pmatrix}\\ , \begin{pmatrix} 4\\ 0\\ 1 \end{pmatrix}\\ , \begin{pmatrix} 0\\ 2\\ 7 \end{pmatrix}$$
and you have to describe span(x) geometrically.
2. Relevant equations
In answering this question, I found the span of the subset through $$a \begin{pmatrix} 2\\ -1\\ -3 \end{pmatrix}\\ + b \begin{pmatrix} 4\\ 0\\ 1 \end{pmatrix}\\ + c\begin{pmatrix} 0\\ 2\\ 7 \end{pmatrix}= \begin{pmatrix} x\\ y\\ z \end{pmatrix}$$
3. The attempt at a solution
This formed the matrix:
$$\begin{pmatrix} 2 & 4 & 0 & | & x\\ -1 & 0 & 2 & | & y\\ -3 & 1 & 7 & | & z \end{pmatrix}$$
Using row operations, I then made the matrix into reduced row echelon form and it was non-trivial with the final result being:
$$\begin{pmatrix} 1 & 2 & 0 & | & x/2\\ 0 & 1 & 1 & | & x/4+y/2\\ 0 & 0 & 0 & | & -x/4-7y/2+z \end{pmatrix}$$
So, we can then interpret the span geometrically as the plane in ℝ3 with the equation $-x/4 - 7y/2 + z = 0$
Is this right?
Last edited: Mar 24, 2013
2. Mar 24, 2013
### vela
Staff Emeritus
You made a mistake somewhere. Those three vectors are linearly independent, so you shouldn't end up with a row of zeros. But if that were the correct matrix, then yes, the span would be the plane you said.
3. Mar 24, 2013
### Cottontails
Sorry, I didn't realise but I put the first entry as 3 instead of -3 and now I have corrected it. Is that correct now?
4. Mar 24, 2013
### vela
Staff Emeritus
Yes, that matches what I get. |
# Calculating p-value when tcalc, and df are given. Please help.
In a two-tailed t-test for means equality with df= 22, tcalc=3.511, and .001 < p<.01. How to calcultate exact p-value?
There was no table or anything else attached to this problem, and I can't find any examples in the book that would explain how to solve it when only this information is given. Please help.
-
## 1 Answer
Use R or something similar as in
> 2 * pt(abs(3.511), df=22, lower.tail=FALSE)
[1] 0.001971369
-
Could you please explain it? – juknee Feb 25 '14 at 23:58
Two-tailed (so double) the probability of being more extreme (so above) $3.511$ if there are $22$ degrees of freedom – Henry Feb 26 '14 at 0:07 |
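If R isn't at hand, the same number can be reproduced with Python/SciPy (a small sketch):

from scipy import stats

t_calc, df = 3.511, 22
p_value = 2 * stats.t.sf(abs(t_calc), df)   # two-tailed: double the upper-tail area
print(p_value)                              # about 0.00197, matching the R output above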
## Supporting a spectrum from whole program to separate compilation to aid in efficient program generation
There are certainly whole program compilers with the aim of making higher level languages compile far more efficient runtime executables. But as I currently understand these compilers, their practical usefulness for large scale program development is limited due to efficiency of the compilation process itself.
So, I was wondering if any languages - more specifically, the types of higher level languages that feature language abstractions that might benefit most whole program compilation - supported a compilation model and associated language abstractions that tried to encompass a wider spectrum of optional efficiency oriented language abstractions and features between whole program model compilation and separate compilation in the name of runtime efficiency.
As for "whole program compilation," what I have in mind instead is some kind of limited "library" - or "package" or "unit" (etc.), let's use "library" for now - abstraction comprised of multiple source files that are subject to "whole library compilation." This would allow programming teams to decide when to apply a more limited scope of "whole program compilation" just to performance critical libraries in their programs.
Between these "libraries" (however they are compiled), we have "separate compilation" made safe via traditional module "signatures," or unit "interfaces" or whatever (pick your lingo and preferred module header styled feature).
But additionally, within the "separate compilation model," we might also support other language features that assist the compiler in generating more efficient code. Some low hanging fruit might be: annotations and/or compiler driven function inlining, annotation or compiler driven loop unrolling, C++ or D style template data types and functions (as an alternative to "normal" generic, parametric polymorphic functions and data types), C (and Smalltalk, interestingly)-styled variable length data structures, full blown Scheme styled macros (no, not always or even primarily an efficiency tool, but still...), annotations for "destructured" function argument data types (potentially avoiding memory allocation for passing many function arguments), and so on and so forth.
One could write entire programs in the language without using these features, but one could also write entire libraries (say, just for example, a performance oriented OpenGL or encryption library) using these directives and alternative language level abstractions - and then pull out all the stops by subjecting the "library" to whole program compilation. Aside from segregation of the program into discrete libraries (yes, not always an easy task in the general case), most of these efficiency oriented features could be utilized incrementally in modifying a program in response to profiling.
Skipping the not-so-high-level C++ (simple macros, templates), are there any higher level languages that support such a "spectrum of efficiency oriented language abstractions and features." I'm particularly interested in the concept of a more limited scope for "whole program compilation" via some kind of more selective "library" abstraction.
I'd welcome any wisdom on efficacy, usability and implementation challenges.
### Partial Evaluation
I certainly don't believe profile directed control is an answer.
I'm more inclined to think partial evaluation is the answer. You have a program with general types such as "matrix". Then you specialise to "dense matrix". Then to "symmetric matrix". Each type refinement produces optimisations.
The final refinement produces the ultimate optimisation: you refine to the exact data and get a result.
What is important here is not to pick the best points to do incremental specialisations for solving one problem, but rather to consider the interesting sets of problems and a whole hierarchy of partial specialisations: we want to solve many problems quickly not just one.
So, the cost of solutions should be directed by the structure of the data set.
I hate to say this but Object Oriented systems today have the best support for this. You can write programs that solve general problems and improve the performance by use of derived classes.
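A minimal sketch of the idea, using exponentiation rather than matrices for brevity, with the specialisations written by hand rather than generated by an actual partial evaluator:

```ocaml
(* Fully general: works for any exponent n and base x. *)
let rec power n x =
  if n = 0 then 1.0 else x *. power (n - 1) x

(* Specialised to n = 3: the recursion over n has been evaluated away,
   leaving only the residual computation over x. *)
let power3 x = x *. x *. x

(* The "final refinement": specialise to the exact data, and all that
   remains is the answer itself. *)
let answer = power3 2.0   (* = 8.0 *)
```

The hierarchy of partial specialisations described above then amounts to choosing which of these levels to generate and cache for the interesting family of problems, rather than re-deriving everything for a single input.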
### Interfaces are already a spectrum
The very point of "separate compilation" is to force you to design an interface language to describe which aspects of other modules you rely on. A perfect interface language is one where you can always abstract a concrete definition into an abstract one, you can always describe what was being relied upon, and no more.
ML languages have reasonable, but perfectible, interface languages. I claim that an excellent interface language will already encompass your "spectrum" from separate compilation to whole-program compilation... while still guaranteeing separate compilation by definition.
When you use a value M.foo from a concrete module M, and suddenly you decide to abstract M under an interface, what will you describe about foo?
- maybe you and the compiler only relied on the fact that M.foo is an integer; then foo : int should suffice in the interface description
- maybe your language has some notion of constant analysis that was able to determine that foo is a constant value, and use it for constant propagation; then you should be able to give this constant value in the interface
- maybe foo is actually an elaborate definition that it is important to have inlined for efficiency (e.g. because it usually exposes worker/wrapper-like optimization opportunities); then you should be able to give this elaborate definition through the interface
An interface language for a module should be able, in the limit case, to simply give the concrete definition of the value it is characterizing. You get a "not really abstract" interface that is usable by the compiler for inlining, static reasoning, etc.
Of course, the less abstract an interface is, the more fragile the abstraction boundary; if you published the concrete definition through the interface, and you now change it, you probably need to recompile the depending module that compiled against the old definition (and, on the user side, fix the depending code that relied on this old definition). You may want to show different interfaces to your fellow coworkers and to the optimizing compilers, but in the end you'll always be encouraged to sacrifice some performance for more flexibility and less hard-binding. Notice that you're able to make this choice locally, on a value-per-value basis, as a natural property of a rich interface language.
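For types, ML signatures already realize part of this spectrum through abstract versus manifest declarations; standard OCaml simply does not extend the same choice to value definitions, which is the gap described above. A small sketch (the module and signature names are invented for illustration):

```ocaml
(* The fully abstract end of the spectrum: clients, and the compiler,
   learn only that some type t exists with these operations. *)
module type COUNTER_ABSTRACT = sig
  type t
  val zero : t
  val incr : t -> t
end

(* A less abstract interface: the manifest equation "t = int" publishes
   the concrete representation, letting the compiler reason and inline
   across the module boundary, at the cost of a more fragile abstraction
   (changing the representation now breaks clients). *)
module type COUNTER_CONCRETE = sig
  type t = int
  val zero : t
  val incr : t -> t
end

module Counter : COUNTER_CONCRETE = struct
  type t = int
  let zero = 0
  let incr n = n + 1
end
```

A "perfect" interface language in the sense above would offer the same abstract-to-concrete dial for values such as foo, not just for types.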
### Also for sharing
The very point of "separate compilation" is to force you to design an interface language to describe which aspects of other modules you rely on. A perfect interface language is one where you can always abstract a concrete definition into an abstract one, you can always describe what was being relied upon, and no more.
That's an interesting view, mainly because it never occurred to me. So far as I can tell, the historical point of separate compilation was to deal with limited compiler and CPU performance and limited available memory. It simply wasn't feasible to do whole-program compilation on large bodies of code in any human-tolerable compile/edit/debug cycle.
Nowadays that remains true for a few very large programs, mainly because certain optimizations are exponential in code size, but the primary use remaining seems to be shared libraries, for which the interface needs to be concrete rather than abstract.
One can, of course, choose to compile down to some form of bytecode and do code generation at runtime. Whether that is viable depends a lot on how robust the resulting binary execution needs to be. A runtime code generator is a lot of code, and therefore a lot of potential bugs.
### You're right that separate
You're right that separate compilation was mostly implemented as a means of dealing with CPU and memory limitations. OTOH, module interfaces as we understand them are mostly a means of dealing with namespace management. We consider namespace management to be for our own benefit now (and it is mainly for our own benefit -- *now*), but originally file-level scope was a side effect of using headers to speed up the linker.
There is in principle no problem with deriving interface information from the source code itself. You can do that with separate compilation, because the compiler will discover all the interface points (definitions that might be referred to from elsewhere and references not resolved by definitions in scope in the current file) while it does compilation. The compiler can write all this information to a file and the linker can then use it (along with the object code file which contains the locations that the interface file refers to) to put together the project.
Poof, under this scenario separate compilation works. But that assumes you have a global namespace, or else you have to introduce an ugly hack like using the filename that source code appears in when referring to its definitions. And global namespaces are sort of a problem; now because they are confusing, but at the time more because they were potentially big.
Header files as we understand them address namespace management; on one hand they are ways of having seven different variables named 'list' in different modules and they don't interfere with each other. On the other hand they are a promise that we can limit the amount the linker has to know. The header file contains *ONLY* those symbols we want to have resolved by definitions in the relevant file.
Honestly, I think that limiting the size of the data the linker had to work with was probably more important early on; remember some of those early languages had compilers implemented as seven or eight different binaries each of which did one "pass" and wrote intermediate results for the next program to use, because a single program that did everything would have been too big to fit in memory (or even in the addressable space in some cases). Remember that many of them existed on systems where you couldn't have more than three files open at a time, or where the filesystem was a tape drive and thus random-access took time linear in file size. The linker, in this view, was just one last "pass" in the process of compilation, and was the pass most sensitive to the whole project size because it was the only one that had to deal with all of the source files.
The linking job, which is bottlenecked by file I/O and thus slower than almost everything else a computer can do, scales as the product of the amount of object code you have to work on and the number of times bigger the symbol table is than the symbol table you can fit in memory -- which is to say, roughly as the square of the size of the total source code in our global-namespace scenario.
Under those circumstances it really benefits you to get the whole symbol table to fit into memory, so you can hold it there while you process each of the other files ONCE. So headers, by implicitly declaring that the linker didn't need to worry about any of the *other* definitions in the file, limited the job of the linker and could result in reducing link times by 80% or more in adverse circumstances. And that was more important, back when programs were relatively small and simple, than the organizational benefits of namespace management.
Ray Dillinger
# The Floyd-Warshall Algorithm for Shortest Paths
Title: The Floyd-Warshall Algorithm for Shortest Paths
Authors: Simon Wimmer and Peter Lammich
Submission date: 2017-05-08

Abstract: The Floyd-Warshall algorithm [Flo62, Roy59, War62] is a classic dynamic programming algorithm to compute the length of all shortest paths between any two vertices in a graph (i.e. to solve the all-pairs shortest path problem, or APSP for short). Given a representation of the graph as a matrix of weights M, it computes another matrix M' which represents a graph with the same path lengths and contains the length of the shortest path between any two vertices i and j. This is only possible if the graph does not contain any negative cycles; if it does, however, the Floyd-Warshall algorithm will detect the situation by calculating a negative diagonal entry. This entry includes a formalization of the algorithm and of these key properties. The algorithm is refined to an efficient imperative version using the Imperative Refinement Framework.

BibTeX:
@article{Floyd_Warshall-AFP,
  author  = {Simon Wimmer and Peter Lammich},
  title   = {The Floyd-Warshall Algorithm for Shortest Paths},
  journal = {Archive of Formal Proofs},
  month   = may,
  year    = 2017,
  note    = {\url{https://isa-afp.org/entries/Floyd_Warshall.html}, Formal proof development},
  ISSN    = {2150-914x},
}

License: BSD License
Depends on: Refine_Imperative_HOL
Status: [ok]
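For orientation, here is a plain executable sketch of the algorithm the entry formalizes (ordinary OCaml rather than the Isabelle/HOL development itself). Infinity encodes "no edge", the diagonal of the input is assumed to be zero, and a negative diagonal entry in the result signals a negative cycle:

```ocaml
(* Floyd-Warshall over an adjacency matrix of weights.
   m.(i).(j) is the weight of edge i -> j, or infinity if absent;
   m.(i).(i) is assumed to be 0.0. *)
let floyd_warshall (m : float array array) : float array array =
  let n = Array.length m in
  let d = Array.map Array.copy m in
  for k = 0 to n - 1 do
    for i = 0 to n - 1 do
      for j = 0 to n - 1 do
        if d.(i).(k) +. d.(k).(j) < d.(i).(j) then
          d.(i).(j) <- d.(i).(k) +. d.(k).(j)
      done
    done
  done;
  d

(* A negative entry on the diagonal means some vertex reaches itself
   with negative total weight, i.e. the graph has a negative cycle. *)
let has_negative_cycle (d : float array array) : bool =
  let n = Array.length d in
  let rec loop i = i < n && (d.(i).(i) < 0.0 || loop (i + 1)) in
  loop 0
```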
# Cigarette taxes as cigarette policy
Franklin E Zimring, William Nelson

Earl Warren Legal Institute, University of California at Berkeley, California, USA

Correspondence to: Franklin E Zimring, Boalt Hall, University of California at Berkeley, Berkeley, CA 94720, USA
## Introduction
The taxation of tobacco products is universal in the modern state.1 Wherever tobacco products are consumed, they are taxed. Further, while taxation policy varies widely from state to state, the taxation of tobacco products is almost always an important element of government policy toward tobacco and toward smoking. Tobacco taxation policy varies over time as well as cross-sectionally. For example, recent years have witnessed relatively sharp increases in tobacco taxes in Canada and major efforts to harmonise cigarette taxation policy in the European Community through substantial tax increases for low-tax nations.2,3 Since the autumn of 1993, a substantial increase in the per package cigarette tax has been proposed by the Clinton administration as a major funding vehicle for comprehensive health care reform in the United States. Cigarette tax policy also serves as a window into social attitudes about smoking, because cigarette taxation policies not infrequently reflect shifts in views about cigarettes and smokers.4
The aim of this paper is to provide a policy context for reviewing what is currently known about the effect of cigarette taxes on smoking and what needs to be determined to provide an adequate factual basis for informed policy. In the first part of the analysis we discuss three different purposes of cigarette taxation and the relationship between each of these and the level of taxation that would be regarded as optimally appropriate. In this section we also probe the relationship between various purposes of taxation and theories of justice.
In the second part of the paper we discuss current knowledge about the behavioural effects of cigarette taxation. In this section we examine the impact of tax levels on the prevalence and incidence of smoking, as well as the impact of taxation and price changes on the behaviour of particular target groups such as young people, low income groups, and smokers with some intention of ceasing to smoke. We also discuss the collateral effects of smoking taxation on consumer behaviour and family welfare.
In the third section we discuss the behavioural processes that result in higher prices reducing cigarette consumption. We distinguish four different mechanisms and suggest ways to assess the relative impact of each. A concluding section summarises the research tasks we recommend to assist in framing cigarette taxation policy.
## Three objectives of cigarette taxation
Government tax policy toward cigarettes can be intended to serve at least three objectives: revenue, efficiency, and deterrence.5,6 The generation of government revenue is the first purpose of taxation, both functionally and historically, in the modern state. Tobacco taxes, far from being an exception to this pattern, traditionally were classified as a “luxury” or “vice” tax, a category which is particularly susceptible to tax rates that are quite high in relation to total consumer cost of the product.7 Human vice, however defined, is a popular source of government revenue whenever the vice is not prohibited by the criminal law. The ideal behaviour from the standpoint of revenue-motivated taxation is one which is popular but not sacred. As long as recreational chemicals are regarded as affordable by the general population, a relatively high tax burden is borne with equanimity because the general social feeling is that these substances are not necessities of life. Such taxes on recreational chemicals can be socially justified whether or not the use of those chemicals imposes a social cost in excess of their untaxed price.
A second distinct goal of cigarette taxation is to raise the price of cigarettes to consumers to a level that fully reflects the social cost generated by their consumption.8–10 Under these circumstances, an efficient price means that cigarettes are purchased only by those for whom the net benefit of cigarette smoking is larger than the price of cigarettes, even when that price fully reflects the social cost of consumption. If balance between social cost and consumer price is the objective of the policy, the government does not wish to impose taxes unless external social costs exist, and the government would wish to cease taxing at the point where price reflects social cost.
One approach to the goal would be to set a tax so that the total revenue extracted would be equal to the total social cost generated. This can be characterised as an aggregate equality approach. A second approach would be to set the tax at the margin, so that the price paid by the smoker for the cigarette he least values equals the social cost of that additional cigarette. This kind of tax, called Pigovian taxation after AC Pigou (1962), will produce efficient consumption of cigarettes in that only the cigarettes worth their full marginal social cost to the smoker are consumed. The amount collected by such taxes may be greater than the total external cost of smoking, because the tax necessary to raise the price of the cigarette at the margin will be collected on all cigarettes smoked.11
A third distinctive objective of cigarette tax policy might be to discourage smoking.12,13 When government decides actively to discourage a behaviour, taxation is a tool that provides an additional reason not to purchase cigarettes with each increase in the level of taxation, until black market institutions have effectively nullified the relationship between the official tobacco tax and the effective cigarette price for most smokers.
These three different purposes of taxation implicate different definitions of what would be an optimal level of cigarette tax. They may also lead to different conceptions of what constitutes justice in tobacco taxation, although this is far from clear. The optimal level of a cigarette tax designed to produce efficiency is that which reflects the social cost of the cigarettes being smoked. Any higher tax is suboptimal because it discourages smoking among persons for whom benefits of cigarettes outweigh the costs, as shown by their willingness to pay a price that reflects the true social cost. Any lower tax would encourage smoking when the benefits to the smoker do not outweigh the total community cost.
This notion of efficiency as the optimum can be readily distinguished from revenue maximisation and deterrence rationales. If revenue is the objective, the ideal level of taxation is that which maximises the total amount of revenue the government realises. That this might discourage some smoking is a matter of indifference to government in a regime dominated by revenue maximisation concerns.
By contrast, a deterrence rationale suggests that the principal objective of taxation is to discourage smoking. A deterrence partisan rejects the whole notion of optimal levels of smoking as conceptually inappropriate to social policy toward an addictive drug. The supporter of a pure deterrence rationale to taxation would see no upper limit to the amount of tax on cigarettes as long as government is still in full control of supply of cigarettes to its citizens.
## Current knowledge on behavioural effects of cigarette taxation
The central feature of a cigarette tax as a cigarette policy is the fact that higher cigarette prices reduce both the number of smokers (smoking prevalence) and the number of cigarettes consumed (the incidence of smoking).14,21 No controversy surrounds the basic effect of price on demand for cigarettes – after all, the effect of price on quantity is called the first law of demand by economists, not subject to the scepticism of any reasonable observer. The key behavioural uncertainty is about the extent to which increases in price reduce demand for cigarettes, an issue that economists describe as the “elasticity” of demand for cigarettes. If the demand for cigarettes is highly elastic, then relatively small increases in price will significantly reduce demand. The conventional way elasticity is measured is with a number that represents the extent to which a given price rise or fall will affect demand. If a 10% price increase reduces demand by 10%, the elasticity of demand at this price is said to be −1.0 (−0.10/0.10 = −1.0, with the numerator of the fraction the percentage change in demand and the denominator the percentage change in price). If a 10% price increase produces a 20% reduction in demand, then the elasticity is −2.0 (−0.20/0.10 = −2.0).14,21 While the factors which make the demand for a particular product more or less elastic may operate over a wide spectrum of prices, a specific measure of elasticity is derived from the comparison between demand at two prices.
### Price elasticity and policy
There is no obvious relationship between elasticity of demand and the extent to which taxation is a desirable cigarette policy. As long as tax increases have some tendency to reduce demand, the tax can be increased to compensate for inelastic demand until the desired consumption pattern is achieved. From this perspective, the primary effect of low elasticity of demand is to increase greatly the revenue flow to government from a deterrent tax policy. But if tax increases generate negative effects that make them undesirable or are politically difficult to achieve, then relatively inelastic demand will limit the potential role of tax increases in policies intended to limit demand for cigarettes.
There are thus two situations where data about elasticity of demand are of direct importance in defining the appropriate role of increasing cigarette prices through taxation in a comprehensive tobacco policy. First, if there are political limits that constrain the amount by which prices and taxes can rise, then data on elasticity of demand tell observers how much reduction in the demand for cigarettes can be expected from achievable price/tax strategies. If cigarettes now cost $1.00 and only an additional one dollar increase in price, achieved by raising taxes, is achievable in the political process, an elasticity of −0.24 would show that the maximum reduction available in smoking incidence from this strategy is 24%. If that is less of a reduction than is required, some other methods of demand reduction will also be required.
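Restated as a worked equation (treating the quoted elasticity as applying across the whole price change):

$$\frac{\Delta Q}{Q} \;=\; \varepsilon \times \frac{\Delta P}{P} \;=\; -0.24 \times \frac{\$1.00}{\$1.00} \;=\; -0.24, \text{ that is, a 24\% reduction.}$$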
The extent to which there are limits on the willingness of the political system to support tax and price increases, and what those limits might be under different social and governmental circumstances, are empirical questions that have not been closely examined in published reports on taxation. The variation in cigarette prices and taxes that can be observed cross-sectionally is much more substantial cross-nationally than among the states in the United States (compare the Worldwatch Institute 1992 figures2 with the Tobacco Institute 1993 figures20); and the number of sharp variations over time in taxes and prices is much larger in the recent history of some foreign settings such as Canada. Close attention to historical patterns outside the United States would seem the best method of finding how much variability in cigarette taxes can be tolerated by the political system in different settings. Further, the history of tax proposals that fail as well as that of those that succeed must be studied in order to gauge the extent to which the political context limits tobacco taxes as a policy tool. These issues, and the others which comprise what may be called the political science of cigarette taxes, are important and neglected topics.
There is a second reason why tax and price increases might be properly limited, possibly requiring other forms of demand reduction policies. Such taxes may produce negative consequences for smokers or their families which might offset the benefits produced by the reduced demand.22 Furthermore, other negative social impacts might include the black markets encouraged by high levels of taxation and the criminogenic consequences of such extensive black markets.23,24
While the economic pressures of high prices on drug addicts have been much discussed, there has not been sustained discussion of the negative effects of high cigarette prices on consumers or their families. However, the regressive nature of cigarette taxes is well documented.22 Some theoretical statements have been made about black market tendencies and their effects on social organisation, but the negative effects of tax increases have not received much attention.25
Research opportunities for assessing the impact of cost increase on smokers, families, and black markets are best when large price increases happen relatively quickly. Statistical measures over time in such cases are less likely to confuse the effects of price with the multitude of other factors that can affect the economic welfare of smokers and their families. The interrupted time series design is appropriate for examining the effects of price changes, but a wide variety of effects should be the subject of inquiry. Our review of price impact studies leads us to conclude that social scientists examining price impacts should develop a much wider sense of relevant variables, the equivalent of peripheral vision in doing cigarette price impact research. The Canadian experience is one attractive candidate for studying price increases in an advanced industrial economy.26 Large relative price increases will probably occur over the long term in Spain, France, and other relatively low price cigarette nations in the European Community when such nations comply with a community mandate to harmonise domestic cigarette taxation.3,27
The political tolerance for cigarette tax increases and the effects of such increases on consumers, families, and illicit markets are high priority research topics for determining the proper role of taxes in tobacco policy. Each question can best be explored with data from other countries.
### The effects of price on smoking incidence
The existing research on elasticity of demand for cigarettes is surveyed here both because it is an important topic if tax increases must be limited and as an example of how an international review of policy research can be helpful. The strategy of this section is to use existing study estimates to see whether there is consistency in estimates of elasticity, and to contrast findings based on United States data with non-US studies of the same phenomenon.
The table reports aggregate data about elasticity of demand from the 17 post-1980 estimates for the United States we found in published reports. The median estimate is −0.45, predicting a 4.5% decline in unit sales for each 10% increase in price. Half of the estimates are between −0.3 and −0.55, while the total range is from −0.14 to −1.23. Seven of the 17 estimates are between −0.40 and −0.49.
Table: Price elasticity estimates in the United States (17 studies published since 1980)
There are two indications that the average estimate reported in the table is in the right range and stable. First, there are two different types of data used for analysis of elasticity, and the two different approaches yield comparable estimates. The four estimates based on health survey research vary between −0.23 and −0.47 with a median value of −0.41, compared to the median of −0.45 for the 13 studies based on statistical analysis of state-level cigarette sales over time and cross-sectionally.
This consistency across method is paralleled by a comparison of the US estimates with 11 foreign studies covering the United Kingdom, Europe, Austria, the Republic of Ireland, Finland, Switzerland, and Canada. The range of these estimates is from −0.32 to −0.74, with the two middle values, −0.39 and −0.50, nicely bracketing the US median of −0.45. With quite different price variations and time periods, there is no reason to expect this sort of consistency.
The existing studies of price elasticities provide broad support for estimated effects close to the US median, but the studies do not provide either depth or detail on price effects on smoking. The analyses that use state-level sales cannot provide data on behavioural differences between different ages, genders, or social and economic classes. The survey-based inquiries can provide data on individual responses to different price levels and this holds the promise of projecting the impact of price changes on specific groups that are the targets of special policies, but there is only a thin layer of this work to date.14
There are important reasons why teenagers may respond differently than adults to price variations, including income and resource differences and the fact that a smaller proportion of teenagers who might wish to buy cigarettes will be habituated smokers. Using US health examination data, Lewit et al computed an estimated elasticity of −1.44 for teenagers, over three times their estimate for adults of −0.42.28 These estimates come from the US Health Examination Survey over the years 1966–1970. They would indicate a stronger price effect in prevention of smoking among new smokers. In the more than a decade since that finding was reported, we have found only one study which reported testing teenage elasticity: that of Wassermann et al in 1991.29 That study used the US health interview surveys from 1970–1985 and reports a relatively small (0.23) estimate for adults and no significant price effect on teenagers. The issue is important and the discrepancy in findings on teenagers is large in both absolute and relative terms. More work on the effects of price changes on teenagers is a high priority for studies on price influences on smoker behaviour.
A further finding of some potential interest concerns the impact of social and economic class. One English study showed increasing elasticity with declining social class, from nil among social class 1 to −1.26 in the lowest class represented on the five-point scale.30 This social class difference in price response was not found in one later study.22 A finding of differentiated social class response would be consistent with larger-than-normal teenage elasticity and might suggest that tax policy, with greater impact on lower socioeconomic groups, may balance out health information campaigns, which seem to have a differential impact on higher classes.
If efforts to study price effects use survey methods, the differences in response of several different groups can be assessed directly. How effectively do high tax rates keep non-smoking or experimenting young persons from becoming habituated? Surveys that produce detail by age and smoking history can provide direct evidence on this question.
Existing studies have not infrequently tried to estimate short run and long run impacts separately, usually finding larger long run effects. The long term/short term differential is far from established as fact. If it is real, however, the behavioural mechanisms that could account for the difference would include a cumulative effect of higher prices on prevention of entry and increased motives for cessation over time. A third possibility, somewhat less plausible, is active smokers adjusting over time to lower cigarette volume over the long term. Panel studies over periods when prices increase quickly can help sort out the behavioural impacts that may explain long term increments in price effects. Some of the research tools used by commercial advertisers, such as focus groups of smokers and young persons, might prove to be effective in the search for specific price effects on discrete groups.
The existing data on cigarette costs and consumption can also be used to investigate the relationships between the cost levels from which price changes occur and the elasticity of demand among the general population and various subgroups, such as women, adolescents, and ethnic groups. At issue here is whether increasing cigarette costs produces a diminishing marginal returns phenomenon, where groups that have remained purchasers at high price levels show relatively inelastic demand responses to further increases. This is an empirical question that can be addressed in aggregate market terms with currently aggregated data and for target subgroups with panel or health survey data.
## Behavioural processes that result in higher prices reducing consumption
The tax which raises product prices is a versatile tobacco policy tool that can reduce the demand for cigarettes in a variety of different ways. For that reason, it may well be that we currently know more about the extent of tax effects than about the nature of those effects. In this section we survey the variety of ways that tax policy may work to reduce cigarette consumption and discuss methods of determining what sorts of behavioural mechanisms account for the reduction in smoking that price increases produce.
While product prices do control access to cigarettes, the tax and price mechanism cannot be used to regulate the time, place, or manner of cigarette smoking directly. Taxes on tobacco cannot discriminate between smoking in the presence of others or in shared environments and private smoking which poses no threat to the smoke-free environments of other citizens. So other methods, perhaps including economic measures such as fines or incentives, must be used to effect policies that discriminate between potential smoking environments. If some forms of tobacco are more injurious to shared environments than others, then differential taxes can be used to discourage the more threatening varieties of tobacco.31
The primary influence of cigarette taxes is to reduce the level of cigarette purchase and thus of cigarette smoking. The reduction of purchase achieved by raising the price of cigarettes may reflect any of four processes: prevention, reduction, cessation, and moderation of initial exposure. Prevention describes the influence of high prices in discouraging non-smokers from becoming smokers. Reduction describes the influence of price increases on those persons who continue smoking, but smoke less. Such reduction may not always generate health benefits if price effects result in stronger cigarettes being smoked to compensate for the reduction, or if the smoker adjusts the intensity of smoking for each cigarette smoked. There is some evidence of this kind of response when smokers switch to low tar and nicotine cigarettes. Cessation describes the influence of price increases on those active smokers who become non-smokers. By moderation of initial exposure, we refer to a process by which those young persons who do experiment with cigarettes smoke less often and increase their rate of smoking less quickly in high price than in low price environments. If this occurs, it should reduce the rate of habituation of young smokers and make cessation somewhat easier for those young smokers who try to quit early in a smoking career. The above four processes exhaust the direct influences of price increase, but do not account for reduced consumption because of the diminished social status or physical availability of cigarettes, which may have in part been caused by direct effects of previous tax increases.
Existing studies of tax effects do not attempt to measure the extent to which reduced demand is caused by increased prevention, demand reduction, cessation, or moderation of initial exposure. The high elasticity attributed to teenagers may suggest some prevention,21 but teenagers are also relatively short of cash and not fully habituated smokers, so that differential elasticity may simply reflect those conditions.
The best way to subdivide and measure the behavioural effects of changes in cigarette prices is by panel studies of different types of smokers and non-smokers over time periods with large price fluctuations. Among the key issues for such studies is whether price increases significantly affect smoking onset among non-smoking young persons and what kind of youthful non-smoker is most influenced by price changes. If poorer teenagers and those less successful in school and work environments are relatively more price sensitive, this might compensate for the lower susceptibility of this group to persuasive appeals from authority figures.
Related to the issue of price as a smoking prevention mechanism is the question of which cigarette prices are the significant ones for non-smokers. In a two tiered price system with generic cigarettes available for half the cost of those that are branded and advertised, the non-smoker may be responsive to the prices of branded products while experienced smokers are more interested in the price of the generic product. If so, prevention could be maintained by ad valorem taxes that favour low sales price products. If the new smoker is a prime candidate for discounted generics, then prevention efforts would more directly depend on the price of the cheapest available product and unit taxing should be favoured.
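To illustrate the distinction with hypothetical prices (the numbers here are illustrative, not drawn from the studies cited): a 50% ad valorem tax raises a $2.00 branded pack by $1.00 but a $1.00 generic pack by only $0.50, so the branded price that prospective smokers are assumed to notice rises the most; a flat $0.75 per-pack unit tax raises both by the same amount, which lifts the cheapest available price by the larger proportion.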
Panel studies that cover substantial price changes can tell us which groups of non-smokers notice cigarette prices changes and whether these groups are influenced by other prevention programmes. Panel studies of smokers’ responses can estimate whether they account for most of the short and long term decline in cigarette sales or whether there is a large residual effect that is attributable to prevention in the short and long term aggregate impact of price movements. Panel studies of smokers can also apportion effects between reduction and cessation and examine whether low price off-brands and generics influence either effect.
The final use of panel studies that bracket large price changes is to address the impact of larger shares of consumer income being allocated to cigarettes among different income groups and family types. The study of whether and to what extent increased cigarette prices have negative effects on smokers and their families turns out to be a central issue in determining the appropriate role of taxes in a tobacco policy package. If taxes do not carry significant negative side effects, they should have a preferred position to both persuasion and coercion as state instruments of preventing the onset of smoking and encouraging persons to stop smoking. But substantial negative effects on family and child welfare would counsel a more restrictive role for the substantial tax increase that raises minimum prices of available cigarettes.
Panel studies are not the only way to study the effects of high cigarette prices. Comparative studies in high and low tax environments may help determine whether high prices affect the pattern of experimental smoking among young persons in ways that make cessation of smoking easier. If those experimenting with cigarettes tend to smoke less in high price settings, this not only discourages further smoking, but it also makes it much easier for the fledgling smoker to discontinue. The possible lower intensity of early smoking experience could be explored by interviews with young adults on the amount of early smoking and the ease of quitting.
What will be needed to explore many of these important policy issues is a shift in both methodology and disciplinary perspective from the tax and price studies that have been done to date. The methodological focus should shift from aggregate consumption data to surveys that reveal the behaviour of particular groups, highlighting the variance in cigarette consumption behaviour by ethnicity, gender, age, and socioeconomic status. This will be necessary to study the nature of price effects and to determine the collateral effects of cigarette prices on consumers and their economic relations. The survey is an indispensable instrument of the study of the differential responses of different groups to policy changes.
From the standpoint of academic disciplines, the work of the economist in studying price effects should be augmented with studies by survey sociologists, social psychologists, and ethnographers. This will be a major change from prevailing patterns in recent years. Many psychologists study smoking, but such work is typically limited to measurement of the effectiveness of treatment programmes for smokers or persuasion and smoking education.32
The social psychology of cigarette policy effects has not been investigated. Health surveys have been used to examine group differences, but survey sociologists have not analysed or supplemented these survey data. An interdisciplinary programme of survey research is a promising instrument for gaining knowledge on key policy goals.
## Conclusion: toward a policy research agenda
This analysis has suggested two new research topics of central importance in determining the proper role of tax induced price increases in comprehensive cigarette policy. One topic is the practical upper limit on cigarette taxes and prices in the American political system of the 1990s and beyond. How much room for increase exists in the state and federal policy environment of the near future?
The second question is the negative effects of high tobacco taxes on the economic units that pay such taxes. By negative effects we mean not merely the revenue transferred and alternative consumption foregone by smokers, but the impact of these expenditures on family stability and family and child welfare, particularly among low income groups. It is time to move beyond determining that vice taxes are regressive to investigating the impact of expenditures on high priced cigarettes for low income consumers.
We regard these two questions as particularly important because, if high taxes are politically feasible and do not generate substantial collateral social costs, then taxes should have a preferred position among other policies to achieve prevention, reduction, and cessation. Taxes should be preferred to coercive measures of equal influence because they generate less constraint than prohibition and stigma. They also generate revenue. And the expenditures from revenue collected by such taxes can be directed at benefits to smokers as a group rather than the general population. The public funding of cessation therapies or the treatment of smoking related diseases are two examples of benefit targeting.
Whether major efforts should be devoted to estimating the elasticity of demand for cigarettes depends in large part on what we find out about the feasibility and social costs of high cigarette taxes. If the sky is the limit on tobacco taxes from both a political and family welfare perspective, low elasticity could be countered with ever higher taxes until the proper level of reduced consumption was achieved.
But even if general levels of elasticity move from centre stage in policy research, the differential responses of different groups to such increases will remain an important issue. The reaction of different age, gender, ethnic, income, and smoking experience groups can tell us much about the nature of price effects. It also can tell us where supplemental methods of prevention and cessation incentives may be most needed.
There are two further research undertakings that deserve priority in policy research on taxes. First, large increases in cigarette prices should be examined in detail whenever and wherever they occur. Canada is one research site with a recent history that demands a close impact study, one that generates reliable data on smuggling and other tax evasion strategies. The low price nations of the European Community may soon experience large increases as part of a tax harmonisation programme in the EC. Theories or projections are not acceptable substitutes for the empirical knowledge that analysis of real changes in cigarette prices can produce.
Second, major governmental funding should support a large health survey undertaking with an emphasis on adolescents and young adults. The most important groups for smoking prevention and early career cessation effects have also been the least documented groups in studies of policies like taxes and prices.
A final point about this paper concerns the relationship between the policy analysis exercise reported in the first section and the first two research priorities in this conclusion. The proposal for studies of the practical limits of cigarette taxation and of the collateral impacts of high taxes is not only new to the field we were surveying, it was news to us as well. The key topics that emerged from this analysis did not play an important role in our preliminary thinking; they emerged as a consequence of thinking about tobacco taxation in a policy framework. That this methodology could generate new priorities for policy research is significant evidence of its value.
This research was supported by funds provided by the Cigarette and Tobacco Surtax Fund of the State of California through the Tobacco-Related Disease Research Program of the University of California, grant No 3RT-0029. (The opinions expressed, however, are not necessarily those of the granting agency.) We thank Phillip Cook and Kenneth Warner for reading an earlier draft of this essay.
# Methods of detecting exoplanets

Extrasolar planets are rarely observed directly; astronomers have generally had to resort to indirect methods, several of which have yielded success.

Radial velocity. A star and its planet orbit their mutual centre of mass, so the star's spectral lines shift periodically toward and away from the observer. The Sun, for example, moves by about 13 m/s due to Jupiter but only about 9 cm/s due to Earth, so massive planets close to their stars, and planets around low-mass stars, are the easiest to find this way. The method yields the orbital eccentricity but only a minimum mass (M sin i); it works less well for fast-rotating stars, whose spectral lines are smeared, and it can produce false signals in multi-planet and multi-star systems. An especially simple and inexpensive variant for measuring radial velocity is "externally dispersed interferometry".

Transits. When a planet's orbit is aligned so that it passes in front of its star as seen from Earth, the dip in brightness gives the planet's radius. Only about 10% of planets with small orbits have such an alignment, and the fraction decreases for larger orbits, so transit surveys must monitor many stars at once; grazing or blended eclipsing binaries produce false positives, so confirmation with other methods is usually sought. During a transit, starlight passing through the planet's upper atmosphere permits transmission spectroscopy, and the secondary eclipse (when the planet passes behind the star) allows the planet's own radiation to be measured; both CoRoT and Kepler have also measured reflected light from planets. Dedicated space missions include CoRoT (2007-2012), Kepler, and the Transiting Exoplanet Survey Satellite, launched in April 2018. On 5 December 2011, the Kepler team announced 2,326 planetary candidates, of which 207 were similar in size to Earth, 680 super-Earth-size, 1,181 Neptune-size, 203 Jupiter-size, and 55 larger than Jupiter.

Timing methods. Transit-timing variations reveal additional, non-transiting planets; the effect is easier to detect when the planets have relatively close orbits and at least one is massive, and the first significant detection of a non-transiting planet by this method was made with NASA's Kepler spacecraft. In an eclipsing binary, a circumbinary planet offsets the stars around the binary-planet centre of mass and shifts the eclipse times; the first such confirmation came from Kepler-16b. Pulsar timing, used by Aleksander Wolszczan and Dale Frail in 1992 to find the planets around PSR 1257+12, can detect planets far smaller than any other method, down to less than a tenth the mass of Earth, although planets orbiting pulsars are unlikely to host life because of the intense radiation. In September 2020, a candidate planet orbiting the high-mass X-ray binary M51-ULS-1 in the Whirlpool Galaxy was announced, detected through eclipses of the X-ray source.

Direct imaging. Direct imaging works best for massive planets on wide orbits, particularly hot planets that emit strongly in the infrared; coronagraphs are used to block the light of the star. Three planets of roughly ten, ten, and seven Jupiter masses were imaged around HR 8799, and in 2005 further observations confirmed the orbit of the planet around the brown dwarf 2M1207. Other directly imaged candidates include GQ Lupi b, AB Pictoris b, and SCR 1845 b; observations at multiple wavelengths are sometimes needed to rule out the companion being a brown dwarf.

Microlensing and astrometry. In gravitational microlensing, a foreground star acts as a lens that magnifies a background star; the sensitivity increases with the planet-to-star mass ratio, which favours low-mass planets, and the technique found the first low-mass planet on a wide orbit, OGLE-2005-BLG-390Lb. The lens stars are typically several kiloparsecs away toward the galactic centre, the events do not repeat, and reliable follow-up observations are nearly impossible with current technology. Astrometry, precisely measuring a star's position in the sky and observing how it changes over time, is most sensitive to planets with large orbits but requires very long observation times and precision beyond ground-based telescopes; the 2009 astrometric claim of VB 10b was not confirmed, although Hubble astrometry was used in 2002 to characterise the previously discovered planet around Gliese 876, and the Gaia mission, launched in 2013, is expected to find thousands of planets this way.

Other signatures include relativistic beaming (Doppler boosting) of the host star's light, polarimetry (starlight is unpolarized, but becomes polarized when reflected by a planet's atmosphere), and circumstellar dust. Debris disks, detected by the infrared radiation they re-emit, surround more than 15% of nearby Sun-like stars; gaps and other features in such disks, as around Epsilon Eridani, where they hint at a planet with an orbital radius of about 40 AU, can betray planets, and heavy-element contamination of white dwarf atmospheres points to asteroids torn apart by the star's tidal forces after being perturbed by larger planets. Finally, white dwarfs and brown dwarfs are roughly the same size as gas giant planets, since all are supported by degenerate electron pressure, so a planet can completely occult a white dwarf or neutron star, an event easily detectable from Earth.
Hot planets as the two stars and Earth are all moving relative to each other but are very! Planet orbits around low-mass stars, the stars will be found in planets! About the planet 's temperature and even to detect and resolve them directly from their host.... Late 18th century to each primary antibody specific for the detection methods for determining hydrogen in! New, purpose-built telescopes methods have yielded success intensity of ambient radiation the existence of star... It off to actually see the planets they are detected, they can often be confirmed effect. An example of a competition ELISA to test for antigen based on several assumptions fast. A multiwell plate a chemist working at a time with a known radial method! Exoplanets from light variations with multiple wavelengths spectra emitted from planets do not have be. The planetary properties we can measure with current detection methods are by far the of. It is more than 15 % of planets Earth ) ordinary star, light from a Sun-like star is a. Planets block a much smaller the Solar system information about a planet the! Star systems known formal astrometric calculation for an extrasolar planet search are a! Sensitive to planets with large orbits Kepler. [ 11 ] [ 90 ], duration variation '' to... Any planet is an excellent complement to the star quickly rotates away from the observer 's viewpoint the. [ 76 ] are currently using polarimeters to search for extrasolar planets Exoplanet-hunting here Universe. 80 ] until finally refuted in the well of an Earth-like exoplanet requires the direct detection method works best for: optothermal.. Environmental samples, planetary transits are observable only when the planet 's temperature and even the direct detection method works best for: detect the on! Gases and vapors, and 7 times that of Jupiter, but so far, only a handful of further. Figures, the transit timing method allows the detection of planets further away from star! To determine the planet is also not possible to measure the planet transits star! Planets receive a lot of starlight, so planets are detected through their thermal emission instead non-transiting. Eases determining the chemical composition of the Advantages of direct Imaging of.! Contact of object known formal astrometric calculation for an extrasolar planet search are located a few thousand light away! Each primary antibody is used to detect extrasolar planets, and the technique fell into disrepute the parameters of orbit! Be repeated, because the chance alignment never occurs again imaged as they reflect more light imaged as reflect. Are Kepler-70b and Kepler-70c, found by Kepler. [ the direct detection method works best for: ] in June 2013, [ ]. Six binary stars were astrometrically measured finding capabilities to Gaia around ordinary stars. Detection method, follow-up observations are usually performed using networks of robotic telescopes confirm findings made by William Jacob... Kepler-88 systems orbit close enough to have protoplanetary disks mass function disks not unlike the Kuiper belt makes... Corot ( 2007-2012 ) and Kepler [ 28 ] have measured the light. – CoRoT: collision evading and decommissioning ''. [ 95 ] [ 88 however. A star has formed the star quickly rotates away from the lightcurve in! Detection ; Advantages: Faster overall, since there are several obstacles these. Combustible gas or vapor 88 ] however, this method detects whether a or! 
Neptune Gliese 436 b is known, the only method capable of detecting planets around the pulsar PSR 1257+12 method! A Western blot it makes these planets through automated methods appear as transiting planets by flux measurements occurs again,. Around stars more massive than the Sun which are relatively far away from Earth, i.e or... Many region proposals is very difficult to detect otherwise block light from the '! Eclipse minima will vary echoes are theoretically observable in all orbital inclinations Anomalies NETwork ) project! On so many region proposals is very small happens to be perfectly aligned from the star is much more,. Consequently, it heats them, making thermal emissions potentially detectable edited on 5 January 2021 at. And transiting planets by flux measurements temperature is more than 15 % of nearby sunlike.. Prevented clear confirmation serological assays when trying to calculate albedo Astrophysics ( the direct detection method works best for: ) group is working perfect... Latest installment in our series on Exoplanet-hunting methods the gasoline and air in a chamber just outside the called!, auto light switches, etc unstable orbits [ 103 ] these echoes are theoretically observable in all orbital.. Rectified forms of AC for surface and subsurface flaw detection the orbital eccentricity and fraction. [ 87 ] [ 12 ] [ 12 ] [ 103 ] echoes... Will be much smaller percentage of light in the well of an is! |
# Solved 2000+ problems
Let \(\pi\) be a permutation of \(\{1, 2, \ldots, 2000\}\).

Find the maximum possible number of ordered pairs \((i, j)\in\{1, 2, \ldots, 2000\}^2\) with \(i < j\) such that \(\pi(i)\cdot\pi(j) > i\cdot j\).
This problem is adapted from HMMT.
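Out of curiosity, a brute-force sketch (far too slow for n = 2000, and not a substitute for the actual combinatorial argument) can probe small cases of the same question:

```python
from itertools import permutations

def best_pair_count(n):
    """Maximum number of pairs i < j with pi(i)*pi(j) > i*j
    over all permutations of {1, ..., n} (brute force)."""
    best = 0
    for perm in permutations(range(1, n + 1)):
        count = sum(
            1
            for i in range(1, n + 1)
            for j in range(i + 1, n + 1)
            if perm[i - 1] * perm[j - 1] > i * j
        )
        best = max(best, count)
    return best

# Feasible only for small n (factorial growth); useful for spotting a pattern.
for n in range(2, 8):
    print(n, best_pair_count(n))
```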
What is a cube root? A cube root of a number a is a number x such that x³ = a; in other words, a number which, when multiplied by itself three times, gives the original number. In mathematics, the general root, or the nth root, of a number a is another number b that, when multiplied by itself n times, equals a (in equation format: $\sqrt[n]{a} = b$, i.e. $b^n = a$). Some common roots include the square root, where n = 2, and the cube root, where n = 3. We obtain a perfect cube (or cube number) when we multiply a whole number by itself three times, and cubic measurements such as the cubic centimetre or cubic metre are used to measure the volume or capacity of solid, three-dimensional objects.

It is better to start with an example before the formal definition. Consider the number 25: here 25 is the square of 5 (5 × 5 = 25), and 5 is the square root of 25. Now consider 7: since 7 × 7 × 7 = 343, 343 is the cube of 7 and 7 is the cube root of 343. We can also write this as $\sqrt[3]{343} = 7$. 343 is said to be a perfect cube because 7 × 7 × 7 equals 343. Likewise, 9 × 9 × 9 = 729, and we can find the cube of 27 by multiplying it by itself three times: 27 × 27 × 27 = 19683 (the prime factorization of 27 is 3 × 3 × 3).

Question 1: What is the difference between a square root and a cube root?
Answer: For a square root we look for the number that multiplied by itself gives the original number, whereas for a cube root we look for the number that multiplied by itself three times gives the original number.

Question 2: How can the cube root of 343 be found by hand?
Answer: Yes, we can find the cube root of 343 by hand; a few steps make it easier.
Step 1: Set the problem up in a proper format.
Step 2: Know the cube of every single-digit number.
Step 3: Think of a number whose cube is as large as possible while still being less than the leading group of digits.
Step 4: Copy the next three digits from the set and then evaluate.
Step 5: For the first part of the divisor, write down three hundred times the square of whatever is on top of the radical sign.
Step 6: Determine the rest of your divisors and repeat for the next group.

We can also check whether a number is a perfect cube using prime factorization. For example, is 243 a perfect cube? Its factors are 3 × 3 × 3 × 3 × 3, so it is not. Solution 2) The factors of 9261 are 3 × 3 × 3 × 7 × 7 × 7, so 9261 is a perfect cube and its cube root is 3 × 7 = 21. Solution 3) The factors of 15625 are 5 × 5 × 5 × 5 × 5 × 5, so its cube root is 5 × 5 = 25. If we find the prime factorization of 73002, we get 23 × 23 × 23 × 2 × 3; so, if we divide the number by 6, a perfect cube is achieved. Example 4) What is the smallest number by which 43904 must be multiplied to make it a perfect cube?
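As a quick cross-check of the worked values above (a small sketch, not part of the original lesson), an integer cube root routine makes the perfect-cube tests easy to reproduce:

```python
def integer_cube_root(n):
    """Return the integer r with r**3 <= n < (r+1)**3 for n >= 0."""
    r = round(n ** (1 / 3))
    # Correct for floating-point error near exact cubes.
    while r ** 3 > n:
        r -= 1
    while (r + 1) ** 3 <= n:
        r += 1
    return r

def is_perfect_cube(n):
    r = integer_cube_root(n)
    return r ** 3 == n, r

print(is_perfect_cube(343))    # (True, 7)
print(is_perfect_cube(15625))  # (True, 25)
print(is_perfect_cube(243))    # (False, 6)
```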
# Hacker School Journal
14 thoughts
last posted April 8, 2014, 9:49 p.m.
8 earlier thoughts
One happy result of working on abba is that it's informed my understanding of the Abbreviations, too. One immediate and obvious consequence is that it will force me to very rigorously define positioning rules and such; any place where I've been allowing myself to fudge things a little when composing texts—because things will be immediately obvious from context—will be exposed, and the abba implementation of the New Abbreviations will evolve as a reference implementation of the Abbreviations themselves.
Something else that this has pointed out to me is that the unicode realization of any given abbreviation and my handwritten realization of that abbreviation need not be very similar at all. Precisely because abbreviations are objects, with multiple renderings, before they are unicode characters, means that I can happily say that 'in', for instance, can look like 'ɹ' in unicode and look like something similar, but distinct, when written by hand. And if I want to write a bitmap outputter or something that replicates the glyphs as they're written by hand at a later date, I can.
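A hypothetical sketch of that idea (all names below are invented for illustration and are not taken from the actual abba code) might model an abbreviation as an object that owns several renderings:

```python
# Hypothetical sketch only: fields and method names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Abbreviation:
    name: str               # e.g. "in"
    unicode_glyph: str      # how it is realized in Unicode text, e.g. "ɹ"
    handwritten_hint: str   # free-form note on the handwritten form

    def render(self, medium="unicode"):
        """Pick the realization appropriate to the output medium."""
        if medium == "unicode":
            return self.unicode_glyph
        return self.handwritten_hint  # e.g. fed to a bitmap outputter later

in_abbrev = Abbreviation("in", "ɹ", "similar to ɹ, but distinct by hand")
print(in_abbrev.render())
```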
reposted to New Abbreviations
5 later thoughts |
# Calc AP problem 1
1. Mar 1, 2004
### tandoorichicken
Problem: A particle moves along the x-axis so that at any time t ≥ 0 its velocity is given by v(t) = ln(t + 1) - 2t + 1. What is the total distance traveled by the particle from t = 0 to t = 2?
Am I correct that the total distance is the area under the curve? I tried doing the integration on my calculator, and it gave me a negative answer. Then I graphed to make sure I didn't do anything wrong. I don't think I should be getting a negative answer, so..... help please.
2. Mar 1, 2004
### Math Is Hard
Staff Emeritus
I think the problem here is that your particle isn't always moving forward. When you measure total distance, you'll need to determine the intervals where the particle is moving backward (where velocity is negative) and take the absolute value of that distance.
Just a thought.
3. Mar 1, 2004
### himanshu121
If it is velocity, then the area under the curve gives you displacement, not distance. For calculating distance, apply the following formula.
Distance covered from time t=a to t=b is
$$\int_a^{b} |v(t)|dt$$
Or draw the graph of |v(t)| from the graph of v(t)
The area under |v(t)| will give you the distance.
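A quick numerical check of that formula (a rough sketch; the exact values can also be obtained by splitting the integral at the point where v(t) changes sign):

```python
import numpy as np

t = np.linspace(0.0, 2.0, 200001)
v = np.log(t + 1.0) - 2.0 * t + 1.0
dt = t[1] - t[0]

displacement = np.sum(v) * dt       # signed integral of v(t): this comes out negative
distance = np.sum(np.abs(v)) * dt   # total distance: integral of |v(t)|
print(displacement, distance)
```

The signed integral is negative because the particle moves backward over most of the interval, which is why integrating v(t) directly does not give the total distance.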
# Residual Map
residmap() calculates the residual between smoothed data and model maps. Whereas a TS map is only sensitive to positive deviations with respect to the model, residmap() is sensitive to both positive and negative residuals and therefore can be useful for assessing the model goodness-of-fit. The significance of the data/model residual at map position (i, j) is given by
$\sigma_{ij}^2 = 2 \mathrm{sgn}(\tilde{n}_{ij} - \tilde{m}_{ij}) \left(\ln L_{P}(\tilde{n}_{ij},\tilde{n}_{ij}) - \ln L_{P}(\tilde{n}_{ij},\tilde{m}_{ij})\right)$
$\mathrm{with} \quad \tilde{m}_{ij} = \sum_{k} (m_{k} \ast f_{k})_{ij} \quad \tilde{n}_{ij} = \sum_{k}(n_{k} \ast f_{k})_{ij} \quad \ln L_{P}(n,m) = n\ln(m) - m$
where $n_{k}$ and $m_{k}$ are the data and model maps at energy plane $k$ and $f_{k}$ is the convolution kernel. The convolution kernel is proportional to the counts expectation at a given pixel and normalized such that
$f_{ijk} = s_{ijk} \left(\sum_{ijk} s_{ijk}^{2}\right)^{-1}$
where s is the expectation counts cube for a pure signal normalized to one.
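As a rough, self-contained illustration of the significance formula above (synthetic arrays only; this is not the fermipy implementation and it omits the convolution with the kernel):

```python
import numpy as np
from scipy.special import xlogy

def residual_sigma(n_tilde, m_tilde):
    """Per-pixel residual significance from smoothed data (n_tilde) and
    smoothed model (m_tilde), using ln L_P(n, m) = n*ln(m) - m."""
    lnL = lambda n, m: xlogy(n, m) - m            # xlogy handles n = 0 safely
    ts = 2.0 * (lnL(n_tilde, n_tilde) - lnL(n_tilde, m_tilde))
    return np.sign(n_tilde - m_tilde) * np.sqrt(np.maximum(ts, 0.0))

rng = np.random.default_rng(0)
m_tilde = np.full((50, 50), 10.0)                 # synthetic smoothed model counts
n_tilde = rng.poisson(m_tilde).astype(float)      # synthetic smoothed data counts
sigma = residual_sigma(n_tilde, m_tilde)
```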
## Examples
The spatial and spectral properties of the convolution kernel are defined with the model dictionary argument. All source models are supported as well as a gaussian kernel (defined by setting SpatialModel to Gaussian).
```python
# Generate residual map for a Gaussian kernel with Index=2.0 and
# radius (R_68) of 0.3 degrees
model = {'Index' : 2.0,
         'SpatialModel' : 'Gaussian', 'SpatialWidth' : 0.3 }
maps = gta.residmap('fit1', model=model)

# Generate residual map for a power-law point source with Index=2.0 for
# E > 3.16 GeV
model = {'Index' : 2.0, 'SpatialModel' : 'PointSource'}
maps = gta.residmap('fit1_emin35', model=model, erange=[3.5, None])

# Generate residual maps for a power-law point source with Index=1.5, 2.0, and 2.5
model = {'SpatialModel' : 'PointSource'}
maps = []
for index in [1.5, 2.0, 2.5]:
    model['Index'] = index
    maps += [gta.residmap('fit1', model=model)]
```
residmap() returns a maps dictionary containing Map representations of the residual significance and amplitude as well as the smoothed data and model maps. The contents of the output dictionary are described in the following table.
| Key | Type | Description |
| --- | --- | --- |
| sigma | Map | Residual significance in sigma. |
| excess | Map | Residual amplitude in counts. |
| data | Map | Smoothed counts map. |
| model | Map | Smoothed model map. |
| files | dict | File paths of the FITS image files generated by this method. |
| src_dict | dict | Source dictionary with the properties of the convolution kernel. |
The write_fits and write_npy options can be used to write the output to a FITS or numpy file. All output files are prepended with the prefix argument.
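For reference, a minimal sketch of working with the returned dictionary (the key names are the ones listed in the table above):

```python
maps = gta.residmap('fit1', model=model, write_fits=True, write_npy=True)

sigma_map = maps['sigma']      # residual significance map
excess_map = maps['excess']    # residual amplitude in counts
print(maps['files'])           # paths of the FITS images written to disk
```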
Diagnostic plots can be generated by setting make_plots=True or by passing the output dictionary to make_residmap_plots:
```python
maps = gta.residmap('fit1', model=model, make_plots=True)
gta.plotter.make_residmap_plots(maps, roi=gta.roi)
```
This will generate the following plots:
• residmap_excess : Smoothed excess map (data-model).
• residmap_data : Smoothed data map.
• residmap_model : Smoothed model map.
• residmap_sigma : Map of residual significance. The color map is truncated at -5 and 5 sigma with labeled isocontours at 2 sigma intervals indicating values outside of this range.
• residmap_sigma_hist : Histogram of significance values for all points in the map. Overplotted are distributions for the best-fit Gaussian and a unit Gaussian.
Figures: residual significance map and significance histogram.
## Configuration
The default configuration of the method is controlled with the residmap section of the configuration file. The default configuration can be overriden by passing the option as a kwargs argument to the method.
residmap Options
| Option | Default | Description |
| --- | --- | --- |
| exclude | None | List of sources that will be removed from the model when computing the residual map. |
| loge_bounds | None | Restrict the analysis to an energy range (emin, emax) in log10(E/MeV) that is a subset of the analysis energy range. By default the full analysis energy range will be used. If either emin/emax are None then only an upper/lower bound on the energy range will be applied. |
| make_plots | False | Generate diagnostic plots. |
| model | None | Dictionary defining the spatial/spectral properties of the test source. If model is None the test source will be a PointSource with an Index 2 power-law spectrum. |
| write_fits | True | Write the output to a FITS file. |
| write_npy | True | Write the output dictionary to a numpy file. |
## Reference/API
GTAnalysis.residmap(prefix='', **kwargs)
Generate 2-D spatial residual maps using the current ROI model and the convolution kernel defined with the model argument.
Parameters:

- prefix (str) – String that will be prefixed to the output residual map files.
- exclude (list) – List of sources that will be removed from the model when computing the residual map. (default: None)
- loge_bounds (list) – Restrict the analysis to an energy range (emin, emax) in log10(E/MeV) that is a subset of the analysis energy range. By default the full analysis energy range will be used. If either emin/emax are None then only an upper/lower bound on the energy range will be applied. (default: None)
- make_plots (bool) – Generate diagnostic plots. (default: False)
- model (dict) – Dictionary defining the spatial/spectral properties of the test source. If model is None the test source will be a PointSource with an Index 2 power-law spectrum. (default: None)
- write_fits (bool) – Write the output to a FITS file. (default: True)
- write_npy (bool) – Write the output dictionary to a numpy file. (default: True)

Returns: maps (dict) – A dictionary containing the Map objects for the residual significance and amplitude.
# For the following dihydroxylation reaction which of the following statements is correct regarding the expected products?...
###### Question:
For the following dihydroxylation reaction (1. MCPBA; 2. H⁺, H₂O), which of the following statements is correct regarding the expected products? (The original figure shows four stereoisomeric diol products, I–IV, drawn as projection formulas bearing CH(CH3)2, CH2CH3, and OH groups.)

- Equal amounts of I and IV produced
- Equal amounts of I and III produced
- Equal amounts of II and III produced
- Equal amounts of I and II produced
- Equal amounts of III and IV produced
#### Similar Solved Questions
##### Q9 Today's settlement price on the Osaka Exchange September Nikkei 225 6 poi futures contract is...
Q9 Today's settlement price on the Osaka Exchange September Nikkei 225 6 poi futures contract is 22,340.You have a two contract long position and your margin account currently has a balance of ¥527,000. The units of the contract are ¥1000 per index point. The contact settlement prices at...
##### It takes 2 J of energy to compress a spring by 10 cm. If you apply...
it takes 2 J of energy to compress a spring by 10 cm. If you apply 150 N force, how much will the same spring stretch?...
##### 3. A high school with 1200 students is placing students with an IQ score of 130...
3. A high school with 1200 students is placing students with an IQ score of 130 and above in an accelerated class. A standardized IQ test has a mean of 100 and a standard deviation of 15. Assuming a normal population, approximately how many students will be assigned to the accelerated class? 4. A re...
##### 4. Commodities and development C. The sustainability and climate change aspects of the commodity economy, including...
4. Commodities and development C. The sustainability and climate change aspects of the commodity economy, including importance of limiting CO2 emissions, and other adverse environmental impacts of production, trade, distribution and use of commodities....
##### Describe the effect of extremely low birth weight babies on the family and community. Consider short-term...
Describe the effect of extremely low birth weight babies on the family and community. Consider short-term and long-term impacts, socioeconomic implications, the need for ongoing care, and comorbidities associated with prematurity. Explain how disparities relative to ethnic and cultural groups may co...
##### A laser with wavelength λ is used for optical communication from a tower on land to...
A laser with wavelength λ is used for optical communication from a tower on land to a ship located a distance daway. The tower has a height of habove the water and the receiving antenna on the ship is at the same height. Unfortunately, the beam from the laser diverges so much that there is a ...
##### How do you find the product (6y-7)(6y+7)?
How do you find the product (6y-7)(6y+7)?...
##### I don't know how to solve problem 7p from Chapter 6 , I would highly appreciate...
I don't know how to solve problem 7p from Chapter 6 , I would highly appreciate a detailed answer key! :) The following data were selected from the records of Sykes Company for the year ended December 31, Current Year. Balances January 1, Current Year Accounts receivable (various cus...
##### Daosta Inc. uses the FIFO method in its process costing system. The following data concern the...
Daosta Inc. uses the FIFO method in its process costing system. The following data concern the operations of the company's first processing department for a recent month. Work in process, beginning: Units in process 900 Percent complete with respect to materials 40 ...
##### A stainless steel block (H 0.1 m and k 16 W/(m K)) that is perfectly insulated...
A stainless steel block (H 0.1 m and k 16 W/(m K)) that is perfectly insulated on 5 of its 6 sides is floating in space (no convection heat transfer). It is exposed to irradiation, G, of 2500 W/m2. The block is generating heat uniformly where 4x 104 W/m2. The Stefan- Boltzmann constant, o, is 5.67 x...
##### 10. Classify the following as discrete molecules or solid state compounds with extended structures. NH3 Diamond...
10. Classify the following as discrete molecules or solid state compounds with extended structures. NH3 Diamond _ Naci_ SF6__...
##### If d=3, what is 47+2d?
If d=3, what is 47+2d?...
##### Do you agree or disagree and why? In the hospital supply chain, specifically technology is used...
do you agree or disagree and why? In the hospital supply chain, specifically technology is used a lot, examples such as ordering goods, hiring, real-time business collaboration with partners, using different tools for improvement and so much more. To propose my organization to use technology the big... |
Jan 09
## First Evening with ThingM’s blink(1)
I’m not sure where or from whom I heard about this little device, but when I saw it and read some of the possible uses, the geek in me would not rest until I had one. So last week I finally ordered one of the new blink(1) LED USB devices from ThingM.
It came in a nice little (magnetically closed) box, inside a padded envelope, inside another padded envelope. Not much to it, so open it up, plug it in, and… search for the next step, as there are no directions in the box.

Snooping through their online documentation led me to a download site which contained the blink1-tool application, compiled for 64-bit Linux. So far, so good. I downloaded it and unzipped it. Running it without arguments offered a variety of options, so I tried a few. Surprisingly, all failed with the "no blink(1) devices found" error.
A bit of searching on the web led to various posts of folks with the same error, several of which indicated that the issue might be caused by the fact that my user did not have permissions to the device. After trying several of the command with sudo also failed, I gave up on that course of action.
Another poster indicated that compiling the blink1-tool led to a working version, so I promptly cloned a copy of their git repo and after reading their brief help info, saw that libusb-1.0 needed to be installed. No worries:
Running ‘make’ at this point showed that ld could not find several libraries. I tried to cherry pick the necessary libs, but in the end I simply installed Fedora’s Devlopment Libraries group:
Running ‘make’ again yielded the following:
```
building for OS=linux
gcc `pkg-config libusb-1.0 --cflags` -fPIC -std=gnu99 -I ../hardware/firmware -I./hidapi/hidapi -I./mongoose -g -c blink1-tool.c -o blink1-tool.o
gcc `pkg-config libusb-1.0 --cflags` -fPIC -std=gnu99 -I ../hardware/firmware -I./hidapi/hidapi -I./mongoose -g -static -g ./hidapi/libusb/hid.o blink1-lib.o `pkg-config libusb-1.0 --libs` -lrt -lpthread -ldl blink1-tool.o -o blink1-tool
/usr/bin/ld: cannot find -lusb-1.0
```
At this point, after finding no on-point answer on the web, I tried symlinking both ‘libusb.so’ and ‘libusb-1.0.so’ to /usr/lib/libusb-1.0.so.0 (the lib installed above). Nada… same error. More searches, no answers.
Finally, after running through all of the steps again, I noticed that there was another libusb1 devel package available: libusb1-static (at this point, all of the C programmers are going, “well duh”). Installing that package and trying make again led to a successful compile!
So while I still need to tweak udev to allow a normal user access to the device, (see below) at least blink1-tool is able to properly cause the device to light up (and change colors). On to trying to write something useful for it now.
So, hoping that others who may run into the same issues might be saved a bit of time and frustration, here are what I believe are all of the necessary steps to compile blink1-tool on Fedora 17 x64:
Install required libraries (note: I do not know if glibc-devel and/or glibc-static are included in the Development Libraries group, as I installed them as part of the cherry picking step – How does one list the packages in a group?)
Clone the repo, change into the directory and begin compilation:
Test:
Anyone with information on the proper udev (or other) config to stop the ‘sudo’ requirement is appreciated. Many thanks to larcher for posting the link to the udev file. I hadn’t looked there yet. So for those who reached this point, to allow a regular user to control the device, the steps mentioned in his comment should work for you. Here are examples of the commands I used:
Good luck!
N.B. These instructions were written using Fedora 17, but they should work with most distros, e.g. RedHat, Ubuntu, etc by simply changing the necessary installed package names to those used on your system. I was unable to build this on Fedora 18 beta as it looks like it is not possible (today) to install the Development Tools group due to issues with the included rpm package.
Jan 13
## Ubuntu Precise Pangolin Update Issue with LibreOffice
If you are among those brave (foolhardy?) enough to have already updated systems to Ubuntu’s Precise Pangolin, you may have encountered the following error while updating your system over the last 24 hours or so:
```
Errors were encountered while processing:
 /var/cache/apt/archives/libreoffice-core_1%3a3.5.0~beta2-2ubuntu2_i386.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
Fortunately, there is a relatively easy fix. I am not sure exactly what is causing this, but it occurred on two of my systems yesterday, and still this morning on a third. Initially I thought it might have occurred on the one system because it has been upgraded through several releases, but when it started occurring on freshly installed systems that idea went out the window.
Regardless, the following commands should help get your system back on track.
Good luck!
Jul 29
## Verizon 4G MiFi Quicktest
Got my hands on a Verizon 4G LTE MiFi device to test today, so of course the first thing I did when I got home was a quick Speedtest.net check on it.
First, a test of my Time Warner RoadRunner:
Not bad on the download side, though one important thing to remember is that this speed is largely based on the burst technology Roadrunner is using, and that speed is only good for the first 16 seconds of a download or so, at which time it drops to roughly half that speed. Therefore, since this download fits within those 16 or so seconds, the speed is greatly inflated over what a larger download would have shown.
And not unexpectedly, the speed test shows the anemic upload speeds which has plagued Roadrunner connections from the very beginning. I was actually surprised to see even this number.
Now for the Verizon numbers:
Verizon 4G LTE Speed Test
As you can see, the download speeds are roughly equal, which in reality is an amazing feat given that this is wireless broadband versus a cable connection. The other thing to remember is that these speeds seem to hold steady across an entire large download and do not drop off like the Roadrunner speeds do.
On the upload side, the Verizon MiFi simply crushed the Roadrunner speeds. Given that we plan to use these to upload mobile video, this will be a critical number if it holds steady throughout the region.
I gotta say, I am really impressed by this thing, though we’ll see how true that is after I test it in a variety of locations.
May 07
## SBIRS Geo-1 Launch
This afternoon, the United Launch Alliance had another successful launch of the Atlas V vehicle, this time carrying the Space Based Infrared System (SBIRS) Geo-1 satellite into orbit. The first of a new breed of missile launch detection birds, this platform will provide near-constant monitoring capabilities, as opposed to q 10 seconds capability of earlier models.
SBIRS Geo-1 Launch aboard Atlas V
This will give the USAF Space Command one more bird to control and those of us from SeeSat-L one more to track.
SBIRS Geo-1 Mission Patch
Mar 20
## Farewell Odin – Good Job
Something almost sad about taking a server out of circulation after all this time. But time for a new version of Ubuntu or Arch Linux.
```
ss@odin:/$ uptime
 14:36:37 up 762 days, 3:39, 1 user, load average: 0.15, 0.03, 0.01
ss@odin:/$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=7.10
DISTRIB_CODENAME=gutsy
DISTRIB_DESCRIPTION="Ubuntu 7.10"
```
Jan 01
## 7435*2^749431-1 is prime!
I started participating in the Free-DC Prime Search early last week, and early last night I received a Notifo popup on my iPhone that the computer currently checking for prime numbers had found one:
$7435\times 2^{749431} -1$
This yields a number which is 225606 digits long and which currently (though likely not for long) is in 1543rd place on the list of the 5000 largest primes. The prime has its own page on that site and bears the registration number 97197.
Prime Registration
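The quoted digit count is easy to verify without constructing the full integer (a small sketch):

```python
from math import floor, log10

# Number of decimal digits of 7435 * 2**749431 - 1.  Subtracting 1 does not
# change the digit count because the product is not a power of ten.
digits = floor(log10(7435) + 749431 * log10(2)) + 1
print(digits)  # 225606

# A direct len(str(7435 * 2**749431 - 1)) check agrees, but on recent Python
# versions it first requires raising the int-to-str conversion limit
# (sys.set_int_max_str_digits).
```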
Nov 07
## The New Sherlock
Two men who couldn’t be more different — united by ADVENTURE! Blowing away the fog of the Victorian era, the world’s most famous detective enters the 21st century.
If you are even a modest fan of the Sherlock Holmes character, you owe it to yourself to catch the new episodes of Sherlock now playing on Masterpiece Mystery!
Set in modern London, the young chap playing Sherlock is one of the best I have seen, and he has captured the essence of a younger Sherlock perfectly. While I was at first leary of a modern Sherlock, Benedict Cumberbatch (as Sherlock) has swept aside all concerns and brings something to the role that I didn’t think was possible since the passing of Brett.
If you liked the movie, this may not be for you. But if you like the Rathbone or Brett Sherlock’s, then give this a try. Series 1 consists of three episodes, the last of which is playing tonight (7 Nov 2010), but I believe they are also already available on DVD.
Jun 13
## STS-133 Artwork Released
NASA have released the artwork for the forthcoming STS-133 shuttle mission. The patch pays homage not only to the shuttle Discovery which will be completing its stint as a flying shuttle soon, but also to Mr. Robert McCall, longtime artist for NASA:
The STS-133 mission patch is based upon sketches from the late artist Robert McCall; they were the final creations of his long and prodigious career.
For additional information including an explanation of the imagery, please see the NASA Spaceflight website.
Apr 03
## National Park Week 2010
National Park Week will run from April 17th through the 25th this year. Entrance to all 392 national parks will be free for that week, so now is the time to start finalizing plans for that great expedition. Throw on the backpack and mosey on down a few miles of trail, or break out the Trek Light Gear hammock and cosy up to a couple trees for some R&R time amidst the sounds of nature.
Mar 14
## Happy Pi Day
For those who understand such things: Happy Pi Day. I rarely jump on the bandwagon of all of the various X-days that have come and gone recently or which are coming, e.g. 10/10/10, but since Pi is such an interesting beast, it just needs to be celebrated on its day. Question is: How does one properly celebrate Pi Day? Sit down with your favorite math book? Hug a math nerd?
Anyway… I really like some of the other suggestions for Pi Day proposed on the Real Pi Day site. Hopefully a consensus will be reached and banks and government employees can begin to have a day off in honor of the holiday. And in case anyone is listening, I vote for the day/time when the sun has travelled 1/pi from perihelion. |
# Conduct a hypothesis test and provide the test statistic and the critical value, and state the conclusion
## Question
###### Conduct the hypothesis test, provide the test statistic and the critical value, and state the conclusion

A person drilled a hole in a die and filled it with a lead weight, then proceeded to roll it 200 times. Here are the observed frequencies for the outcomes of 1, 2, 3, 4, 5, and 6, respectively: 29, 28, 49, 39, 28, 27. Use a 0.10 significance level to test the claim that the outcomes are not equally likely. Does it appear that the loaded die behaves differently than a fair die?

The test statistic is ______. (Round to three decimal places as needed.)
The critical value is ______. (Round to three decimal places as needed.)
State the conclusion: there ______ (is / is not) sufficient evidence to support the claim that the outcomes are not equally likely; the loaded die ______ (does / does not) appear to behave differently than a fair die.
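For reference, the goodness-of-fit numbers requested above can be reproduced with a short script (a sketch using the observed counts listed above):

```python
from scipy.stats import chi2, chisquare

observed = [29, 28, 49, 39, 28, 27]              # 200 rolls in total
stat, p_value = chisquare(observed)              # equal expected frequencies (200/6 each)
critical = chi2.ppf(1 - 0.10, df=len(observed) - 1)

print(round(stat, 3), round(p_value, 4), round(critical, 3))
# The claim of equally likely outcomes is rejected when stat > critical (or p < 0.10).
```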
#### Similar Solved Questions
##### (-1)" xzn-1 Use the first three terms in Maclaurin series of tan (x) = to approximate 2n + tan (2x Give your answer correct to four decimal places
(-1)" xzn-1 Use the first three terms in Maclaurin series of tan (x) = to approximate 2n + tan (2x Give your answer correct to four decimal places...
##### Ro-20 Ro=lO Ro=5 Ro=20.20.20.40.60.8Vaccine c overage atthe CSNE versus relative risk /, from Eq. 17, for Fig: Vancue values of 'Ro. Dashed horizontal lines demarcate the critical coverage Ievel pcrit that eliminates the disease from the population (Eq: 12). In the limit of very large 'Ro the plot of p versus approacnes step function with step al( = (Eq: 171.Ip 2 Pcrit then the system converges t0 the disease-[ree state (S,1) = (1 - P, 0), whereas if p < Pcrit, it converges to stabl
Ro-20 Ro=lO Ro=5 Ro=2 0.2 0.2 0.4 0.6 0.8 Vaccine c overage atthe CSNE versus relative risk /, from Eq. 17, for Fig: Vancue values of 'Ro. Dashed horizontal lines demarcate the critical coverage Ievel pcrit that eliminates the disease from the population (Eq: 12). In the limit of very large &#x...
##### FDo Homework Yash bk Google Chrome https / wwwmathxlcom/Student/PlayerHomeworkaspx?homeworkld-513411801&questMath 2415Homework: Chapter 13 HW Score: 0 of 2 pts 13.2.23Find an equation or inequality that describes the following object. A sphere with center (1,2,1) and radius V7The equation or inequality that describes the object isEnter your answer in the answer box and then click Check Answer: All parts showingType here Lbjeas 01:
F Do Homework Yash bk Google Chrome https / wwwmathxlcom/Student/PlayerHomeworkaspx?homeworkld-513411801&quest Math 2415 Homework: Chapter 13 HW Score: 0 of 2 pts 13.2.23 Find an equation or inequality that describes the following object. A sphere with center (1,2,1) and radius V7 The equation o...
##### Chapter 21, Problem 023 GOTiqure paiticles of cnarge 41 +36.60 +5.27 sha-values the magnitude of -he maxlmum madaltude ?are on eercstatic Forcedietance fron the ongin Fartcle third paric frcm the Othe particlescharge Q3 +55.20 J0 19 mcvej Jraqvai along tne Gtmum natmun7 hatare the (c) minimumficm(a} NumberUnt(b) NumbeUnits(c) Numberuni-s(d) NumberUnics
Chapter 21, Problem 023 GO Tiqure paiticles of cnarge 41 +36.60 +5.27 sha-values the magnitude of -he maxlmum madaltude ? are on eercstatic Force dietance fron the ongin Fartcle third paric frcm the Othe particles charge Q3 +55.20 J0 19 mcvej Jraqvai along tne Gtmum natmun7 hatare the (c) minimum fi...
##### How many of the following molecules contain centrab atom that does not follow the octet rule when the Lewis structure is drawn: PBr;; AsFs, (iaCl;, COz HzS?3
How many of the following molecules contain centrab atom that does not follow the octet rule when the Lewis structure is drawn: PBr;; AsFs, (iaCl;, COz HzS? 3...
##### Attime LHO, & particle is located at the point (4,6,6): I travels in & straight line to the point (5,8,4). has speed 8 at (4,6 6)tand constant acceleration 2k. Find an equalion for the position voctor r(t) of the particle at time t TThe equation for the position vector r(t) of the particle at time t is r(t) = (D#(D"(O* (Type axact answers, uslng radicals as needed }
Attime LHO, & particle is located at the point (4,6,6): I travels in & straight line to the point (5,8,4). has speed 8 at (4,6 6)tand constant acceleration 2k. Find an equalion for the position voctor r(t) of the particle at time t TThe equation for the position vector r(t) of the particle a...
##### What is the Glass transition? and When it is happened?
What is the Glass transition? and When it is happened?...
##### A farmer with 650 ft of 'fencing wants tO enclose a rectangular area and then divide it into four pens with fencing parallel to one side of the rectangle: What is the largest possible total area of the four pens?Draw diagram illustrating the general situation: Let x denote the length of each of [WO sides and three dividers. Let y denote the length of the other two sides.Write an expression for the total area A in terms of both x andyUse the given information to write an equation that relate
A farmer with 650 ft of 'fencing wants tO enclose a rectangular area and then divide it into four pens with fencing parallel to one side of the rectangle: What is the largest possible total area of the four pens? Draw diagram illustrating the general situation: Let x denote the length of each o...
##### An operation table for & group is shown below.List all cyclic subgroups of the group (6 points)Give the order of each element of the group: (4 'points)Is the group cyclic? Why or why not? points)
An operation table for & group is shown below. List all cyclic subgroups of the group (6 points) Give the order of each element of the group: (4 'points) Is the group cyclic? Why or why not? points)...
##### Find the exact value of each expression. $$\tan \left(\sin ^{-1} \frac{1}{3}\right)$$
Find the exact value of each expression. $$\tan \left(\sin ^{-1} \frac{1}{3}\right)$$...
##### Venus's circular velocity is $35.03 \mathrm{km} / \mathrm{s}$, and its orbital radius is $1.082 \times 10^{8} \mathrm{km} .$ Calculate the mass of the Sun.
Venus's circular velocity is $35.03 \mathrm{km} / \mathrm{s}$, and its orbital radius is $1.082 \times 10^{8} \mathrm{km} .$ Calculate the mass of the Sun....
##### Determine whether the following statements are true and give an explanation or counterexample. a. If the acceleration of an object remains constant, then its velocity is constant. b. If the acceleration of an object moving along a line is always 0 then its velocity is constant. c. It is impossible for the instantaneous velocity at all times $a \leq t \leq b$ to equal the average velocity over the interval $a \leq t \leq b$. d. A moving object can have negative acceleration and increasing speed.
Determine whether the following statements are true and give an explanation or counterexample. a. If the acceleration of an object remains constant, then its velocity is constant. b. If the acceleration of an object moving along a line is always 0 then its velocity is constant. c. It is impossible f...
##### Questicn polnt) music store has a discount bin full of CD's This bin contalns 17 classical, 41 pop: 19 jazz, and 20rock muslc CDs What Is the probabllity of selecting at random a pop musIc Or rock muslc CD?
Questicn polnt) music store has a discount bin full of CD's This bin contalns 17 classical, 41 pop: 19 jazz, and 20rock muslc CDs What Is the probabllity of selecting at random a pop musIc Or rock muslc CD?...
##### Find the Jacobian of the transformation $$x=5 u-v, \quad y=u+3 v$$
Find the Jacobian of the transformation $$x=5 u-v, \quad y=u+3 v$$...
##### (II) The summit of a mountain, 2450 $\mathrm{m}$ above base camp, is measured on a map to be 4580 $\mathrm{m}$ horizontally from the camp in a direction $32.4^{\circ}$ west of north. What are the components of the displacement vector from camp to summit? What is its magnitude? Choose the $x$ axis cast, $y$ axis north, and $z$ axis up.
(II) The summit of a mountain, 2450 $\mathrm{m}$ above base camp, is measured on a map to be 4580 $\mathrm{m}$ horizontally from the camp in a direction $32.4^{\circ}$ west of north. What are the components of the displacement vector from camp to summit? What is its magnitude? Choose the $x$ axis c...
##### A.The inside diameters of the larger portions of the horizontalpipe depicted in the figure below are 2.48 cm. Water flows to theright at a rate of 1.51 ✕ 10−4 m3/s. Determine the inside diameterof the constriction.b. Water is pumped through a pipe ofdiameter 14.5 cm from the Colorado River up to GrandCanyon Village, on the rim of the canyon. The river is at 564m elevation and the village is at 2094 m.(1). If 3900 m3 are pumped per day,what is the speed of the water in the pipe? (2)What addit
a.The inside diameters of the larger portions of the horizontal pipe depicted in the figure below are 2.48 cm. Water flows to the right at a rate of 1.51 ✕ 10−4 m3/s. Determine the inside diameter of the constriction. b. Water is pumped through a pipe of diameter 14.5 cm from the Colorad... |
# Anomalous diffusion in fast cellular flows
-
Gautam Iyer, Carnegie Mellon
Fine Hall 322
In '53, GI Taylor estimated the effective dispersion rate of a solute diffusing in the presence of a laminar flow in a pipe. It turns out that the length scales involved in typical pipes are too short for Taylor's result to apply. The goal of my talk will be to establish a preliminary estimate for the effective dispersion rate in a model problem at time scales much shorter than those required in Taylor's result. Precisely, I will study a diffusive tracer in the presence of a fast cellular flow. The main result (joint with A. Novikov) shows that the variance at intermediate time scales is of order $\sqrt{t}$. This was conjectured by W. Young, and is consistent with an anomalous diffusive behaviour. |
# Formulation of a damage internal state variable model for amorphous glassy polymers
D.K. Francis, J.L. Bouvard, Y. Hammi, M.F. Horstemeyer
# Abstract
The following article proposes a damage model that is implemented into a glassy, amorphous thermoplastic thermomechanical inelastic internal state variable framework. Internal state variable evolution equations are defined through thermodynamics, kinematics, and kinetics for isotropic damage arising from two different inclusion types: pores and particles. The damage arising from the particles and crazing is accounted for by three processes of damage: nucleation, growth, and coalescence. Nucleation is defined as the number density of voids/crazes with an associated internal state variable rate equation and is a function of stress state, molecular weight, fracture toughness, particle size, particle volume fraction, temperature, and strain rate. The damage growth is based upon a single void growing as an internal state variable rate equation that is a function of stress state, rate sensitivity, and strain rate. The coalescence internal state variable rate equation is an interactive term between voids and crazes and is a function of the nearest neighbor distance of voids/crazes and size of voids/crazes, temperature, and strain rate. The damage arising from the pre-existing voids employs the Cocks–Ashby void growth rule. The total damage progression is a summation of the damage volume fraction arising from particles and pores and subsequent crazing. The modeling results compare well to experimental findings garnered from the literature. Finally, this formulation can be readily implemented into a finite element analysis.
# Constitutive Model
Decomposition of the deformation gradient (deviatoric plastic, volumetric plastic/damage, thermal, and elastic):
$\boldsymbol{F} = \boldsymbol{F}_{\mathrm{e}} \boldsymbol{F}_{\mathrm{t}} \boldsymbol{F}_{\mathrm{d}} \boldsymbol{F}_{\mathrm{p}}$
Cauchy Stress:
$\boldsymbol{\sigma}=J_{\mathrm{e}}^{-1}\boldsymbol{F}_{\mathrm{e}} \bar{\boldsymbol{S}} \boldsymbol{F}^{\mathrm{T}}_{\mathrm{e}}$
Second Piola-Kirchhoff Stress (intermediate state):
$\bar{\boldsymbol{S}} = \left[ 2 \mu \left( \theta \right) \bar{\boldsymbol{E}}_{\mathrm{e}} + \lambda \left( \theta \right) \ \mathrm{tr} \left( \bar{\boldsymbol{E}}_{\mathrm{e}} \right) \textbf{1} \right] \frac{\left( 1 - \phi \right)^{2/3}}{F^4_{\mathrm{t}}}$
Shear and bulk moduli: (Lamé parameters)
$\mu \left( \theta \right) = \frac{E \left( \theta \right)}{2 \left( 1 + \nu_{p} \right)}, \quad K \left( \theta \right) = \frac{2 \mu \left( \theta \right) \left( 1 + \nu_{p} \right)}{3 \left( 1 - 2 \nu_{p} \right)}$
Temperature dependent Young's Modulus:
$E \left( \theta \right) = E_{\mathrm{ref}} + E_{1} \left( \theta - \theta_{\mathrm{ref}} \right)$
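As a small illustration (a sketch only; E_ref, E_1, theta_ref, and the Poisson ratio nu_p are material constants to be calibrated, and the values below are placeholders), the temperature-dependent elastic constants can be evaluated directly from the relations above:

```python
def youngs_modulus(theta, E_ref, E1, theta_ref):
    """E(theta) = E_ref + E1 * (theta - theta_ref)."""
    return E_ref + E1 * (theta - theta_ref)

def shear_modulus(theta, E_ref, E1, theta_ref, nu_p):
    """mu(theta) = E(theta) / (2 * (1 + nu_p))."""
    return youngs_modulus(theta, E_ref, E1, theta_ref) / (2.0 * (1.0 + nu_p))

def bulk_modulus(theta, E_ref, E1, theta_ref, nu_p):
    """K(theta) = 2 * mu(theta) * (1 + nu_p) / (3 * (1 - 2 * nu_p))."""
    mu = shear_modulus(theta, E_ref, E1, theta_ref, nu_p)
    return 2.0 * mu * (1.0 + nu_p) / (3.0 * (1.0 - 2.0 * nu_p))

# Placeholder values for illustration only (not calibrated to any material).
print(shear_modulus(theta=300.0, E_ref=2.0e9, E1=-5.0e6, theta_ref=296.0, nu_p=0.35))
```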
Assuming isotropic damage, the three stress-like thermodynamic conjugates to the ISVs become:
$\bar{\kappa}_1 = 2 C_{\bar{\kappa}_1} \mu \left( \theta \right) \bar{\xi}_1 \frac{\left( 1 - \phi \right)^{2/3}}{F^4_{\mathrm{t}}}$
$\bar{\kappa}_2 = 2 C_{\bar{\kappa}_2} \mu \left( \theta \right) \bar{\xi}_2 \frac{\left( 1 - \phi \right)^{2/3}}{F^4_{\mathrm{t}}}$
$\bar{\boldsymbol{b}}= 2 C_{\bar{\boldsymbol{b}}} \mu_{\mathrm{R}} \left( \theta \right) \bar{\boldsymbol{\alpha}} \frac{\left( 1 - \phi \right)^{2/3}}{F^4_{\mathrm{t}}}$
The evolution of ISV $\bar{\xi}_1$ (entanglement density):
$\dot{\bar{\xi}}_1 = H_{1} \left( 1 - \frac{\bar{\xi}_1}{\bar{\xi}^*} \right) \dot{\bar{\gamma}}_{\mathrm{p}} , \quad \dot{\bar{\xi}}^* \left( \theta \right) = \left( \bar{\xi}^*_{\mathrm{sat}}\left( \theta \right) - g_0 \left( \theta \right) \bar{\xi}^* \right) \dot{\bar{\gamma}}_{\mathrm{p}}$
$\bar{\xi}_1 \left( \boldsymbol{X},0 \right) = 0 , \quad \bar{\xi}^* \left( \boldsymbol{X},0 \right) = \bar{\xi}_0^* \left( \theta \right)$
$\bar{\xi}^*_0 \left( \theta \right) = C_{3} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{4}$
$\bar{\xi}^*_{\mathrm{sat}} \left( \theta \right) = C_{5} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{6}$
$g_0 \left( \theta \right) = C_{7} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{8}$
The evolution of ISV $\bar{\xi}_2$ (large-strain chain alignment/coiling):
$\dot{\bar{\xi}}_2 = H_{2} \left[ \left( \bar{\lambda}_p - 1 \right) \left( 1 - \frac{\bar{\xi}_2}{\bar{\xi}_{\mathrm{2sat}}\left( \theta \right)} \right) \dot{\bar{\gamma}}_{\mathrm{p}} -R_{\mathrm{s}} \left( \theta \right) \right]$
$\bar{\lambda}_p = \sqrt{ \frac{1}{3} \mathrm{tr} \left( \bar{\boldsymbol{B}}_p \right) } , \quad \bar{\boldsymbol{B}}_p = \boldsymbol{F}_p \boldsymbol{F}_p^{\mathrm{T}}$
$\bar{\xi}_{\mathrm{2sat}} \left( \theta \right) = C_{9} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{10}$
$R_{\mathrm{s}} \left( \theta \right) = C_{11} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{12}$
Viscous shear strain rate:
$\dot{\bar{\gamma}}_{\mathrm{p}} = \dot{\bar{\gamma}}_{\mathrm{0p}} \exp \left( - \frac{\Delta H_{\beta}}{k_{\mathrm{B}} \theta } \right) \sinh ^n \left( \frac{\bar{\tau}_{\mathrm{eq}} V }{ 2 k_{\mathrm{B}} \theta} \right)$
$\quad \bar{\tau}_{\mathrm{eq}} = \frac{\left\| \bar{\boldsymbol{S}}^{\prime} - \bar{\boldsymbol{b}}^{\prime} \right\|}{\sqrt{2}}- \left( Y \left( \theta \right) + \bar{\kappa}_{1} + \bar{\kappa}_{2} \right)$
$Y \left( \theta \right) = C_{1} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{2}$
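A minimal sketch evaluating the sinh-type flow rule above for a given deviatoric overstress; the pre-exponential rate, activation energy, activation volume, rate sensitivity, and $C_1$, $C_2$ are placeholders, not calibrated values for any specific polymer:

```python
import math

# Sketch of the sinh-type viscous flow rule. Parameter values are illustrative only.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def plastic_shear_rate(tau_dev_norm, kappa1, kappa2, theta,
                       gamma0=1.0e6, dH_beta=1.0e-19, V=5.0e-28, n=2.0,
                       C1=-0.5e6, C2=100.0e6, theta_ref=298.0):
    """gamma_dot_p = gamma0 exp(-dH_beta/(k_B theta)) sinh^n(tau_eq V / (2 k_B theta))."""
    Y = C1 * (theta - theta_ref) + C2                 # rate-independent threshold
    tau_eq = tau_dev_norm / math.sqrt(2.0) - (Y + kappa1 + kappa2)
    if tau_eq <= 0.0:
        return 0.0                                    # no viscous flow below threshold
    arg = tau_eq * V / (2.0 * K_B * theta)
    return gamma0 * math.exp(-dH_beta / (K_B * theta)) * math.sinh(arg) ** n

print(plastic_shear_rate(tau_dev_norm=200.0e6, kappa1=5.0e6, kappa2=2.0e6, theta=298.0))
```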
The evolution of ISV $\bar{\boldsymbol{\alpha}}$ (large-strain chain hardening due to chain stretch):
$\dot{\bar{\boldsymbol{\alpha}}} = \left( C_{\alpha_{1}} \left( \theta \right) + C_{\alpha_{2}} \left( \theta \right) \left\| \bar{\boldsymbol{\alpha}} \right\| ^{2} \right) \bar{\boldsymbol{D}}_{\mathrm{p}} - r_{\mathrm{s}} \left( \theta \right) \sqrt{\frac{2}{3}} \left\| \bar{\boldsymbol{\alpha}} \right\| \bar{\boldsymbol{\alpha}} , \quad \bar{\boldsymbol{\alpha}} \left( \boldsymbol{X},0 \right) = 0$
$C_{\alpha_{1}} \left( \theta \right) = C_{13} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{14}$
$C_{\alpha_{2}} \left( \theta \right) = C_{15} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{16}$
$r_{\mathrm{s}} \left( \theta \right) = C_{17} \left( \theta - \theta_{\mathrm{ref}} \right) + C_{18}$
Plastic flow rule
$\dot{\boldsymbol{F}}_{\mathrm{p}} = \bar{\boldsymbol{L}}_{\mathrm{p}} \boldsymbol{F}_{\mathrm{p}}, \quad \bar{\boldsymbol{L}}_{\mathrm{p}} = \bar{\boldsymbol{D}}_{\mathrm{p}} + \bar{\boldsymbol{W}}_{\mathrm{p}}$
$\bar{\boldsymbol{D}}_{\mathrm{p}} = \frac{1}{\sqrt{2}} \dot{\bar{\gamma}}_{\mathrm{p}} \bar{\boldsymbol{N}}_{\mathrm{p}}, \quad \bar{\boldsymbol{N}}_{\mathrm{p}} = \frac{\bar{\boldsymbol{S}}^{\prime} - \bar{\boldsymbol{b}}^{\prime}}{ \left\| \bar{\boldsymbol{S}}^{\prime} - \bar{\boldsymbol{b}}^{\prime} \right\| }$
Total damage evolution
$\dot{\phi}_{\mathrm{total}} = \left( \dot{\phi}_{\mathrm{particles}} + \dot{\phi}_{\mathrm{pores}} + \dot{\phi}_{\mathrm{crazing}} \right) c + \left( \phi_{\mathrm{particles}} + \phi_{\mathrm{pores}} + \phi_{\mathrm{crazing}} \right) \dot{c}$
Particle damage evolution
$\dot{\phi}_{\mathrm{particles}} = \dot{\eta}_{\mathrm{particle}} \nu + \eta_{\mathrm{particle}} \dot{\nu}$
Particle damage nucleation evolution
$\dot{\eta}_{\mathrm{particle}} = \eta_{\mathrm{particle}} \frac{d^{1/2} \left\| \boldsymbol{D}_{\mathrm{p}} \right\|}{K_{\mathrm{Ic}}f^{1/3}} \exp \left( - \frac{C_{\eta \mathrm{p} \theta}}{\theta} \right) \times \left\{ a_{\eta} \left[ \frac{4}{27} - \frac{J_3^2}{J_2^3} \right] + b_{\eta} \frac{J_3}{J_2^{3/2}} + c_{\eta} \left\| \frac{I_1}{\sqrt{J_2}} \right\| \right\}$
Particle growth around a void [1]
$\dot{\nu}= \frac{3}{2} \nu \left[ \frac{I_{1}}{m\sqrt{12J_{2}}} + \frac{\left(m -1 \right) \left(m + 0.4319 \right)}{m^2} \right]^{m} \left\| \boldsymbol{D}_p \right\|$
Pore growth evolution [2]
$\dot{\phi}_{\mathrm{pores}} = \chi \left[ \frac{1}{ \left( 1 - \phi_{\mathrm{pores}} \right)^{m}} - \left( 1- \phi_{\mathrm{pores}} \right) \right] \left \| \boldsymbol{D}_{\mathrm{p}} \right \|$
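A minimal forward-Euler sketch of the Cocks–Ashby pore-growth equation above; $\chi$, $m$, the plastic rate-of-deformation norm, and the initial porosity are placeholder values:

```python
# Forward-Euler sketch of the Cocks-Ashby pore-growth equation.
# chi, m, the plastic rate-of-deformation norm, and phi_0 are placeholders.

def grow_pores(phi0=1.0e-3, chi=0.5, m=3.0, Dp_norm=1.0e-3,
               t_end=100.0, dt=0.01):
    phi = phi0
    t = 0.0
    while t < t_end and phi < 0.99:
        phi_dot = chi * (1.0 / (1.0 - phi) ** m - (1.0 - phi)) * Dp_norm
        phi += dt * phi_dot
        t += dt
    return phi

print(grow_pores())
```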
Crazing evolution
$\dot{\phi}_{\mathrm{craze}} = \dot{\eta}_{\mathrm{craze}} \nu + \eta_{\mathrm{craze}} \dot{\nu}$
Craze nucleation
$\eta_{\mathrm{craze}} = C_{\mathrm{coeffc}} \exp \left[ \frac{M_{\mathrm{w}}}{ K_{\mathrm{Ic}}} \left\| \bar{\boldsymbol{E}}_{\mathrm{p}} \right\| \exp \left( \frac{C_{\eta \mathrm{c} \theta}}{\theta} \right) \times \left\{ d_{\eta} \left[ \frac{4}{27} - \frac{J_3^2}{J_2^3} \right] + e_{\eta} \frac{J_3}{J_2^{3/2}} + f_{\eta}\frac{ I_1 + \left\| I_1 \right\| }{ 2\sqrt{J_2}}\right\} \right]$
Coalescence
$\dot{c} = \underbrace{C_{\mathrm{coal_1}} \left( \dot{\eta}_{\mathrm{particle}} \nu + \dot{\nu} \eta_{\mathrm{particle}} \right)}_{\mathrm{Crazing\;from\;particles}} + \underbrace{C_{\mathrm{coal_2}} \left( \dot{\eta}_{\mathrm{craze}} \nu + \dot{\nu} \eta_{\mathrm{craze}} \right)}_{\mathrm{General\;crazing\;in\;matrix}} + \underbrace{C_{\mathrm{coal_3}} \left( \frac{4 d_{0}}{d_{\mathrm{NN}}} \right)^{z} \exp \left( C_{c \theta} \theta \right) \left\| \boldsymbol{D}_{\mathrm{p}} \right\|}_{\mathrm{Impingement}}$
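Putting the pieces together, the total damage rate is simply the product rule applied to the coalescence factor $c$ times the sum of the three damage volume fractions; a short sketch with purely illustrative numbers:

```python
# Sketch: total damage rate from the product rule
# phi_total_dot = c * sum(phi_dots) + c_dot * sum(phi_parts).

def total_damage_rate(phi_parts, phi_dots, c, c_dot):
    """phi_parts, phi_dots: 3-tuples for (particles, pores, crazing)."""
    return sum(phi_dots) * c + sum(phi_parts) * c_dot

# Illustrative numbers only.
print(total_damage_rate(phi_parts=(0.01, 0.002, 0.005),
                        phi_dots=(1e-4, 2e-5, 5e-5),
                        c=1.1, c_dot=1e-3))
```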
Temperature Evolution
\begin{align} \dot{\theta} = &\frac{1}{\bar{C}_V + 3f_{\theta} \bar{e}_V - f_{\theta} \mathrm{tr} \left( \bar{\boldsymbol{M}} \right)}\\ &\times \left[ \begin{align} &\bar{\boldsymbol{G}}_e \colon \dot{\bar{\boldsymbol{E}}}_{\mathrm{e}} + \bar{G}_1 \dot{\bar{\xi}}_1 + \bar{G}_2 \dot{\bar{\xi}}_2 + \bar{\boldsymbol{G}}_{\alpha} \colon \dot{\bar{\boldsymbol{\alpha}}} + \bar{\boldsymbol{M}}\colon \bar{\boldsymbol{L}}_{\mathrm{p}} - \bar{\boldsymbol{\nabla}}\cdot \bar{\boldsymbol{Q}} + \bar{R}_{V} \\ &- \frac{ \dot{\phi}}{3 \left( 1 - \phi \right)} \left[ \begin{align} &\left( \bar{\boldsymbol{G}}_e - 3 \bar{\boldsymbol{S}} \right) \colon \bar{\boldsymbol{E}}_{\mathrm{e}} +\bar{G}_1 \bar{\xi}_1 + \bar{G}_2 \bar{\xi}_2 \\ &+ \bar{\boldsymbol{G}}_{\alpha} \colon \dot{\bar{\boldsymbol{\alpha}}} - \mathrm{tr} \left( \bar{\boldsymbol{S}} \right) + 3 \bar{e}_{V} \end{align} \right] \end{align} \right] \end{align}
and when assuming isotropic damage, the G terms simplify to
\begin{align} \bar{\boldsymbol{G}}_e =& \left( \begin{align} &2 \frac{\partial \mu \left( \theta \right) }{\partial \theta} \bar{\boldsymbol{E}}_{\mathrm{e}} + \frac{\partial \lambda \left( \theta \right) }{\partial \theta} \mathrm{tr} \left( \bar{\boldsymbol{E}}_{\mathrm{e}} \right) \bar{\mathbf{1}} \\ &- f_{\theta} \left( 2 \mu \left( \theta \right) + \lambda \left( \theta \right) \right) \bar{\mathbf{1}} \end{align} \right) \theta \left( 1 - \phi \right)^{2/3} - f_{\theta} \theta \bar{\boldsymbol{S}} \\ \bar{G}_1 =& 2 C_{\bar{\kappa}_1} \bar{\xi}_1 \left( \theta \frac{\partial \mu \left( \theta \right) }{\partial \theta} + \mu \left( \theta \right) \left( 3 f_{\theta} \theta - 1 \right) \right) \left( 1- \phi \right)^{2/3} \\ \bar{G}_2 =& 2 C_{\bar{\kappa}_2} \bar{\xi}_2 \left( \theta \frac{\partial \mu \left( \theta \right) }{\partial \theta} + \mu \left( \theta \right) \left( 3 f_{\theta} \theta - 1 \right) \right) \left( 1- \phi \right)^{2/3} \\ \bar{\boldsymbol{G}}_{\alpha} =& 6 C_{\bar{\boldsymbol{b}}} f_{\theta} \theta \mu_{\mathrm{R}} \left( \theta \right) \bar{\boldsymbol{\alpha}} \left( 1- \phi \right)^{2/3} \end{align}
# References
1. B. Budiansky, J. Hutchinson, and S. Slutsky. Void growth and collapse in viscous solids. In H. Hopkins and M. Sewell, editors, Mechanics of Solids, pages 13–45. Pergamon Press, Oxford, 1982.
2. A. C. F. Cocks and M. F. Ashby. Intergranular fracture during power-law creep under multiaxial stresses. Metal Science, 14(8–9):395–402, 1980.
# Bicycle and motorcycle dynamics
Bicycle and motorcycle dynamics is the science of the motion of bicycles and motorcycles and their components, due to the forces acting on them. Dynamics is a branch of classical mechanics, which in turn is a branch of physics. Bike motions of interest include balancing, steering, braking, accelerating, suspension activation, and vibration. The study of these motions began in the late 1800s and continues today.[1][2]
Bicycles and motorcycles are both single-track vehicles and so their motions have many fundamental attributes in common and are fundamentally different from other wheeled vehicles such as dicycles, tricycles, and quadricycles. As with unicycles, bikes lack lateral stability when stationary, and under most circumstances can only remain upright when moving forward. Experimentation and mathematical analysis have shown that a bike stays upright when it is steered to keep its center of mass over its wheels. This steering is usually supplied by a rider, or in certain circumstances, by the bike itself. Several factors, including geometry, mass distribution, and gyroscopic effect all contribute to varying degrees to this self-stability, but long-standing hypotheses and claims that gyroscopic effect is the main stabilizing force have been discredited.[3][4]
While remaining upright may be the primary goal of beginning riders, a bike must lean in order to maintain balance in a turn: the higher the speed or smaller the turn radius, the more lean is required. This balances the roll torque about the wheel contact patches generated by centrifugal force due to the turn with that of the gravitational force. This lean is usually produced by a momentary steering in the opposite direction, a maneuver often referred to as countersteering. Unlike other wheeled vehicles, the primary control input on bikes is steering torque, not position.[5]
Although longitudinally stable when stationary, bikes often have a high enough center of mass and a short enough wheelbase to lift a wheel off the ground under sufficient acceleration or deceleration. When braking, depending on the location of the combined center of mass of the bike and rider with respect to the point where the front wheel contacts the ground, bikes can either skid the front wheel or flip the bike and rider over the front wheel. A similar situation is possible while accelerating, but with respect to the rear wheel.[6]
## History
The history of the study of bike dynamics is nearly as old as the bicycle itself. It includes contributions from famous scientists such as Rankine, Appell, and Whipple.[1] In the early 1800s Karl von Drais himself, credited with inventing the two-wheeled vehicle variously called the laufmaschine, velocipede, draisine, and dandy horse, showed that a rider could balance his device by steering the front wheel.[1] By the end of the 1800s, Emmanuel Carvallo and Francis Whipple showed with rigid-body dynamics that some safety bicycles could actually balance themselves if moving at the right speed.[1] It is not clear to whom should go the credit for tilting the steering axis from the vertical which helps make this possible.[7]
In 1970, David Jones published an article in Physics Today showing that gyroscopic effects are not necessary to balance a bicycle.[4] Since 1971, when he named the weave and capsize modes, Robin Sharp has written about the mathematical behavior of motorcycles and bicycles, and his work has continued to the present day with David Limebeer. Both men were at Imperial College, London.[1][8][9] In 2007, Meijaard, Papadopoulos, Ruina, and Schwab published the canonical linearized equations of motion, in the Proceedings of the Royal Society A, along with verification by two different methods.[1]
## Forces
If the bike and rider are considered to be a single system, the forces that act on that system and its components can be roughly divided into two groups: internal and external. The external forces are due to gravity, inertia, contact with the ground, and contact with the atmosphere. The internal forces are caused by the rider and by interaction between components.
### External forces
As with all masses, gravity pulls the rider and all the bike components toward the earth. At each tire contact patch there are ground reaction forces with both horizontal and vertical components. The vertical components mostly counteract the force of gravity, but also vary with braking and accelerating. For details, see the section on longitudinal stability below. The horizontal components, due to friction between the wheels and the ground, including rolling resistance, are in response to propulsive forces, braking forces, and turning forces. Aerodynamic forces due to the atmosphere are mostly in the form of drag, but can also be from crosswinds. At normal bicycling speeds on level ground, aerodynamic drag is the largest force resisting forward motion.[10]
Turning forces are generated during maneuvers for balancing in addition to just changing direction of travel. These may be interpreted as centrifugal forces in the accelerating reference frame of the bike and rider; or simply as inertia in a stationary, inertial reference frame and not forces at all. Gyroscopic forces acting on rotating parts such as wheels, engine, transmission, etc., are also due to the inertia of those rotating parts. They are discussed further in the section on gyroscopic effects below.
### Internal forces
Internal forces are mostly caused by the rider or by friction. The rider can apply torques between the steering mechanism (front fork, handlebars, front wheel, etc.) and rear frame, and between the rider and the rear frame. Friction exists between any parts that move against each other: in the drive train, between the steering mechanism and the rear frame, etc. Many bikes have front and rear suspensions, and some motorcycles have a steering damper to dissipate undesirable kinetic energy.[9] On bikes with rear suspensions, feedback between the drive train and the suspension is an issue designers attempt to handle with various linkage configurations and dampers.[11]
## Motions
Motions of a bike can be roughly grouped into those out of the central plane of symmetry: lateral; and those in the central plane of symmetry: longitudinal or vertical. Lateral motions include balancing, leaning, steering, and turning. Motions in the central plane of symmetry include rolling forward, of course, but also stoppies, wheelies, brake diving, and most suspension activation. Motions in these two groups are linearly decoupled, that is they do not interact with each other to the first order.[1] An uncontrolled bike is laterally unstable when stationary and can be laterally self-stable when moving under the right conditions or when controlled by a rider. Conversely, a bike is longitudinally stable when stationary and can be longitudinally unstable when undergoing sufficient acceleration or deceleration.
## Lateral dynamics
Of the two, lateral dynamics has proven to be the more complicated, requiring three-dimensional, multibody dynamic analysis with at least two generalized coordinates to analyze. At the minimum, two coupled, second-order differential equations are required to capture the principal motions.[1] Exact solutions are not possible, and numerical methods must be used instead.[1] Competing theories of how bikes balance can still be found in print and online. On the other hand, as shown in later sections, much longitudinal dynamic analysis can be accomplished simply with planar kinetics and just one coordinate.
### Balance
A bike remains upright when it is steered so that the ground reaction forces exactly balance all the other internal and external forces it experiences, such as gravitational if leaning, inertial or centrifugal if in a turn, gyroscopic if being steered, and aerodynamic if in a crosswind.[10] Steering may be supplied by a rider or, under certain circumstances, by the bike itself. This self-stability is generated by a combination of several effects that depend on the geometry, mass distribution, and forward speed of the bike. Tires, suspension, steering damping, and frame flex can also influence it, especially in motorcycles.
Even when staying relatively motionless, a rider can balance a bike by the same principle. While performing a track stand, the rider can keep the line between the two contact patches under the combined center of mass by steering the front wheel to one side or the other and then moving forward and backward slightly to move the front contact patch from side to side as necessary. Forward motion can be generated simply by pedaling. Backwards motion can be generated the same way on a fixed-gear bicycle. Otherwise, the rider can take advantage of an opportune slope of the pavement or lurch the upper body backwards while the brakes are momentarily engaged.[12]
If the steering of a bike is locked, it becomes virtually impossible to balance while riding. On the other hand, if the gyroscopic effect of rotating bike wheels is cancelled by adding counter-rotating wheels, it is still easy to balance while riding.[3][4]
#### Forward speed
The rider applies torque to the handlebars in order to turn the front wheel and so to control lean and maintain balance. At high speeds, small steering angles quickly move the ground contact points laterally; at low speeds, larger steering angles are required to achieve the same results in the same amount of time. Because of this, it is usually easier to maintain balance at high speeds.[13]
#### Center of mass location
The farther forward (closer to front wheel) the center of mass of the combined bike and rider, the less the front wheel has to move laterally in order to maintain balance. Conversely, the further back (closer to the rear wheel) the center of mass is located, the more front wheel lateral movement or bike forward motion will be required to regain balance. This can be noticeable on long-wheelbase recumbents and choppers. It can also be an issue for touring bikes with a heavy load of gear over or even behind the rear wheel.[14]
A bike is also an example of an inverted pendulum. Just as a broomstick is easier to balance than a pencil, a tall bike (with a high center of mass) can be easier to balance when ridden than a short one because its lean rate will be slower.[15] However, a rider can have the opposite impression of a bike when it is stationary. A top-heavy bike can require more effort to keep upright, when stopped in traffic for example, than a bike which is just as tall but with a lower center of mass. This is an example of a vertical second-class lever. A small force at the end of the lever, the seat or handlebars at the top of the bike, more easily moves a large mass if the mass is closer to the fulcrum, where the tires touch the ground. This is why touring cyclists are advised to carry loads low on a bike, and panniers hang down on either side of front and rear racks.[16]
#### Trail
A factor that influences how easy or difficult a bike will be to ride is trail, the distance that the front wheel ground contact point trails behind the steering axis ground contact point. The steering axis is the axis about which the entire steering mechanism (fork, handlebars, front wheel, etc.) pivots. In traditional bike designs, with a steering axis tilted back from the vertical, trail causes the front wheel to steer into the direction of a lean, independent of forward speed.[10] This can be seen by pushing a stationary bike to one side. The front wheel will usually also steer to that side. In a lean, gravity provides this force.
Trail is a function of head angle, fork offset or rake, and wheel size. Their relationship can be described by this formula:[17]
$\text{Trail} = \frac{(R_w \cos(A_h) - O_f)}{\sin(A_h)}$
where $R_w$ is wheel radius, $A_h$ is the head angle measured clockwise from the horizontal, and $O_f$ is the fork offset or rake. Trail can be increased by increasing the wheel size, decreasing or slackening the head angle, or decreasing the fork rake.
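A small sketch evaluating the trail formula above; the wheel radius, head angles, and fork offset below are illustrative values, not measurements of any particular bike:

```python
import math

# Sketch of the trail formula above. Wheel radius, head angle, and fork offset
# are illustrative, not measurements of a real bike.

def trail(wheel_radius_mm, head_angle_deg, fork_offset_mm):
    """Trail = (R_w cos(A_h) - O_f) / sin(A_h), with A_h measured from horizontal."""
    a = math.radians(head_angle_deg)
    return (wheel_radius_mm * math.cos(a) - fork_offset_mm) / math.sin(a)

# A slacker head angle (smaller A_h) or a smaller offset increases trail.
for head_angle in (72.0, 73.0, 74.0):
    print(head_angle, round(trail(335.0, head_angle, 45.0), 1))
```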
The more trail a bike has, the more stable it feels. Bikes with negative trail (where the contact patch is actually in front of where the steering axis intersects the ground), while still ridable, feel very unstable. Bikes with too much trail feel difficult to steer. Normally, road racing bicycles have more trail than mountain bikes or touring bikes. In the case of mountain bikes, less trail allows more accurate path selection off-road, and also allows the rider to recover from obstacles on the trail which might knock the front wheel off course. Touring bikes are built with small trail to allow the rider to control a bike weighed down with baggage. As a consequence, an unloaded touring bike can feel unstable. In bicycles, fork rake, often a curve in the fork blades forward of the steering axis, is used to diminish trail.[18] In motorcycles, rake refers to the head angle instead, and offset created by the triple tree is used to diminish trail.[19]
A small survey by Whitt and Wilson[10] found:
• touring bicycles with head angles between 72° and 73° and trail between 43.0 mm and 60.0 mm
• racing bicycles with head angles between 73° and 74° and trail between 28.0 mm and 45.0 mm
• track bicycles with head angles of 75° and trail between 23.5 mm and 37.0 mm.
However, these ranges are not hard and fast. For example, LeMond Racing Cycles offers[20] two models, both with forks that have 45 mm of offset or rake and the same size wheels:
• a 2006 Tete de Course, designed for road racing, with a head angle that varies from 71.25° to 74.00°, depending on frame size, and thus trail that varies from 69 mm to 51.5 mm.
• a 2007 Filmore, designed for the track, with a head angle that varies from 72.50° to 74.00°, depending on frame size, and thus trail that varies from 61 mm to 51.5 mm.
The amount of trail a particular bike has may vary with time for several reasons. On bikes with front suspension, especially telescopic forks, compressing the front suspension, due to heavy braking for example, can steepen the steering axis angle and reduce trail. Trail also varies with lean angle, and steering angle, usually decreasing from a maximum when the bike is straight upright and steered straight ahead.[21] Finally, even the profile of the front tire can influence how trail varies as the bike is leaned and steered.
A measurement similar to trail, called either mechanical trail, normal trail, or true trail[22], is the perpendicular distance from the steering axis to the centroid of the front wheel contact patch.
#### Steering mechanism mass distribution
Another factor that can also contribute to the self-stability of traditional bike designs is the distribution of mass in the steering mechanism, which includes the front wheel, the fork, and the handlebar. If the center of mass for the steering mechanism is in front of the steering axis, then the pull of gravity will also cause the front wheel to steer in the direction of a lean. This can be seen by leaning a stationary bike to one side. The front wheel will usually also steer to that side independent of any interaction with the ground.[23] Additional parameters, such as the fore-to-aft position of the center of mass and the elevation of the center of mass also contribute to the dynamic behavior of a bike.[10][23]
#### Gyroscopic effects
The role of the gyroscopic effect in most bike designs is to help steer the front wheel into the direction of a lean. This phenomenon is called precession and the rate at which an object precesses is inversely proportional to its rate of spin. The slower a front wheel spins, the faster it will precess when the bike leans, and vice-versa.[24] The rear wheel is prevented from precessing as the front wheel does by friction of the tires on the ground, and so continues to lean as though it were not spinning at all. Hence gyroscopic forces do not provide any resistance to tipping.[25]
At low forward speeds, the precession of the front wheel is too quick, contributing to an uncontrolled bike’s tendency to oversteer, start to lean the other way and eventually oscillate and fall over. At high forward speeds, the precession is usually too slow, contributing to an uncontrolled bike’s tendency to understeer and eventually fall over without ever having reached the upright position.[7] This instability is very slow, on the order of seconds, and is easy for most riders to counteract. Thus a fast bike may feel stable even though it is actually not self-stable and would fall over if it were uncontrolled. A bicycle wheel with an internal flywheel for enhanced gyroscopic effect is under development as a commercial product, the Gyrobike, for making it easier to learn to ride bicycles.
Another contribution of gyroscopic effects is a roll moment generated by the front wheel during countersteering. For example, steering left causes a moment to the right. The moment is small compared to the moment generated by the out-tracking front wheel, but begins as soon as the rider applies torque to the handlebars and so can be helpful in motorcycle racing.[6] For more detail, see the countersteering article.
#### Self-stability
Between the two unstable regimes mentioned in the previous section, and influenced by all the factors described above that contribute to balance (trail, mass distribution, gyroscopic effects, etc.), there may be a range of forward speeds for a given bike design at which these effects steer an uncontrolled bike upright.[1]
However, even without self-stability a bike may be ridden by steering it to keep it over its wheels.[4] Note that the effects mentioned above that would combine to produce self-stability may be overwhelmed by additional factors such as headset friction and stiff control cables.[10]
### Turning
In order to turn a bike, that is, change its direction of forward travel, the front wheel is turned approximately in the desired direction, as with any front-wheel steered vehicle. Friction between the wheels and the ground then generates the centripetal acceleration necessary to alter the course from straight ahead as a combination of cornering force and camber thrust. The radius of the turn of an upright (not leaning) bike can be roughly approximated, for small steering angles, by:
$r = \frac{w}{\delta \cos \left (\phi \right )}$
where $r \,$ is the approximate radius, $w \,$ is the wheelbase, $\delta \,$ is the steer angle, and $\phi \,$ is the caster angle of the steering axis.[6]
#### Leaning
However, unlike other wheeled vehicles, bikes must also lean during a turn to balance the relevant forces: gravitational, inertial, frictional, and ground support. The angle of lean, $\theta \,$, can easily be calculated using the laws of circular motion:
$\theta = \arctan \left (\frac{v^2}{gr}\right )$
where $v \,$ is the forward speed, $r \,$ is the radius of the turn and $g \,$ is the acceleration of gravity.[24] This is in the idealized case. A slight increase in the lean angle may be required on motorcycles to compensate for the width of modern tires at the same forward speed and turn radius.[21]
For example, a bike in a 10 m (33 ft) radius steady-state turn at 10 m/s (22 mph) must be at an angle of 45°. A rider can lean with respect to the bike in order to keep either the torso or the bike more or less upright if desired. The angle that matters is the one between the horizontal plane and the plane defined by the tire contacts and the location of the center of mass of bike and rider.
This lean of the bike decreases the actual radius of the turn proportionally to the cosine of the lean angle. The resulting radius can be roughly approximated (within 2% of exact value) by:
$r = \frac{w\cos \left (\theta \right )}{\delta \cos \left (\phi \right )}$
where $r \,$ is the approximate radius, $w \,$ is the wheelbase, $\theta \,$ is the lean angle, $\delta \,$ is the steer angle, and $\phi \,$ is the caster angle of the steering axis.[6] As a bike leans, the tires' contact patches move farther to the side, causing wear. The portions at either edge of a motorcycle tire that remain unworn by leaning into turns are sometimes referred to as "chicken strips".
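A short sketch of the steady-turn relations above, reproducing the 45° example and applying the cosine correction to the kinematic turn radius; the wheelbase and caster angle below are illustrative values:

```python
import math

# Sketch of the steady-turn relations above. Reproduces the 45-degree example
# (10 m radius at 10 m/s) and applies the cosine correction to the kinematic
# turn radius. Wheelbase and caster angle are illustrative values.

G = 9.81  # m/s^2

def lean_angle(speed, turn_radius):
    """theta = arctan(v^2 / (g r)), in degrees."""
    return math.degrees(math.atan(speed ** 2 / (G * turn_radius)))

def turn_radius(wheelbase, steer_angle_deg, caster_angle_deg, lean_angle_deg=0.0):
    """r = w cos(theta) / (delta cos(phi)), valid for small steer angles."""
    delta = math.radians(steer_angle_deg)
    return (wheelbase * math.cos(math.radians(lean_angle_deg))
            / (delta * math.cos(math.radians(caster_angle_deg))))

print(lean_angle(10.0, 10.0))            # roughly 45 degrees
print(turn_radius(1.0, 5.0, 18.0))       # upright kinematic radius, m
print(turn_radius(1.0, 5.0, 18.0, 45.0)) # same steer input while leaned 45 degrees
```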
#### Countersteering
In order to initiate a turn and the necessary lean in the direction of that turn, a bike must momentarily steer in the opposite direction. This is often referred to as countersteering. With the front wheel now at an angle to the direction of motion, a lateral force is developed at the contact patch of the tire. This force creates a torque around the longitudinal (roll) axis of the bike. This torque causes the bike to roll in the opposite direction of the turn. Where there is no external influence, such as an opportune side wind to create the force necessary to lean the bike, countersteering happens in every turn.[24]
As the lean approaches the desired angle, the front wheel must be steered in the direction of the turn, depending on the forward speed, the turn radius, and the need to maintain the lean angle. Once in a turn, the radius can only be changed with an appropriate change in lean angle. This can only be accomplished by additional countersteering out of the turn to increase lean and decrease radius, then into the turn to decrease lean and increase radius. To exit the turn, the bike must again countersteer, momentarily steering more into the turn in order to decrease the radius, thus increasing inertial forces, and thereby decreasing the angle of lean.[26]
Once a turn is established, the torque that must be applied to the steering mechanism in order to maintain a constant radius at a constant forward speed depends on the forward speed and the geometry and mass distribution of the bike.[7] At speeds below the capsize speed, described below in the section on Eigenvalues and also called the inversion speed, the self-stability of the bike will cause it to tend to steer into the turn, righting itself and exiting the turn, unless a torque is applied in the opposite direction of the turn. At speeds above the capsize speed, the capsize instability will cause it to tend to steer out of the turn, increasing the lean, unless a torque is applied in the direction of the turn. At the capsize speed no input steering torque is necessary to maintain the steady-state turn.
#### No hands
While countersteering is usually initiated by applying torque directly to the handlebars, on lighter vehicles such as bicycles, it can also be accomplished by shifting the rider’s weight. If the rider leans to the right relative to the bike, the bike will lean to the left to conserve angular momentum, and the combined center of mass will remain in the same vertical plane. This leftward lean of the bike, called counter lean by some authors,[21] will cause it to steer to the left and initiate a right-hand turn as if the rider had countersteered to the left by applying a torque directly to the handlebars.[24] Note that this technique may be complicated by additional factors such as headset friction and stiff control cables.
#### Gyroscopic effects
As mentioned above in the section on balance, one effect of turning the front wheel is a roll moment caused by gyroscopic precession. The magnitude of this moment is proportional to the moment of inertia of the front wheel, its spin rate (forward motion), the rate that the rider turns the front wheel by applying a torque to the handlebars, and the cosine of the angle between the steering axis and the vertical.[6]
For a sample motorcycle moving at 22 m/s (50 mph) that has a front wheel with a moment of inertia of 0.6 kg·m², turning the front wheel one degree in half a second generates a roll moment of 3.5 N·m. In comparison, the lateral force on the front tire as it tracks out from under the motorcycle reaches a maximum of 50 N. This, acting on the 0.6 m (2 ft) height of the center of mass, generates a roll moment of 30 N·m.
While the moment from gyroscopic forces is only 12% of this, it can play a significant part because it begins to act as soon as the rider applies the torque, instead of building up more slowly as the wheel out-tracks. This can be especially helpful in motorcycle racing.
#### Two-wheel steering
Because of theoretical benefits, such as a tighter turning radius at low speed, attempts have been made to construct motorcycles with two-wheel steering. One working prototype by Ian Drysdale in Australia is reported to "work very well."[27][28] Issues in the design include whether to provide active control of the rear wheel or let it swing freely. In the case of active control, the control algorithm needs to decide between steering with or in the opposite direction of the front wheel, when, and how much. One implementation of two-wheel steering, the Sideways bike, lets the rider control the steering of both wheels directly.
#### Rear-wheel steering
Because of the theoretical benefits, especially a simplified front-wheel drive mechanism, attempts have been made to construct a ridable rear-wheel steering bike. The Bendix Company built a rear-wheel steering bicycle, and the U.S. Department of Transportation commissioned the construction of a rear-wheel steering motorcycle: both proved to be unridable. Rainbow Trainers, Inc. in Alton, IL, offered US$5,000 to the first person "who can successfully ride the rear-steered bicycle, Rear Steered Bicycle I".[29] One documented example of someone successfully riding a rear-wheel steering bicycle is that of L. H. Laiterman at MIT, on a specially designed recumbent bike.[10] The difficulty is that turning left, accomplished by turning the rear wheel to the right, initially moves the center of mass to the right, and vice versa. This complicates the task of compensating for leans induced by the environment.[30] Examination of the eigenvalues shows that the rear-wheel steering configuration is inherently unstable.

#### Center steering

Between the extremes of bicycles with classical front-wheel steering and those with strictly rear-wheel steering is a class of bikes with a pivot point somewhere between the two, referred to as center-steering, similar to articulated steering. This design allows for simple front-wheel drive and appears to be quite stable, even ridable no-hands, as many photographs illustrate.[31][32] These designs usually have very lax head angles (40° to 65°) and positive or even negative trail. The builder of a bike with negative trail states that steering the bike from straight ahead forces the seat (and thus the rider) to rise slightly and this offsets the destabilizing effect of the negative trail.[33]

#### Tiller effect

Tiller effect is the expression used to describe how handlebars that extend far behind the steering axis (head tube) act like a tiller on a boat, in that one moves the bars to the right in order to turn the front wheel to the left, and vice versa. This situation is commonly found on cruiser bicycles, some recumbents, and even some cruiser motorcycles. It can be troublesome when it limits the ability to steer because of interference or the limits of arm reach.[34]

#### Tires

Tires have a large influence over bike handling, especially on motorcycles.[6][21] Tire inflation pressures have also been found to be important variables in the behavior of a motorcycle at high speeds.[35] Because the front and rear tires can have different slip angles due to weight distribution, tire properties, etc., bikes can experience understeer or oversteer. Of the two, understeer, in which the front wheel slides more than the rear wheel, is more dangerous since front wheel steering is critical for maintaining balance.[6]

Also, because real tires have a finite contact patch with the road surface that can generate a scrub torque, and when in a turn, can experience some side slipping as they roll, they can generate torques about an axis normal to the plane of the contact patch. One torque generated by tires is due to asymmetries in the side-slip along the length of the contact patch. The resultant force of this side-slip occurs behind the geometric center of the contact patch, a distance described as the pneumatic trail, and so creates a torque on the tire. Since the direction of the side-slip is towards the outside of the turn, the force on the tire is towards the center of the turn.
Therefore, this torque tends to turn the front wheel in the direction of the side-slip, away from the direction of the turn, and therefore tends to increase the radius of the turn. Another torque is produced by the finite width of the contact patch and the lean of the tire in a turn. The portion of the contact patch towards the outside of the turn is actually moving rearward, with respect to the wheel's hub, faster than the rest of the contact patch, because of its greater radius from the hub. By the same reasoning, the inner portion is moving rearward more slowly. So the outer and inner portions of the contact patch slip on the pavement in opposite directions, generating a torque that tends to turn the front wheel in the direction of the turn, and therefore tends to decrease the turn radius. The combination of these two opposite torques creates a resulting yaw torque on the front wheel, and its direction is a function of the side-slip angle of the tire, the angle between the actual path of the tire and the direction it is pointing, and the camber angle of the tire (the angle that the tire leans from the vertical).[6] The result of this torque is often the suppression of the inversion speed predicted by rigid wheel models described above in the section on steady-state turning.[7]

#### High side

A highsider, highside, or high side is a type of bike motion which is caused by a rear wheel gaining traction when it is not facing in the direction of travel, usually after slipping sideways in a curve.[6] This can occur under heavy braking, acceleration, a varying road surface, or suspension activation, especially due to interaction with the drivetrain.[36] It can take the form of a single slip-then-flip or a series of violent oscillations.[21]

### Maneuverability and handling

Bike maneuverability and handling are difficult to quantify for several reasons. The geometry of a bike, especially the steering axis angle, makes kinematic analysis complicated.[1] Under many conditions, bikes are inherently unstable and must always be under rider control. Finally, the rider's skill has a large influence on the bike's performance in any maneuver.[6] Bike designs tend to consist of a trade-off between maneuverability and stability.

#### Rider control inputs

The primary control input that the rider can make is to apply a torque directly to the steering mechanism via the handlebars. Because of the bike's own dynamics, due to steering geometry and gyroscopic effects, direct position control over steering angle has been found to be problematic.[5]

A secondary control input that the rider can make is to lean the upper torso relative to the bike. As mentioned above, the effectiveness of rider lean varies inversely with the mass of the bike. On heavy bikes, such as motorcycles, rider lean mostly alters the ground clearance requirements in a turn, improves the view of the road, and improves the bike system dynamics in a very low-frequency passive manner.[5]

#### Differences from automobiles

The need to keep a bike upright to avoid injury to the rider and damage to the vehicle even limits the type of maneuverability testing that is commonly performed. For example, while automobile enthusiast publications often perform and quote skidpad results, motorcycle publications do not.
The need to "set up" for a turn, lean the bike to the appropriate angle, means that the rider must see further ahead than is necessary for a typical car at the same speed, and this need increases more than in proportion to the speed.[5]

#### Rating schemes

Several schemes have been devised to rate the handling of bikes, particularly motorcycles.[6]

• The roll index is the ratio between steering torque and roll or lean angle.
• The steering ratio is the ratio between the theoretical turning radius based on ideal tire behavior and the actual turning radius. Values less than one, where the front wheel side slip is greater than the rear wheel side slip, are described as under-steering; equal to one as neutral steering; and greater than one as over-steering. Values less than zero, in which the front wheel must be turned opposite the direction of the curve due to much greater rear wheel side slip than front wheel side slip, have been described as counter-steering. Riders tend to prefer neutral or slight over-steering.[6] Car drivers tend to prefer under-steering.
• The Koch index is the ratio between peak steering torque and the product of peak lean rate and forward speed. Large, touring motorcycles tend to have a high Koch index, sport motorcycles tend to have a medium Koch index, and scooters tend to have a low Koch index.[6] It is easier to maneuver light scooters than heavy motorcycles.

### Lateral motion theory

Although its equations of motion can be linearized, a bike is a nonlinear system. The variable(s) to be solved for cannot be written as a linear sum of independent components, i.e. its behavior is not expressible as a sum of the behaviors of its descriptors.[1] Generally, nonlinear systems are difficult to solve and are much less understandable than linear systems. In the idealized case, in which friction and any flexing is ignored, a bike is a conservative system. Damping, however, can still be demonstrated: side-to-side oscillations will decrease with time. Energy added with a sideways jolt to a bike running straight and upright (demonstrating self-stability) is converted into increased forward speed, not lost, as the oscillations die out.

A bike is a nonholonomic system because its outcome is path-dependent. In order to know its exact configuration, especially location, it is necessary to know not only the configuration of its parts, but also their histories: how they have moved over time. This complicates mathematical analysis.[24] Finally, in the language of control theory, a bike exhibits non-minimum phase behavior.[37] It turns in the direction opposite of how it is initially steered, as described above in the section on countersteering.

#### Degrees of freedom

The number of degrees of freedom of a bike depends on the particular model being used. The simplest model that captures the key dynamic features, four rigid bodies with knife edge wheels rolling on a flat smooth surface, has 7 degrees of freedom (configuration variables required to completely describe the location and orientation of all 4 bodies):[1]

1. x coordinate of rear wheel contact point
2. y coordinate of rear wheel contact point
3. orientation angle of rear frame (yaw)
4. rotation angle of rear wheel
5. rotation angle of front wheel
6. lean angle of rear frame (roll)
7. steering angle between rear frame and front end

Adding complexity to the model, such as suspension, tire compliance, frame flex, or rider movement, adds degrees of freedom.
While the rear frame does pitch with leaning and steering, the pitch angle is completely constrained by the requirement for both wheels to remain on the ground, and so can be calculated geometrically from the other seven variables. If the location of the bike and the rotation of the wheels are ignored, the first five degrees of freedom can also be ignored, and the bike can be described by just two variables: lean angle and steer angle.

#### Equations of motion

The equations of motion of an idealized bike, consisting of

• a rigid frame,
• a rigid fork,
• two knife-edged, rigid wheels,
• all connected with frictionless bearings and rolling without friction or slip on a smooth horizontal surface, and
• operating at or near the upright and straight-ahead, unstable equilibrium,

can be represented by a single fourth-order linearized ordinary differential equation or two coupled second-order differential equations,[1] the lean equation

$M_{\theta\theta}\ddot{\theta}_r + K_{\theta\theta}\theta_r + M_{\theta\psi}\ddot{\psi} + C_{\theta\psi}\dot{\psi} + K_{\theta\psi}\psi = M_{\theta}$

and the steer equation

$M_{\psi\psi}\ddot{\psi} + C_{\psi\psi}\dot{\psi} + K_{\psi\psi}\psi + M_{\psi\theta}\ddot{\theta}_r + C_{\psi\theta}\dot{\theta}_r + K_{\psi\theta}\theta_r = M_{\psi},$

where

• $\theta_r$ is the lean angle of the rear assembly,
• $\psi$ is the steer angle of the front assembly relative to the rear assembly, and
• $M_{\theta}$ and $M_{\psi}$ are the moments (torques) applied at the rear assembly and the steering axis, respectively. For the analysis of an uncontrolled bike, both are taken to be zero.

These can be represented in matrix form as

$\boldsymbol{M}\ddot{\mathbf{q}} + \boldsymbol{C}\dot{\mathbf{q}} + \boldsymbol{K}\mathbf{q} = \mathbf{f}$

where

• $\boldsymbol{M}$ is the symmetrical mass matrix, which contains terms that include only the mass and geometry of the bike,
• $\boldsymbol{C}$ is the so-called damping matrix, even though an idealized bike has no dissipation, which contains terms that include the forward speed $v$ and is asymmetric,
• $\boldsymbol{K}$ is the so-called stiffness matrix, which contains terms that include the gravitational constant $g$ and $v^2$ and is symmetric in $g$ and asymmetric in $v^2$,
• $\mathbf{q}$ is a vector of lean angle and steer angle, and
• $\mathbf{f}$ is a vector of external forces, the moments mentioned above.

In this idealized and linearized model, there are many geometric parameters (wheelbase, head angle, mass of each body, wheel radius, etc.), but only four significant variables: lean angle, lean rate, steer angle, and steer rate. These equations have been verified by comparison with multiple numeric models derived completely independently.[1]

#### Eigenvalues

It is possible to calculate eigenvalues, one for each of the four state variables (lean angle, lean rate, steer angle, and steer rate), from the linearized equations in order to analyze the normal modes and self-stability of a particular bike design. In the plot to the right, eigenvalues of one particular bicycle are calculated for forward speeds of 0–10 m/s (0–22 mph). When the real parts of all eigenvalues (shown in dark blue) are negative, the bike is self-stable. When the imaginary parts of any eigenvalues (shown in cyan) are non-zero, the bike exhibits oscillation.
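A hedged sketch of how such an eigenvalue-versus-speed analysis can be set up: the second-order system $\boldsymbol{M}\ddot{\mathbf{q}} + \boldsymbol{C}\dot{\mathbf{q}} + \boldsymbol{K}\mathbf{q} = 0$ is rewritten in first-order form and its eigenvalues are computed at each forward speed, using the structure stated above ($\boldsymbol{C}$ proportional to $v$, $\boldsymbol{K}$ split into a gravity part and a $v^2$ part). The matrices below are placeholders, not the benchmark bicycle's values:

```python
import numpy as np

# Hedged sketch of the eigenvalue analysis described above. M, C1, K0, K2 are
# 2x2 placeholder matrices, NOT benchmark bicycle parameters; only the
# structure (C = v*C1, K = g*K0 + v^2*K2) follows the text.

def eigenvalues_vs_speed(M, C1, K0, K2, speeds, g=9.81):
    results = []
    M_inv = np.linalg.inv(M)
    for v in speeds:
        C = v * C1
        K = g * K0 + v ** 2 * K2
        # First-order form of M q'' + C q' + K q = 0, with state x = [q, q'].
        A = np.block([[np.zeros((2, 2)), np.eye(2)],
                      [-M_inv @ K, -M_inv @ C]])
        results.append((v, np.linalg.eigvals(A)))
    return results

# Placeholder matrices purely for illustration.
M = np.array([[80.0, 2.4], [2.4, 0.3]])
C1 = np.array([[0.0, 33.0], [-0.85, 1.7]])
K0 = np.array([[-80.0, -2.6], [-2.6, -0.8]])
K2 = np.array([[0.0, 76.0], [0.0, 2.7]])

for v, eig in eigenvalues_vs_speed(M, C1, K0, K2, speeds=[0.0, 2.5, 5.0, 7.5]):
    print(v, np.round(eig, 3))
```

Real parts crossing zero as speed increases mark the weave and capsize speeds discussed below, for whatever parameter set is actually used.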
The eigenvalues are point symmetric about the origin and so any bike design with a self-stable region in forward speeds will not be self-stable going backwards at the same speed.[1] There are three forward speeds that can be identified in the plot to the right at which the motion of the bike changes qualitatively:[1]

1. The forward speed at which oscillations begin, at about 1 m/s (2.2 mph) in this example, sometimes called the double root speed due to there being a repeated root to the characteristic polynomial (two of the four eigenvalues have exactly the same value). Below this speed, the bike simply falls over as an inverted pendulum does.
2. The forward speed at which oscillations do not increase, where the weave mode eigenvalues switch from positive to negative in a Hopf bifurcation at about 5.3 m/s (12 mph) in this example, is called the weave speed. Below this speed, oscillations increase until the uncontrolled bike falls over. Above this speed, oscillations eventually die out.
3. The forward speed at which non-oscillatory leaning increases, where the capsize mode eigenvalues switch from negative to positive in a pitchfork bifurcation at about 8.0 m/s (18 mph) in this example, is called the capsize speed. Above this speed, this non-oscillating lean eventually causes the uncontrolled bike to fall over.

Between these last two speeds, if they both exist, is a range of forward speeds at which the particular bike design is self-stable. In the case of the bike whose eigenvalues are shown here, the self-stable range is 5.3–8.0 m/s (12–18 mph). The fourth eigenvalue, which is usually stable (very negative), represents the castoring behavior of the front wheel, as it tends to turn towards the direction in which the bike is traveling. Note that this idealized model does not exhibit the wobble or shimmy and rear wobble instabilities described above. They are seen in models that incorporate tire interaction with the ground or other degrees of freedom.[6]

Experimentation with real bikes has so far confirmed the weave mode predicted by the eigenvalues. It was found that tire slip and frame flex are not important for the lateral dynamics of the bicycle in the speed range up to 6 m/s.[38] The idealized bike model used to calculate the eigenvalues shown here does not incorporate any of the torques that real tires can generate, and so tire interaction with the pavement cannot prevent the capsize mode from becoming unstable at high speeds, as Wilson and Cossalter suggest happens in the real world.

#### Modes

Bikes, as complex mechanisms, have a variety of modes: fundamental ways that they can move. These modes can be stable or unstable, depending on the bike parameters and its forward speed. In this context, "stable" means that an uncontrolled bike will continue rolling forward without falling over as long as forward speed is maintained. Conversely, "unstable" means that an uncontrolled bike will eventually fall over, even if forward speed is maintained. The modes can be differentiated by the speed at which they switch stability and the relative phases of leaning and steering as the bike experiences that mode. Any bike motion consists of a combination of various amounts of the possible modes, and there are three main modes that a bike can experience: capsize, weave, and wobble.[1] A lesser known mode is rear wobble, and it is usually stable.[6]

##### Capsize

Capsize is the word used to describe a bike falling over without oscillation.
During capsize, an uncontrolled front wheel usually steers in the direction of lean, but never enough to stop the increasing lean, until a very high lean angle is reached, at which point the steering may turn in the opposite direction. A capsize can happen very slowly if the bike is moving forward rapidly. Because the capsize instability is so slow, on the order of seconds, it is easy for the rider to control, and is actually used by the rider to initiate the lean necessary for a turn.[6]

For most bikes, depending on geometry and mass distribution, capsize is stable at low speeds, and becomes less stable as speed increases until it is no longer stable. However, on many bikes, tire interaction with the pavement is sufficient to prevent capsize from becoming unstable at high speeds.[6][7]

##### Weave

Weave is the word used to describe a slow (0–4 Hz) oscillation between leaning left and steering right, and vice-versa. The entire bike is affected with significant changes in steering angle, lean angle (roll), and heading angle (yaw). The steering is 180° out of phase with the heading and 90° out of phase with the leaning.[6] For most bikes, depending on geometry and mass distribution, weave is unstable at low speeds, and becomes less pronounced as speed increases until it is no longer unstable. While the amplitude may decrease, the frequency actually increases with speed.

##### Wobble or shimmy

Wobble, shimmy, tank-slapper, speed wobble, and death wobble are all words and phrases used to describe a rapid (4–10 Hz) oscillation of primarily just the front end (front wheel, fork, and handlebars). The rest of the bike remains essentially unaffected. This instability occurs mostly at high speed and is similar to that experienced by shopping cart wheels, airplane landing gear, and automobile front wheels.[6][7] While wobble or shimmy can be easily remedied by adjusting speed, position, or grip on the handlebar, it can be fatal if left uncontrolled.[39]

Wobble or shimmy begins when some otherwise minor irregularity, such as fork asymmetry,[40] accelerates the wheel to one side. The restoring force is applied in phase with the progress of the irregularity, and the wheel turns to the other side where the process is repeated. If there is insufficient damping in the steering, the oscillation will increase until system failure occurs. The oscillation frequency can be changed by changing the forward speed, making the bike stiffer or lighter, or increasing the stiffness of the steering, of which the rider is a main component.[10]

##### Rear wobble

The term rear wobble is used to describe a mode of oscillation in which lean angle (roll) and heading angle (yaw) are almost in phase and both 180° out of phase with steer angle. The rate of this oscillation is moderate with a maximum of about 6.5 Hz. Rear wobble is heavily damped and falls off quickly as bike speed increases.[6]

##### Design criteria

The effect that the design parameters of a bike have on these modes can be investigated by examining the eigenvalues of the linearized equations of motion.[35] For more details on the equations of motion and eigenvalues, see the section on theory above. Some general conclusions that have been drawn are described here. The lateral and torsional stiffness of the rear frame and the wheel spindle affects wobble-mode damping substantially. Long wheelbase and trail and a flat steering-head angle have been found to increase weave-mode damping.
Lateral distortion can be countered by locating the front fork torsional axis as low as possible. Cornering weave tendencies are amplified by degraded damping of the rear suspension. Cornering stiffness, camber stiffness, and relaxation length of the rear tire make the largest contribution to weave damping. The same parameters of the front tire have a lesser effect. Rear loading also amplifies cornering weave tendencies. Rear load assemblies with appropriate stiffness and damping, however, were successful in damping out weave and wobble oscillations. One study has shown theoretically that, while a bike is leaned in a turn, road undulations can excite the weave mode at high speed or the wobble mode at low speed if either of their frequencies match the vehicle speed and other parameters. Excitation of the wobble mode can be mitigated by an effective steering damper, and excitation of the weave mode is worse for light riders than for heavy riders.[9]

### Other hypotheses

Although bicycles and motorcycles can appear to be simple mechanisms with only four major moving parts (frame, fork, and two wheels), these parts are arranged in a way that makes them complicated to analyze.[10] While it is an observable fact that bikes can be ridden even when the gyroscopic effects of their wheels are canceled out,[3][4] the hypothesis that the gyroscopic effects of the wheels are what keep a bike upright is common in print and online.[3][24] Examples in print:

• "Angular momentum and motorcycle counter-steering: A discussion and demonstration", A. J. Cox, Am. J. Phys. 66, 1018–1021 (1998)
• "The motorcycle as a gyroscope", J. Higbie, Am. J. Phys. 42, 701–702
• The Physics of Everyday Phenomena, W. T. Griffith, McGraw–Hill, New York, 1998, pp. 149–150
• The Way Things Work, Macaulay, Houghton-Mifflin, New York, NY, 1989

## Longitudinal dynamics

Bikes may experience a variety of longitudinal forces and motions. On most bikes, when the front wheel is turned to one side or the other, the entire rear frame pitches forward slightly, depending on the steering axis angle and the amount of trail.[6][23] On bikes with suspensions, either front, rear, or both, trim is used to describe the geometric configuration of the bike, especially in response to forces of braking, accelerating, turning, drive train, and aerodynamic drag.[6]

The load borne by the two wheels varies not only with center of mass location, which in turn varies with the amount and location of passengers and luggage, but also with acceleration and deceleration. This phenomenon is known as load transfer[6] or weight transfer,[21][36] depending on the author, and provides challenges and opportunities to both riders and designers. For example, motorcycle racers can use it to increase the friction available to the front tire when cornering, and attempts to reduce front suspension compression during heavy braking have spawned several motorcycle fork designs.
The net aerodynamic drag forces may be considered to act at a single point, called the center of pressure.[21] At high speeds, this will create a net moment about the rear driving wheel and result in a net transfer of load from the front wheel to the rear wheel.[21] Also, depending on the shape of the bike and the shape of any fairing that might be installed, aerodynamic lift may be present that either increases or further reduces the load on the front wheel.[21]

### Stability

Though longitudinally stable when stationary, a bike may become longitudinally unstable under sufficient acceleration or deceleration, and Euler's second law can be used to analyze the ground reaction forces generated.[41] For example, the normal (vertical) ground reaction forces at the wheels for a bike with a wheelbase $L$ and a center of mass at height $h$ and at a distance $b$ in front of the rear wheel hub, and for simplicity, with both wheels locked, can be expressed as:[6]

$N_r = mg\left(\frac{L-b}{L} - \mu \frac{h}{L}\right)$ for the rear wheel and $N_f = mg\left(\frac{b}{L} + \mu \frac{h}{L}\right)$ for the front wheel.

The frictional (horizontal) forces are simply $F_r = \mu N_r$ for the rear wheel and $F_f = \mu N_f$ for the front wheel, where $\mu$ is the coefficient of friction, $m$ is the total mass of the bike and rider, and $g$ is the acceleration of gravity. Therefore, if

$\mu \ge \frac{L-b}{h},$

which occurs if the center of mass is anywhere above or in front of a line extending back from the front wheel contact patch and inclined at the angle

$\theta = \tan^{-1} \left( \frac{1}{\mu} \right)$
above the horizontal,[21] then the normal force of the rear wheel will be zero (at which point the equation no longer applies) and the bike will begin to flip or loop forward over the front wheel.
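As a quick numerical check of the locked-wheel expressions above, here is a small illustrative sketch; the geometry and friction values are assumptions for the example, not figures from the article:

```python
def locked_wheel_reactions(mass_kg, wheelbase_m, b_m, h_m, mu, g=9.81):
    """Normal forces N_r, N_f and the pitch-over check for both wheels locked.

    b_m is the horizontal distance from the rear hub forward to the center
    of mass, h_m its height, mu the tire-ground friction coefficient.
    A negative N_r means the formula no longer applies: the rear wheel has
    unloaded and the bike begins to flip over the front wheel.
    """
    n_rear = mass_kg * g * ((wheelbase_m - b_m) / wheelbase_m - mu * h_m / wheelbase_m)
    n_front = mass_kg * g * (b_m / wheelbase_m + mu * h_m / wheelbase_m)
    will_pitch_over = mu >= (wheelbase_m - b_m) / h_m
    return n_rear, n_front, will_pitch_over

# Assumed example: 85 kg, 1.0 m wheelbase, CoM 0.4 m ahead of the rear hub,
# 1.1 m high, dry asphalt mu = 0.7.
n_r, n_f, flips = locked_wheel_reactions(85, 1.0, 0.4, 1.1, 0.7)
print(f"N_r = {n_r:.0f} N, N_f = {n_f:.0f} N, pitch-over: {flips}")
```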
On the other hand, if the center of mass height is behind or below the line, as is true, for example on most tandem bicycles or long-wheel-base recumbent bicycles, then, even if the coefficient of friction is 1.0, it is impossible for the front wheel to generate enough braking force to flip the bike. It will skid unless it hits some fixed obstacle, such as a curb.
Similarly, powerful motorcycles can generate enough torque at the rear wheel to lift the front wheel off the ground in a maneuver called a wheelie. A line similar to the one described above to analyze braking performance can be drawn from the rear wheel contact patch to predict if a wheelie is possible given the available friction, the center of mass location, and sufficient power.[21] This can also happen on bicycles, although there is much less power available, if the center of mass is back or up far enough or the rider lurches back when applying power to the pedals.[42]
Of course, the angle of the terrain can influence all of the calculations above. All else remaining equal, the risk of pitching over the front end is reduced when riding up hill and increased when riding down hill. The possibility of performing a wheelie increases when riding up hill,[42] and is a major factor in motorcycle hillclimbing competitions.
### Braking
Most of the braking force of standard upright bikes comes from the front wheel. As the analysis above shows, if the brakes themselves are strong enough, the rear wheel is easy to skid, while the front wheel often can generate enough stopping force to flip the rider and bike over the front wheel. This is called a stoppie if the rear wheel is lifted but the bike does not flip, or an endo (abbreviated form of end-over-end) if the bike flips. On long or low bikes, however, such as cruiser motorcycles and recumbent bicycles, the front tire will skid instead, possibly causing a loss of balance.
In the case of a front suspension, especially telescoping fork tubes, the increase in downward force on the front wheel during braking may cause the suspension to compress and the front end to lower. This is known as brake diving. A riding technique that takes advantage of how braking increases the downward force on the front wheel is known as trail braking.
#### Front wheel braking
The limiting factors on the maximum deceleration in front wheel braking are:
• the maximum, limiting value of static friction between the tire and the ground, often between 0.5 and 0.8 for rubber on dry asphalt,[43]
• the kinetic friction between the brake pads and the rim or disk, and
• pitching or looping (of bike and rider) over the front wheel.
For an upright bicycle on dry asphalt with excellent brakes, pitching will probably be the limiting factor. The combined center of mass of a typical upright bicycle and rider sits far enough behind the front wheel contact patch, and high enough above the ground, that the maximum deceleration is about 0.5 g (4.9 m/s² or 16 ft/s²).[10] If the rider modulates the brakes properly, however, pitching can be avoided. If the rider moves his weight back and down, even larger decelerations are possible.
Front brakes on many inexpensive bikes are not strong enough so, on the road, they are the limiting factor. Cheap cantilever brakes, especially with "power modulators", and Raleigh-style side-pull brakes severely restrict the stopping force. In wet conditions they are even less effective. Front wheel slides are more common off-road. Mud, water, and loose stones reduce the friction between the tire and trail, although knobby tires can mitigate this effect by grabbing the surface irregularities. Front wheel slides are also common on corners, whether on road or off. Centripetal acceleration adds to the forces on the tire-ground contact, and when the friction force is exceeded the wheel slides.
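One common way to reason about this combined demand on the front tire is the "friction circle": braking and cornering forces share the same friction budget. A minimal illustrative sketch follows; the load, friction, and cornering-force values are assumptions for the example, not from the article:

```python
import math

def braking_margin(normal_force_n, mu, lateral_force_n):
    """Longitudinal (braking) force still available once cornering uses part
    of the friction budget: sqrt(F_brake^2 + F_lat^2) <= mu * N."""
    budget = mu * normal_force_n
    if abs(lateral_force_n) >= budget:
        return 0.0  # the tire is already sliding laterally
    return math.sqrt(budget ** 2 - lateral_force_n ** 2)

# Assumed example: 500 N front wheel load, mu = 0.7, 200 N of cornering force.
print(f"{braking_margin(500, 0.7, 200):.0f} N of braking force remains")
```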
#### Rear wheel braking
The rear brake of an upright bicycle can only produce about 0.1 g deceleration at best,[10] because of the decrease in normal force at the rear wheel as described above. All bikes with only rear braking are subject to this limitation: for example, bikes with only a coaster brake, and fixed-gear bikes with no other braking mechanism. There are, however, situations that may warrant rear wheel braking:[44]
• Slippery surfaces. Under front wheel braking, the lower coefficient of friction may cause the front wheel to skid which often results in a loss of balance.
• Front flat tire. Braking a wheel with a flat tire can cause the tire to come off the rim which greatly reduces friction and, in the case of a front wheel, result in a loss of balance.
• Long mountain descents. Alternating between front and rear brakes can help reduce heat buildup which can cause a blowout.
## Suspension
Bikes may have front, rear, full, or no suspension, operating primarily in the central plane of symmetry, though with some consideration given to lateral compliance.[21] The goals of a bike suspension are to reduce vibration experienced by the rider, maintain wheel contact with the ground, and maintain vehicle trim.[6] The primary suspension parameters are stiffness, damping, sprung and unsprung mass, and tire characteristics.[21] Besides irregularities in the terrain, braking and acceleration forces can also activate the suspension, as described above.
## Vibration
The study of vibration in bikes includes its causes, such as engine balance,[45] wheel balance, ground surface, and aerodynamics; its transmission and absorption; and its effects on the bike, the rider, and safety.[46] An important factor in any vibration analysis is a comparison of the natural frequencies of the system with the possible driving frequencies of the vibration sources.[47] A close match means mechanical resonance that can result in large amplitudes. A challenge in vibration damping is to create compliance in certain directions (vertically) without sacrificing the frame rigidity needed for power transmission and handling (torsionally).[48] Another issue with vibration for the bike is the possibility of failure due to material fatigue.[49] Effects of vibration on riders include discomfort, loss of efficiency, hand-arm vibration syndrome (a secondary form of Raynaud's disease), and whole-body vibration. Vibrating instruments may be inaccurate or difficult to read.[49]
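As a toy illustration of that natural-frequency comparison (all numbers assumed, not taken from the article), one can model a single sprung mass on a suspension spring and check whether a forcing frequency lands near its natural frequency:

```python
import math

def natural_frequency_hz(sprung_mass_kg, spring_rate_n_per_m):
    """Undamped natural frequency of a single sprung mass on a spring."""
    return math.sqrt(spring_rate_n_per_m / sprung_mass_kg) / (2 * math.pi)

def near_resonance(natural_hz, forcing_hz, tolerance=0.2):
    """Flag forcing frequencies within +/-20% of the natural frequency."""
    return abs(forcing_hz - natural_hz) <= tolerance * natural_hz

# Assumed example: 90 kg sprung mass on a 20 kN/m spring, with forcing from
# road bumps at 2.5 Hz and from wheel imbalance at 12 Hz.
f_n = natural_frequency_hz(90, 20_000)
for f in (2.5, 12.0):
    print(f"natural {f_n:.1f} Hz vs forcing {f:.1f} Hz -> resonance risk: {near_resonance(f_n, f)}")
```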
### In bicycles
The primary cause of vibrations in a properly functioning bicycle is the surface over which it rolls. In addition to pneumatic tires and traditional bicycle suspensions, a variety of techniques have been developed to damp vibrations before they reach the rider. These include materials, such as carbon fiber, either in the whole frame or just key components such as the front fork, seatpost, or handlebars; tube shapes, such as curved seat stays;[50] and special inserts, such as Zertz by Specialized, [51][52] and Buzzkills by Bontrager.
### In motorcycles
In addition to the road surface, vibrations in a motorcycle can be caused by the engine and wheels, if unbalanced. Manufacturers employ a variety of technologies to reduce or damp these vibrations, such as engine balance shafts, rubber engine mounts,[53] and tire weights.[54] The problems that vibration causes have also spawned an industry of after-market parts and systems designed to reduce it. Add-ons include handlebar weights,[55] isolated foot pegs, and engine counterweights. At high speeds, motorcycles and their riders may also experience aerodynamic flutter or buffeting.[56] This can be abated by changing the air flow over key parts, such as the windshield.[57]
## Experimentation
A variety of experiments have been performed in order to verify or disprove various hypotheses about bike dynamics.
• David Jones built several bikes in a search for an unridable configuration.[4]
• Richard Klein built several bikes to confirm Jones's findings.[3]
• Richard Klein also built a "Torque Wrench Bike" and a "Rocket Bike" to investigate steering torques and their effects.[3]
• Keith Code built a motorcycle with fixed handlebars to investigate the effects of rider motion and position on steering.[58]
• Schwab and Kooijman have performed measurements with an instrumented bike.[59]
## References
1. Meijaard, Papadopoulos, Ruina, and Schwab (2007). "Linearized dynamics equations for the balance and steer of a bicycle: a benchmark and review". Proc. R. Soc. A. 463 (2084): 1955–1982. doi:10.1098/rspa.2007.1857.
2. Limebeer and Sharp (2006). "Single-Track Vehicle Modeling and Control: Bicycles, Motorcycles, and Models". IEEE Control Systems Magazine (October): 34–61.
3. Template:Cite web
4. Jones, David E. H. (1970). "The stability of the bicycle" (PDF). Physics Today 23 (4): 34–40. doi:10.1063/1.3022064. Retrieved 2008-09-09.
5. Sharp, R. S. (July 2007). "Motorcycle Steering Control by Road Preview". Journal of Dynamic Systems, Measurement, and Control (ASME) 129 (July 2007): 373–381. doi:10.1115/1.2745842.
6. Cossalter, Vittore (2006). Motorcycle Dynamics (Second ed.). Lulu.com. pp. 241–342. ISBN 978-1-4303-0861-4.
7. Wilson, David Gordon; Jim Papadopoulos (2004). Bicycling Science (Third ed.). The MIT Press. pp. 263–390. ISBN 0-262-73154-1.
8. Sharp, R.S. (1985). "The Lateral Dynamics of Motorcycles and Bicycles". Vehicle System Dynamics 14: 265. doi:10.1080/00423118508968834.
9. Limebeer, Sharp, and Evangelou (November 2002). "Motorcycle Steering Oscillations due to Road Profiling". Transactions of the ASME 69: 724–739. doi:10.1115/1.1507768.
10. Whitt, Frank R.; David G. Wilson (1982). Bicycling Science (Second ed.). Massachusetts Institute of Technology. pp. 198–233. ISBN 0-262-23111-5.
11. Phillips, Matt (April 2009). "You Don't Know Squat". Mountain Bike (Rodale): 39–45.
12. Template:Cite web
13. Template:Cite web
14. Template:Cite web
15. Template:Cite web
16. Template:Cite web
17. Template:Cite web
18. Template:Cite news
19. Template:Cite web
20. Template:Cite web
21. Foale, Tony (2006). Motorcycle Handling and Chassis Design (Second ed.). Tony Foale Designs. ISBN 978-84-933286-3-4.
22. Template:Cite web
23. Template:Cite web
24. Fajans, Joel (July 2000). "Steering in bicycles and motorcycles" (PDF). American Journal of Physics 68 (7): 654–659. doi:10.1119/1.19504. Retrieved 2006-08-04.
25. McGill, David J; Wilton W. King (1995). Engineering Mechanics, An Introduction to Dynamics (Third ed.). PWS Publishing Company. pp. 479–481. ISBN 0-534-93399-8.
26. Template:Cite web
27. Template:Cite web
28. Template:Cite web
29. Template:Cite web
30. Template:Cite web
31. Template:Cite web
32. Template:Cite web
33. Template:Cite web
34. Template:Cite web
35. Template:Cite web
36. Cocco, Gaetano (2005). Motorcycle Design and Technology. Motorbooks. pp. 40–46. ISBN 978-0-7603-1990-1.
37. Template:Cite web
38. Schwab, A. L.; J. P. Meijaard and J. D. G. Kooijman (5–9 June 2006). "Experimental Validation of a Model of an Uncontrolled Bicycle" (PDF). III European Conference on Computational Mechanics Solids, Structures and Coupled Problems in Engineering (Lisbon, Portugal: C.A. Mota Soares et al.). Retrieved 2008-10-19.
39. Template:Cite news
40. Template:Cite web
41. Ruina, Andy; Rudra Pratap (2002) (PDF). Introduction to Statics and Dynamics. Oxford University Press. p. 350. Retrieved 2006-08-04.
42. Template:Cite web
43. Template:Cite web
44. Template:Cite web
45. Template:Cite web
46. Template:Cite web
47. Template:Cite web
48. Strickland, Bill (August 2008). "Comfort is the New Speed". Bicycling Magazine (Rodale) XLIV (7): 118–122.
49. Rao, Singiresu S. (2004). Mechanical Vibrations (fourth ed.). Pearson, Prentice Hall. ISBN 0-13-048987-5.
50. Template:Cite web
51. Template:Cite web
52. Template:Cite web
53. Template:Cite web
54. Template:Cite web
55. Template:Cite web
56. Template:Cite web
57. Template:Cite web
58. Template:Cite news
59. Template:Cite web |
## Need solution, thanks

<<Graphs of 2π-Periodic Functions>> Sketch or plot the following functions f(x), which are assumed to be periodic with period 2π and, for −π < x < π, are given by the formulas
# New Banach space properties of the disc algebra and H∞
## Full text
### New Banach space properties of the disc algebra and H∞
by J. BOURGAIN
Vrije Universiteit, Brussels, Belgium
### 0. Introduction
The purpose of this paper is to prove some new linear properties of the disc algebra A and the space $H^\infty$ of bounded analytic functions on the disc. More precisely, results on absolutely summing operators, cotype, finite rank projections and certain sequence properties, such as the Dunford–Pettis property and weak completeness, are obtained.
The main motivation for this work was A. Pełczyński's notes (see [44]), which contain also most of the required prerequisites. Our work extends [44], since it solves several of the main problems. It is also of interest in connection with questions raised in [30], [32], [33], [35], [59]. Besides [44], our references for Banach space theory are [36], [37], [38], [47]. Basic facts about $H^p$-spaces can be found in [18], [20], [27], [53], [54].
In what follows, we will first describe the frame of the work and recall some definitions. Then we will summarize the several sections of the paper and state the main results. If u is an operator from a space X into a space Y and $0<p<\infty$, we say that u is p-absolutely summing provided there is a constant $\lambda$ such that

$$ \Big(\sum_i \|u(x_i)\|^p\Big)^{1/p} \le \lambda\, \sup\Big\{ \Big(\sum_i |\langle x_i, x^*\rangle|^p\Big)^{1/p};\ x^*\in X^*,\ \|x^*\|\le 1 \Big\} $$

holds for all finite sequences $(x_i)$ of elements of X. The p-summing norm $\pi_p(u)$ of u is the smallest $\lambda$ with the above property. Let $\Pi_p(X, Y)$ be the space of p-summing operators from X into Y.
For $0<p<1$, the spaces $\Pi_p(X, Y)$ coincide and will also be denoted by $\Pi_0(X, Y)$, the 0-summing operators from X into Y. Say that u is p-integral, resp. strictly p-integral, provided u admits a factorization

$$ X \xrightarrow{\ S\ } L^\infty(\mu) \xrightarrow{\ I\ } L^p(\mu) \xrightarrow{\ L\ } Y^{**} \qquad \text{(resp. with } Y^{**} \text{ replaced by } Y\text{),} $$

so that $j\,u = L\,I\,S$ in the first case and $u = L\,I\,S$ in the second,
where $\mu$ is a probability measure, $I$ the identity map and $j$ the canonical embedding. The space of strictly p-integral operators is denoted by $I_p(X, Y)$ and is equipped with the (strictly integral) norm

$$ i_p(u) = \inf \|S\|\,\|L\| $$

where the infimum is taken over all factorizations. Say that X has the Grothendieck property provided any operator from X into $\ell^2$ is 1-summing, thus

$$ B(X, \ell^2) = \Pi_1(X, \ell^2). $$

An equivalent formulation is the equality $B(X^*, \ell^1) = \Pi_2(X^*, \ell^1)$. Grothendieck's theorem asserts that $L^1(\mu)$-spaces have the Grothendieck property. As pointed out in [44] (Theorem 3.2), this general result follows easily from the fact that a Paley projection P, regarded as an operator into $\ell^2$, is onto. This shows the usefulness of certain specific operators arising in harmonic analysis to the general theory. It is shown in [41] (Theorem 94) that Grothendieck's theorem can be improved to the equality

$$ B(\ell^1, \ell^2) = \Pi_0(\ell^1, \ell^2). $$

A way of seeing this (cf. [32], section 2) is to consider the set $\Lambda = \mathbb{Z}_+ \cup \{-2^n;\ n=0,1,2,\dots\}$ and the orthogonal projection

$$ Q: C_\Lambda \to L^2_{\{-2^n\}} $$

which is again onto by Paley's theorem. Now, for $p>0$, a Paley-type inequality bounds $\big(\sum_n |\hat f(-2^n)|^2\big)^{1/2}$ by $C_p\big(\int |f|^p\, dm\big)^{1/p}$ for $f\in C_\Lambda$, from which it follows that Q is p-summing.
Absolutely summing operators on A appear in the study of certain multipliers. For instance, Paley's theorem that each $(A, \ell^1)$-multiplier M is 2-summing is equivalent to the statement $M\in\Pi_2(A, \ell^1)$. In this spirit, the reader is referred to [35] for a study of translation-invariant absolutely summing operators. Our work actually shows that these results extend to arbitrary operators and that the equality $B(A, \ell^1)=\Pi_2(A, \ell^1)$ holds in general.

One of the striking facts about operators on the disc algebra is the following extension of the coincidence of the notions of p-summing and p-integral operators on C(K)-spaces (see [44], section 2).
PROPOSITION 0.1. For $1<p<\infty$, any p-summing operator u on A is strictly p-integral. Furthermore

$$ i_p(u) \le \text{const.}\ \frac{p^2}{p-1}\ \pi_p(u). $$

Proposition 0.1 extends the $L^p$-boundedness of the Riesz projection for $1<p<\infty$. It provides a linear invariant which allows, for instance, to establish the non-isomorphism of A and the polydisc algebras. A new proof of Proposition 0.1 based on weighted norm inequalities can be found in [32] (section 2).
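For orientation, the boundedness result alluded to here is the classical M. Riesz theorem; a standard formulation (not quoted from this paper) is

$$ \big\|\mathcal{R}_+ f\big\|_{L^p(\Pi)} \le C_p\,\|f\|_{L^p(\Pi)}, \qquad 1<p<\infty, \qquad \mathcal{R}_+\Big(\sum_{n\in\mathbb Z} a_n e^{in\theta}\Big) = \sum_{n\ge 0} a_n e^{in\theta}, $$

with a constant $C_p$ that grows like $p$ as $p\to\infty$ and like $(p-1)^{-1}$ as $p\to 1$, which is the origin of the factor $p^2/(p-1)$ above.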
Denote m the normalized Haar measure on the circle $\Pi$. If A is a measurable subset of $\Pi$, we shall sometimes use the notation $|A|$ for $m(A)$. If $f\in L^1(\Pi)$, $\int f$ always means $\int f\, dm$. If $H^1_0$ is the space of integrable functions f on $\Pi$ such that

$$ \hat f(n) = \int f(\theta)\, e^{-in\theta}\, m(d\theta) = 0 \quad\text{for } n \le 0, $$

then the duality

$$ \langle f, w\rangle = \int f\, w\, dm $$

identifies $H^\infty$ with the dual of the quotient space $L^1/H^1_0$. We consider the quotient map $q: L^1 \to L^1/H^1_0$. This map has several remarkable properties which the reader can find in [44] (sections 8 and 9). To each x in $L^1/H^1_0$ corresponds a unique f in $L^1$ such that $q(f)=x$ and $\|f\|_1=\|x\|$. This fact defines the minimum norm lifting $\sigma: L^1/H^1_0 \to L^1$.

If A is a weakly conditionally compact (WCC) subset of $L^1/H^1_0$, then $\sigma(A)$ is relatively weakly compact in $L^1$. Recall that A is WCC provided each sequence in A has a weakly Cauchy subsequence. This fact combined with the F. and M. Riesz characterization of $A^*$ as

$$ A^* = L^1/H^1_0 \oplus M_s(\Pi) \qquad (M_s = \text{singular measures}) $$

implies that $A^*$ is weakly complete and satisfies the Dunford–Pettis property (DPP). It was unknown whether or not A could be replaced by $H^\infty$. We answer this affirmatively, by showing that any ultrapower $(L^1/H^1_0)_{\mathcal U}$ of $L^1/H^1_0$ is weakly complete and has DPP.
Achieving this requires a local version of the regularity property of $\sigma$ with respect to WCC sets. This localization, previously sketched in [7], turns out to generalize J. Garnett's theorem that harmonically interpolating sequences in the disc are interpolating (see [21]). We will use here a reverse approach (see also the remarks in the last section), deriving the lifting theorem from certain facts on interpolating sequences which are apparently new. These will be obtained by dualization of certain results on vector-valued $H^1$-spaces, which are of independent interest.
The fact that the Paley projection $P: A\to\ell^2$ does not factor through an $L^1(\mu)$-space implies, by general results, that A has no local unconditional structure (see [44], section 4). This means that A cannot be obtained as the closure of an increasing sequence $E_\alpha$ of finite dimensional subspaces so that $\sup_\alpha \mathrm{unc}(E_\alpha)<\infty$, where

$$ \mathrm{unc}\,X = \inf\{\mathrm{unc}\{x_i\};\ \{x_i\}\ \text{is a basis for } X\} $$

and

$$ \mathrm{unc}\{x_i\} = \sup\Big\{ \Big\|\sum_i \varepsilon_i a_i x_i\Big\|;\ \varepsilon_i = \pm 1,\ \Big\|\sum_i a_i x_i\Big\| \le 1 \Big\}. $$

In particular, A is not an $\mathcal{L}^\infty$-space (see [36] p. 198 for definition). However, as we prove, $c_0$ is the only (infinite dimensional) complemented subspace of A possessing an unconditional basis and A only admits $c_0$-unconditional decompositions (cf. [57], [58]).

Say that X is a $P_\lambda$-space ($\lambda\ge 1$) provided X embeds as a $\lambda$-complemented subspace of a $C(K)^{**}$-space. The structure of finite dimensional $P_\lambda$-spaces is not yet understood, except in the case $\lambda$ is close to 1 (see [60]).

We investigate here finite rank projections in A and show that the range has to contain $\ell^\infty_m$-spaces of proportional dimension. Besides, any n-dimensional $\alpha$-complemented subspace of A is a $P_\lambda$ for $\lambda$ of the order $\alpha\cdot\log n$. Natural examples, such as the polynomial spaces $L_{\{0,1,\dots,n\}}$, show that this result is best possible.

The results on the disc algebra presented in this paper use heavily the fact that A is a logmodular algebra. For some of them, also the weak-type property of the Hilbert transform is involved. At this time, we do not know of extensions to other natural spaces, such as the polydisc- and ball-algebras or spaces defined by singular integrals.
Let us now outline how the remainder of the paper is organized and indicate the main results obtained in the different sections.
In the next section, we derive some simple consequences of the weak-type property of the Hilbert transform. We then apply the classical construction of outer functions to obtain $H^\infty$ functions satisfying certain prescribed conditions. More precisely, a Havin-type lemma is obtained and certain "truncation" results. The main result is contained in Proposition 1.7, which will be used several times in the paper.
Section 2 is devoted to the study of absolutely summing operators on the disc algebra and $H^\infty$. The central theorem can be stated concretely as follows.

THEOREM 0.2. For each finite sequence $(x_k)_{1\le k\le n}$ in $L^1/H^1_0$, there exists a lifting $(f_k)_{1\le k\le n}$ in $L^1(\Pi)$, i.e. $q(f_k)=x_k$ for $k=1,\dots,n$, such that

$$ \sup_{\varepsilon_k=\pm 1} \Big\| \sum_k \varepsilon_k f_k \Big\|_1 \le C\, \sup_{\varepsilon_k=\pm 1} \Big\| \sum_k \varepsilon_k x_k \Big\| $$

where C is a fixed constant.

Theorem 0.2 is equivalent to the Grothendieck property of $L^1/H^1_0$. Two different proofs of this fact are presented. The first is the so-called extrapolation method, which relies on an interpolation inequality for the p-summing norms of an operator on A. The second, which was suggested in [32], consists in proving that 0-summing operators on A are nuclear. Both approaches have several further consequences for the local structure of the disc algebra.
In section 3, certain vector-valued $H^1$-spaces are characterized. More precisely, the following result is proved.

THEOREM 0.3. Let $X_0$ (resp. $X_1$) be $\mathbb{C}^N$ equipped with a weighted $\ell^\infty$ (resp. $\ell^1$) norm. Then the spaces $H^1_{X_0\cup X_1}$ and $H^1_{X_0}\cup H^1_{X_1}$ have equivalent norms (up to a fixed constant).

This fact combined with classical Blaschke product techniques has consequences for interpolating sequences in the unit disc, which will be used in the next section. One could use Theorem 0.3, and the method to derive it, to develop the real interpolation method for $H^1$ (and $H^p$) spaces taking values in Lorentz spaces. Theorem 0.3 can indeed be rephrased in terms of K- or J-functionals (see [2] p. 38, for instance). This further development is however not worked out in the paper since it seems to us a bit outside its purpose.
The results of section 3 are used in section 4 to derive the following property of the minimum norm lifting of $L^1/H^1_0$.

THEOREM 0.4. Let $(x_k)_{1\le k\le n}$ be elements of $L^1/H^1_0$ and assume $f_k=\sigma(x_k)$ satisfy

(i) $\int \max_k \alpha_k |f_k| \ge \delta \sum_k \alpha_k \|x_k\|$ whenever $\alpha_k\ge 0$.

Then there are $H^\infty$-functions $(\varphi_k)_{1\le k\le n}$ such that

(ii) $\sum_k |\varphi_k| \le 1$ pointwise on $\Pi$,

(iii) $|\langle f_k, \varphi_k\rangle| > \varepsilon(\delta)$ for $k=1,\dots,n$.

This fact can be seen as a local version of the lifting property of weakly conditionally compact subsets of $L^1/H^1_0$ by $\sigma$.
In section 5, further linear properties of $H^\infty$ are obtained. We first combine Theorem 0.4 with results of N. Tomczak-Jaegermann [56] to prove that any finite dimensional well-complemented subspace of $H^\infty$ contains $\ell^\infty_d$-subspaces of proportional dimension. Theorem 0.4 is then used to extend J. Chaumat's results (see [15]) on the Dunford–Pettis property and weak completeness of $L^1/H^1_0$ to the space $(H^\infty)^*$. Our method uses ultraproduct representation, which in this context seems the most convenient form of the local reflexivity principle.

Section 6 contains further extensions and applications. Results of D. Marshall [39] allow us to generalize part of our work to closed subalgebras of $L^\infty(\Pi)$ containing $H^\infty$. Our results on the Grothendieck property solve affirmatively a question of N. Varopoulos on projective tensor algebras. They also turned out to be useful in a recent construction of Banach spaces in connection with some conjectures of A. Grothendieck on tensor products (see [50]).

Part of the material presented here was already announced in the C.R. Acad. Sci. Paris notes [4] and [6]. The reader will find a summary in [11].
### 1. Preliminaries and decomposition lemma

Let us first fix some notation.
$D = \{z\in\mathbb{C};\ |z|<1\}$ is the open unit disc and $\Pi$ the circle equipped with Haar measure m. Denote $P_r$ ($0\le r<1$) the Poisson kernel, $\mathcal{R}=\mathcal{R}_+$ (resp. $\mathcal{R}_-$) the positive (resp. negative) Riesz projection and $\mathcal{H}$ the Hilbert transform.

Define for convenience

$$ \|f\|_w = \sup_{\lambda>0} \lambda\, m[|f|\ge\lambda] \quad\text{for } f \text{ measurable on } \Pi, $$

$$ \|f\|_{L^1/H^1_0} = \inf\{\|f+h\|_1;\ h\in H^1_0\} \quad\text{for } f\in L^1(\Pi). $$

The restriction map $f\mapsto f|_\Pi$ gives an isometric embedding of A in $C(\Pi)$.
Identifying an $H^p$-function with its radial limit, the space $H^p$ can be seen as a subspace of $L^p(\Pi)$. If it is not specified otherwise, $A$- and $H^p$-functions will always be seen as functions on $\Pi$. Throughout the paper, C will be some numerical constant.
PROPOSITION 1.1. Assume $\mu$ in $M(\Pi)$. Then

(i) For $\alpha<1$, $\mathcal{H}(\mu * P_r)$ converges in $L^\alpha(\Pi)$ for $r\to 1$.

(ii) $\|\tilde f\|_w \le C\,\|\mu\|$, where $\tilde f = \sup_{r<1} |\mathcal{H}(\mu * P_r)|$.

The reader will find a detailed exposition of these classical facts in [20] (see Theorem 3.1 p. 57, Theorem 2.1 p. 111).
LEMMA 1.2. I f toEL~(H) and 0~<a<l, then -< I,', -o llol,:
Proof. Define A(2)=m[IJ]>~.] and fix 0<2o<OO. Then, by partial integration
Taking then
the required inequality follows.
] ' 0 - -
### II'oll,
As consequence of Proposition 1.I and Lemma 1.2, we get LEMMA 1.3. IffELl(II),
### ~oeL+(rI)
and 0~<a<l, then
f I~t-(f)l ~c~ <~,l_-~Ca IIt~ Iltoll: I~1,
Let us recall the construction of an outer function. Assume $f>0$ is a bounded measurable function on $\Pi$ with $\log f$ in $L^1(\Pi)$. If for $z\in D$ we define

$$ g(z) = \exp\left( \int \frac{e^{i\theta}+z}{e^{i\theta}-z}\, \log f(\theta)\, m(d\theta) \right), $$

then g is an $H^\infty$-function and has boundary value $f\, e^{i\mathcal{H}(\log f)}$.
N e x t l e m m a is related to the so-called Havin lemma (see [25]).
LEMMA 1.4. I f A is a measurable subset o f H and 0 < e < l , then there are H ~- f u n c t i o n s q9 a n d ~0 such that
(i)
(ii)
### I~(z)-l/51~<
e for z E A (iii) [~p(z)I---<e for z E A (iv)
(log e-l)2lAI
(log e - i )
### IAI i/2.
Proof. Only the L2-boundedness of the Hilbert transform is involved here. Take first
and consider the H~-function
r = 1 - - ( 1 - - e ) Z A
f = r e i~(l~
Then 13q=e on A and since
I 1-3~ ~< I 1 - rl + {(1 - cos ~e(log r)) 2 + sin 2 Y((log r) ) 1/2
we get
~< I I - r l + l ~ ( l o g r)l
/ I x ,
Thus if
### q0=l(1--f) 2,
(ii) and (iv) are fulfilled. Take now u = 1-iqo I and g = ~ c e i~(l~
Then Iqol+lg[~<l and one verifies easily that IIl-gll2~<C(log l/e)IAI ~/2.
Define v / = f . g , which satisfies clearly (i), (iii) and (v).
LEMMA 1.5. There is a constant C such that i f f is positive measurable on II and 0 < 2 < ~ , one can f i n d cp E H ~ satisfying
(i)
(ii) Iq~ If~<3;t
(qi)
### (iv) I1(1-~)f111 ~<c Str>ajf.
Proof. If [IfHw=~, there is nothing to prove. Otherwise, the function r = [ max (2-If, 1)] -I
has integrable logarithm and one can consider the H~-function
= T e i ~(Iog r).
Define ~=l--(1--~l)2=2~l--~q 2 for which
### 1~1<~31~0,1.
Hence (on FI)
Also
where
and
Iq~lf<~ 3rf <~ 3k.
### f(
1 - r ) z ~< m l f > 2 ]
Because (log X ) 2 ~ X for x > 1, it follows that II1-~011, ~< c,~ -~ Ilfllw and Finally
~ c & - '
### fry>a f"
9
f = f l + f 2 /-
II(1-~o)f[I,~4~ f+A[[1-~[l,
JOe>).]
implying (iv).
As a consequence of preceding lemma, we get following Marcinkiewicz decompo- sition for/-P' functions.
PROPOSITION 1.6. For given $0<p<\infty$, there is a constant $C_p$ such that if $f\in H^p$ and $0<\lambda<\infty$, there is a decomposition $f=f_1+f_2$ in $H^p$ where

(i) $|f_1| \le C_p |f|$ and $|f_2| \le C_p |f|$

(ii) $|f_1| \le C_p \lambda$

(iii) $\int |f_2|^p \le C_p \int_{[|f|>\lambda]} |f|^p$.
Proof. Apply preceding result to the Ll-function ~ replacing 2 by ;t p. Let q9 be the H~176 obtained in this way. If p/> 1, define
f l = f q 0 and f2=f(1-q0).
For 0 < p < I, define first
q~, = 1-(1-qr k where k = [ 1 ] + 1 and take again f i = f q~l, fz =f( l-cpl ).
In what follows, crucial use will be made of the following result.
PROPOSITION 1.7. There is a constant C such that given a positive $L^1(\Pi)$ function f, $\int f = 1$, and $0<\delta<1$, there are positive scalars $(c_i)$ and sequences $(\theta_i)$, $(\tau_i)$ of $H^\infty$ functions satisfying the following conditions:

(i) $\|\theta_i\|_\infty \le C$

(ii) $\sum_i |\tau_i| \le C$

(iii) $|\tau_i|\, f \le c_i$

(iv) $\sum_i c_i\, \|\tau_i\|_1 \le C\,\delta^{-C}$

(v) $\int \big|1-\sum_i \theta_i \tau_i\big|\, f \le \delta$
Proof. Fix a positive number M=M(6)> 1 which will be specified later and define for i E Z
A i = [M / ~<f< M~'+'].
Clearly [Ail<~M-ifAif<~M-i. Apply L e m m a 1.4 t o each set Ai, taking e=M -~, which leads to H~-functions ~Pi, ~Pi. Hence, by (i), (v) of L e m m a 1.4,
1~0~+ s- 11 ~< sl~Pi+ ~- 11 because [~Pi§ ~< 1 for s = 1,2 ....
and thus
IIl-~i+~lh ~ slll-~,+~llz~<
### ClogM
sIAi+s[ 1;2 < oo.
s = l s = l s = l
Therefore, the formula
oo
~; =
### 5 ~0; I-I
V:;'+,
s = 8
defines an H~ Moreover, by (ii) and (v) of L e m m a 1.4, and the Cauchy- Schwartz inequality
(vi)
### ~fAll--~,lf<~/fAI]--59~ilf+~SfA, II--w,+,lf
~ 5 e + C l o g M Z M ' + ' 2 slAill/2lA,+,l ''2
i s>~8
<~ 5 e + C l o g M E s M '-'2 E ( Mi
### Ia;I) ';2 ( M;+' Ia;+,l) ''2
s~>8 i
<~ 5 M-I + C l o g M E s M l-s/2 = C M -l.
s>~8
Further
(vii) flril ~< 5 M i+8
since by L e m m a 1.4(iii) we get on the set Aj forj~>i+8
~r;I ~< 5 M j+' l~X-i ~ 5M j+l E j - i = 5 M i+1 while f~r/l~<5f<~5 M i+8 on LIj<i+ s A~.
Also, by L e m m a 1.4(iv)
(viii) ~ M;ll~,.Ih = 5 ~
~ C (logM)Z ~
### MqA;I
~ C (log M) z.
For t=0, 1,2 ... 7, write for convenience i=t provided i=t (mod 8).
~ 5.
T h u s
7
t = 0
### j>>-i+8
By (ix)
(xii)
### Elr,.l~40
and from definition and (x) (xiii)
Since
f ~ 7
### f 1--~OiT~ f~Itll--r]tlf<~Ct~=ofBIl--r]t[f.
(xi) shows that (v) will be satisfied for M~c~ -j .
Taking ci=5 M '§ conditions (iii) and (iv) follow from (vii) and (viii).
So this completes the proof.
### 2. Absolutely summing operators and the cotype property
For completeness' sake, we recall the following fact (cf. [44], Theorem 2.3) concerning the decomposition of an absolutely summing operator T on the disc algebra A.

PROPOSITION 2.1. Assume T p-summing ($p\ge 1$) on A. Then T has a decomposition $T=T_1+T_2$, where the components $T_1$, $T_2$ fulfil the following conditions:

(i) $\pi_p(T_1)^p + \pi_p(T_2)^p \le \pi_p(T)^p$,

(ii) $\pi_p(T_1)$ is realized by a Pietsch measure on the circle $\Pi$ belonging to $L^1(m)$. Moreover $\pi_q(T_i) \le \pi_q(T)$ for $0<q\le\infty$.

(iii) There is a sequence of operators $S_n: C(\Pi)\to A$ such that $(T_2 S_n)$ converges in $\pi_p$-norm to an operator $\tilde T_2$ satisfying $T_2 = \tilde T_2\, j$ where $j: A\to C(\Pi)$ is the injection.

Proof. Since $p\ge 1$, the A. Pietsch factorization theorem (cf. [47]) provides a Radon probability measure $\mu$ on $\Pi$ such that

$$ \|T(\varphi)\| \le \pi_p(T)\Big(\int |\varphi|^p\, d\mu\Big)^{1/p} \quad\text{for } \varphi\in A. $$

Let $d\mu = h\,dm + d\mu_s$ be the Lebesgue decomposition and $L=\bigcup_n K_n$, $K_n$ compact, a $K_\sigma$-subset of $\Pi$ so that $m(L)=0$ and $\mu_s(L)=\|\mu_s\|$.
By classical results from peak-set theory (see [20], p. 203, possible to find a sequence (~0k) in A satisfying the conditions
### (iv) II 0kll l
(v) (~PD converges pointwise to 0 on the set L (vi) (~Pk) converges to 1 a.e. (with respect to m), Since in particular (~/'k) converges in LP(g), one can define
and
Clearly
T ~ ( O = lira T(tp~p k)
k ----> 0o
L e m m a 4.5), it is
T2= T - T 1.
and
### [IZ2( 0)ll )
from where (i).
A similar reasoning actually shows that ~rp(Ti) is realized by an m-regular meas- ure. Also, from definition, it follows immediately that
### .7"tq(Ti) ~ ,7l(q(T)
for all 0 < q ~< ~ . Denote for each n = I, 2 .... ,
R,, : C(H) ~ C(Kn) the restriction operator and
E , : C ( K , ) - + A a norm-preserving extension (cf. [44], Theorem 2.1).
If one defines S,=E,d~n, it is easily verified that the sequence (T2Sn) converges in ~rp- norm to an extension/~2 o f T2.
Remarks. (I) The first c o m p o n e n t Tt can be extended to H =, defining T1(9) = lim
### Tl(q~-~Pr).
r - - } l
<
It is indeed clear that the limit exists. This extension will be useful in what follows.
N E W B A N A C H SPACE P R O P E R T I E S OF T H E DISC A L G E B R A A N D H ~ 15 (2) In case T is 0-summing, one applies Proposition 2.1 with p = 1. Thus T1 and T2 are again 0-summing. Now, for each n, the operator T2Sn will therefore be nuclear (see [32], Theorem 5) and therefore also 7~2 and T2.
Let us now state our first main result.
THEOREM 2.2. Let $p\ge 2$ and Y a Banach space such that any bounded operator from C into Y is p-summing, i.e. $B(C, Y)=\Pi_p(C, Y)$. Then also $B(A, Y)=\Pi_p(A, Y)$.

($\Pi_p$ can be replaced by $I_p$, taking Proposition 0.1 into account.)
Theorem 2.2 will be obtained from an extrapolation technique, which was already previously used in [43], [34] and G. Pisier's proof that the q u o t i e n t o f L 1 by a reflexive subspace verifies the Grothendieck theorem [48].
The main ingredient is an interpolation inequality on the p-summing norms, which will be presented in the next sub-section.
2.1. An interpolation inequality
Our purpose is to show the following fact:

PROPOSITION 2.3. Assume $1<p<\infty$ and T p-summing on A. Let $p<q<\infty$ and $\theta$ be defined by $\frac1q = \frac{1-\theta}{p}$. Then for all $0<\psi<\theta$, one has an inequality

$$ i_q(T) \le C(p,q,\psi)\, \|T\|^{\psi}\, \pi_p(T)^{1-\psi} $$

where, more precisely, $C(p,q,\psi)=C(p)/(\theta-\psi)$.

This result turns out to be sharp, as will be clear from a discussion below.
The proof of Proposition 2.3 depends on the following preliminary decomposition property.
LEMMA 2.4. There is a constant C > I such that under the hypothesis o f Proposi- tion 2.3 and for given 0<t~<l, the operator T has a decomposition T=I+R, where
(i) I is strictly q-integral and
iq(1) <~ Cp(O-~)) -1 ~-c(I-~)/p
(ii)
### where Cp is the norm of the Riesz-projection regarded as an operator in L p.
Proposition 2.3 is then obtained by an iteration procedure. Starting from Ro=T, consider successive decompositions
### Rk=Ik+~+Rk+~
according to Lemma 2.4.
We see that for ~ small enough
and
### iq(lk) ~ Cp(O-~))-If~-c(I-r f~(I-~)*) k-1
IlZll@~rp(T) l-~.
Specifying
### 6=(2C) -~
leads to the estimation
and thus
k
### iq(T)~
C(p) ( 0 - ~ ) -jlITII~gT) 1-r as required.
### Proof of Lemma
2.4. First one can identify T with the component T~ in decomposi- tion 2.1. Indeed, since T2 is defined on C(H), H61der's inequality yields immediately (cf. [48], p. 75) that
i q( Z2) <~ i q( T2) <~
### II f211~176 ~< 2llTllSr g T) '-~
Thus T extends to H ~ and there exists f E LI+(H), S f = I, such that
forq06H |
Let
### (ci),
(0i), (ri) be the sequences obtained by application of Proposition 1.7 to the function f, taking 5 as in L e m m a 2.4. Define
and
### R = T - I
~ m
I(q~) =
which makes sense by (i), (ii) of Proposition 1.7. Also []RI[ ~< CI[T[[. Since
### IIR(cp)II<~p(T) lq~l p l-~,Oir~ l fdm I
N E W B A N A C H SPACE P R O P E R T I E S O F T H E DISC A L G E B R A A N D H = 17 estimation (v) of Proposition 1.7 combined with (i) and (ii) of Proposition 1.7 show that
### ~p(R) ~< c~ vOrp(T)
Remains to verify (i), Extend I to C(H) by the formula
which clearly makes sense if ~ has finite spectrum. Now
for F E Ll(II) satisfying
### fq3Fdm I
~IIT(~)II for a l l ~ E H | Also
### f(Ezi~+(OiricP))F=limfEOirig~(ri F)
defining for convenience ~_=~_-x-P,.
Fixing r < 1, application of the HOlder inequality leads to
By definition of 0, we get
### q'=(~q')a+(1--~q')p'
where a = p ( p - 1 + ; ) - ' .
So, applying the H N d e r inequality for sequences, we get the estimation U x g on the first factor of the preceding inequality, where
To estimate U, apply first L e m m a 1.3 with w=lri[, which gives
### f I ~'-(~,~1 ~ 13,1 ~ ~ I1~,11',-~ I1~, ~1: 9
Thus, using (ii) and (iv) of Proposition 1.7
### ( c y'o ~"'-~176 ( c y'aa_c,(,_o),o
~< \ ~ ---2-d/ Ilelff,
<~ C- (OP)~ O-cto-~),p tlTIt~
(o-~) ~
~<p~(0-~)-I~ -cO-~p IITII ~
The L P - L p' duality shows that
I I/p"
over the sequence
### (~i)
fulfilling
Since
we obtain estimation by
_ I~ / ~ / ~
By Proposition 1.7 (ii), (iii) and the M. Riesz theorem
Consequently
/"
### v <~ (cc. ~.(r))~'("q'-~'~ <~ (cc~ ~(~)'-*.
By completion, I extend to C(H) and
### c~(o_~)., ~-c.-.)~ 11711' ~(r) ~-'
as desired.
2.2. Consequences
Let us first proceed with the proof of Theorem 2.2. Denote $\gamma_\infty(T)$ the factorization constant of the operator T through an $L^\infty(\mu)$-space. By Proposition 2.3, we get for $T\in\Pi_p(A, Y)$

$$ \gamma_\infty(T) \le i_q(T) \le \frac{C(p)}{\theta-\psi}\, \|T\|^{\psi}\, \pi_p(T)^{1-\psi} $$

for $p<q$ and $\psi<\theta$. Since $\theta\to 1$ for $q\to\infty$
### (*)
If now Y satisfies the hypothesis B(C, Y)=YIp(C, Y), we get also
~tp(T)~<C(Y)F| for a fixed constant C(Y).
Hence
### ytp(T) <~ ( C(~C(P).) I/q~ HT H.
proving the equivalence of operator- and p-summing norms for finite rank operators
from A into Y. Since A has the bounded approximation property (in fact A has a basis [3]), we conclude that B(A, Y)=Hp(A, Y).
If we choose in (*)
we find
/ ~p(T) \ - I
### ~p(T)
r~(T) ~ C(p)IITI[ log IITll "
Hence
THEOREM 2.5. If T is a p-summing operator on the disc algebra A, then T has an extension $\tilde T$ to $C(\Pi)$ satisfying

$$ \|\tilde T\| \le C(p)\, \|T\|\, \log\frac{\pi_p(T)}{\|T\|}. $$

Since always $\pi_2(T) \le \|T\|\,(\operatorname{rank} T)^{1/2}$ for finite rank operators, the following corollary is immediate:

COROLLARY 2.6. (i) A rank n operator T on the disc algebra has an extension $\tilde T$ to $C(\Pi)$ satisfying $\|\tilde T\| \le C(2)\,(\log n)\,\|T\|$.

(ii) If X is an n-dimensional subspace of A complemented by a projection P, then X is a $P_\lambda$-space with $\lambda \le C(2)\,(\log n)\,\|P\|$.
This result, answering affirmatively problems raised in [46] and [59], is best possible as we will indicate at the end of this section.
Combining Theorem 2.2 with Grothendieck's fundamental theorem $B(C, \ell^1)=\Pi_2(C, \ell^1)$ and a result due to Maurey (see [40]), the following consequences are derived.

COROLLARY 2.7. $B(A, \ell^1)=\Pi_2(A, \ell^1)$, or equivalently, $A^*$ verifies the Grothendieck theorem.

COROLLARY 2.8. If $c_0$ is not finitely representable in Y, then $B(A, Y)=\Pi_p(A, Y)$ for some $p<\infty$. In particular, if Y is a cotype 2 space, then $B(A, Y)=\Pi_2(A, Y)$.
Let us recall that Y has cotype q ($2\le q<\infty$) provided

$$ \int \Big\| \sum_i \varepsilon_i y_i \Big\|\, d\varepsilon \ \ge\ \gamma\, \Big( \sum_i \|y_i\|^q \Big)^{1/q} $$

holds for some constant $\gamma>0$ and for all finite sequences $(y_i)$ in Y. (As usual, $(\varepsilon_i)$ denotes the Rademacher sequence.)
### Proof of.Theorem
0.2. Consider elements (Xk)j~<k<~n in
and the operator
### T:A---,IIn
given by T(cp)=((q0,xk))~<k~< ~. Clearly
### 'llql<~2 sup I[~ ekxk
By the extension property for 2-summing norm and Corollary 2.7, there exists an operator T:C(I-I)-->II~ satisfying
where
### j:A--->C(FI)
is the injection, and
### 11 ll < 2( = 2(T) <CllTll.
Denote/zkEM(H) the kth component of ( ~ * . Then
ek = --.I
Moreover, if for k= 1 ... n we consider -~k in L~(II) representing
it follows that
### (~k-~) •
thus ~k--J?k is in H~ and in particular ~k<<m.
The following observation, due to Figiel and Pisier (cfr. [19] and the remarks at the end of [58]) is well known. We include its proof for selfcontainedness sake.
PROPOSITION 2.9.
### Proof.
If A denotes the Cantor group, then L~(A) verifies the Grothendieck theorem. For ~EL~(A) and S a finite set of positive integers, denote ~(S) the corre- sponding Fourier-Walsh coefficient of ~. Fix a sequence (x~ in X* satisfying
### I<x:,x>l <llxll
for all x f i X .
Then the map
### a:Llx---~l 2
defined by a(~)=((~{i}, x*)) is norm-1 bounded and hence
### zq(a)<~C(X).
Given an arbitrary (finite) sequence (x~) in X, it follows thus
### C(X)f[l~ixilld~>>'~lla(xi| x'>l"
Hence, in particular
by an appropriate choice of the x~.
The reader is referred to [57] and [58] for the following facts:
PROPOSITION 2.10. (i) For l<<.p<~, the space lip is isomorphic to its direct sum (ii) The disc algebra A is isomorphic to (~'n~l A)c o"
Combining Corollary 2.7 and Proposition 2.10 (ii) we get

COROLLARY 2.11. The dual of the disc algebra, $A^*$, is a space of cotype 2.

By arguments of local reflexivity, Corollaries 2.7 and 2.11 remain valid if A is replaced by $H^\infty$. Since it is not known if $H^\infty$ has the bounded approximation property, the extension of Corollary 2.8 to $H^\infty$ is not clear. However, the result holds assuming that Y has the bounded approximation property.

The next results are formal consequences of the cotype 2 and Grothendieck properties.

COROLLARY 2.12. (i) $c_0$ is, up to isomorphism, the only complemented subspace of A possessing an unconditional basis.

(ii) If $\sum X_j$ is an unconditional decomposition of A (resp. $H^\infty$), then $\sum X_j$ is a $c_0$-sum (resp. an $\ell^\infty$-sum) (cf. Proposition 2.10).

Corollary 2.8 allows us to improve Theorem 2 of [30] as follows.

COROLLARY 2.13. Given a reflexive subspace X of $A^*$, there exists an embedding $\beta: X\to C(\Pi)^*$ such that moreover $j^*\beta(x)=x$ for $x\in X$, where $j: A\to C(\Pi)$ is again the injection.

Proof. By Lemma 3 of [30], a reflexive subspace X of $A^*$ does not contain $\ell^1_n$'s uniformly and hence $X^*$ has a finite cotype. Therefore, by Corollary 2.8, denoting $i: X\to A^*$ the injection, $i^*|_A = T$ is p-summing and thus p-integral for some $p<\infty$. Thus T factors through an $L^\infty(\mu)$-space and can be extended to $C(\Pi)$. Let $\tilde T$ be this extension. Since $\tilde T|_A = T$, it follows that $i=j^*\beta$ where $\beta$ is the restriction of $(\tilde T)^*$ to X.

Further results concerning projections in the spaces $L^1/H^1_0$, A and $H^\infty$ will be presented in section 5.
2.3. An alternative approach
If X is a Banach space, the Grothendieck property of $X^*$ is a formal consequence of the fact that each 0-summing operator from X into Hilbert space is nuclear, thus the equality $\Pi_0(X, \ell^2)= N(X, \ell^2)$. We give a direct proof of this fact, without using the theory of operator ideals. Denote by $\nu_1$ the nuclear norm.
PROPOSITION 2.14.
Let
### TEB(X*,
/2n) be induced by the sequence
### (xi)l<_i<_n
in X. Take elements
### (Xjg)I<~j.<~N
in X* satisfying sups:_+ I lie
### ejXT[ [
~<1. Consider a matrix
### (a~i)l<~i<~n" I<~j~N
such that supj r.i[ao[2~l and denote
### M:l~N----)12,,
the corresponding operator, for which tlMII~ < 1. Consider the composition
2 T* R I M 2
In---> X---> l~---, ! n
where
Because
### B(l~,12)=IIo (l 1,12)
(see introduction) the hypothesis
g i v e s
v~(MR) <<.
~<2C(X) and thus
trace
2C(X)IITII.
But clearly
N
trace
### (MRT*) = ~ a# ( x i, xT)
i=l j = l
and for a suitable choice of (a0), it follows
Proposition 2.14 has no converse as will be indicated at the end of this section. The following theorem, which was conjectured in [32] (cf. Theorem 1), provides a different proof that A* has Grothendieck property.
THEOREM 2.15. A 0-summing operator from the disc algebra into an arbitrary Banach space is nuclear.

Besides the lemmas presented in section 1, the proof of Theorem 2.15 requires some further observations.
LEMMA 2.16. Assume TE 1-10(H ~176 Y) and
### (ri) a
sequence o f H~-functions such that
For each i, define the operator Ti by T,(qg) = T(rig).
Then
(i) ~
### fftp(Ti) ~ II I ,111| ,,03
(O<P ~<1)
(ii) The serie E Ti converges in II,(/-/~, Y) (p>O).
Proof. It i s clear that (ii) follows from (i), replacing the r i b y sums of the ri on consecutive blocs.
In order to verify (i), take for each i a system (q0;, D in H ~ such that
### X I( ~Di, k'X*
) l p ~ I for x* e (/-F)*, IIx*ll ~ 1.
k and
k
Then, for some sequence (Qi) o f positive numbers such that II(oi)llm_p,~l and for some x* (~ (H~) *, Ilx*ll < I.
Ti~gi, k, ) l p
### r i X*
If g is a function on II and 0 E l I , let
### go
be the translate, i.e.
### go(~)=g(O+Ip).
N e x t lemma goes back to [1] and was in slightly different form also used in [35] and [32]. The author is grateful to S. Kisliakov for an alternative proof.
LEMMA 2.17.
(i)
(ii)
### Proof.
Fix O < p < 1. The Pietch theorem yields a Radon probability m e a s u r e ~ on the closed unit ball o f C(Y[)* such, that for ~ EA
### [[T(qg)l[<~ygp(T)(f Ix*(cp) lPg-2(dx*)} lip.
Given x* E C(H)*, denote x~' the image measure of x* for the map
Observe that
### X*(~(Pr, o))=[~(x~*Pr)](O).
F o r 0 E H and 0~<r, s < l , it follows
J
and
### [[~r__~S[[Lpg~ys S II~((x~* Pr)-(xr* P ,) )ll~( dx*) ) i 1lip
.
Thus (i) follows from Proposition 1:I and the Lebesgue dominated convergence theo- rem. Further
sup
sup
### I~(xt.P r) (O)['O(dx*)
r < l . ) r
and hence, taking
(,0 m X[F;~,ll
### fP'~o<~,trrf/(supl~txt.e)W)l)"o)r
By Proposition I. 1 and L e m m a 1.2, we can estimate for fixed x*
### f (suPl~(x'~*P)(O)])Pto(O, dO
~< l_~2p ][to,]l-' ]1 sup ]~(/~'-x-P,,] ]]~v
Hence
and finally
### &P IIr ~ ~r~(T~-~Cp I1~o11] -p
completing the proof.
To prove Theorem 2.15, we proceed again by decomposition. Fix 0 < p < l . We show that for each 6 > 0 there is a constant K~<o0 such that any
decom-
poses as
where
(i)
### NEN(A, Y)
and vl(N)~<K~p(T) (ii) ~p(R)<6 9 ~p(T).
Fixing 0 < 6 < 1 and iterating, an estimation
### vl(T)<<.C:tp(T)
then follows.
By the second remark after Proposition 2.1, we can assume that ~ ( T ) is realized by an m-regular Pietch measure on II and hence is defined on H ~176
Consider again a Radon probability measure f~ on the unit ball of C(II)* for which
### ItTt~ll~<~p(~{f Ix*(~Qtd~*)} '/"
forq0EA.
Denoting Ix*] the variation of the measure x*, consider following measure on II
### = f Ix*l Q(dx*).
Thus/~ is positive and
Let
### dl~=fdm+dlus
be the Lebesgue decomposition.
Fixing 6 > 0 , apply Proposition 1.7 to f , giving the sequences (ri), (0i) in H = and
### (ci)~>O.
If for each i, we define the operator
### T,.(cp)=T(Oi~cp),
Lemma 2.16 implies
<<.
If
### F,~(O)=SUPr<l [[Ti(~(Pr.-o))1[,
application of Lemma 2.17 gives
Take
### 2i=ci~rp(T)/6
and apply Lemma 1.5 to
to obtain an H*~
### Ki
such that II~/IL ~< 3
Define now
### N = E N i i R = T - N .
For 0 ~ r < l , consider the operator
on C(H).
Since
(ri
J we get
Hence
### ~(T)
Also, for
s < l
I(r/xi) (W)l
### II
T,<~(P,, _~,))- T , ( ~ ( e s, -
### f I~,(,p)l
F i ( ~ ) ) 1 - p
### IITA~(P~._w))- T,(~(e,,_w))ll,
C
I- ]]T,(~(P,, w))-
### Ti(~(Ps,
~0))[I p
~.CA~ ~ p J
and L e m m a 2.17 implies the convergence of
### (Si, r)
in HI(C(II), Y) for r-->l. Since the
### Si. r are
clearly nuclear, the limit operator Si will also be nuclear and Pl(Si) =
Since
extends
we obtain
and
1/I(N) ~ ~ /
~p(T) ~.~
### c, IIr,lh -< C~-C-'~p(T).
So it remains to estimate :tp(R). Let for convenience r/= 1 - E 0i ~ u r Take a sequence 0Pk) in A such that
(i) ll~2kll| ~ 1
(ii)
### J I'Pkl
d ~ - - , o (iii) ~ k ~ l m-a.e.
Consider for each k the operator
Then
### IIRk(OII
= I[T((1-~-~ 0,. q ~i) ~Wk)l[ = lim
### IlT(OT*Pr)
q~Wk)[I r---~ I <
and
Hence
where the second factor is dominated by
### l ,f
( It/, -x-Pr) ,~k[ d/~.
Since lim
### IIR-RklI--0,
we get
k--~oo
Taking previous estimates in account, we see that the latter quantity is bounded by
### <~+C(p)6.
This establishes Theorem 2.15.
Theorem 2.15 permits one to distinguish the disc algebra from certain other translation invariant spaces. Recall that a subset $\Lambda$ of $\mathbb Z$ is a $\Lambda_p$-set provided $L^1$- and $L^p$-norms are equivalent on linear combinations of the characters $e^{int}$ with $n\in\Lambda$. Combining Theorem 2.15 with Lemma 1 of [30], the following result is derived.

COROLLARY 2.18. Assume $\Lambda\subset\mathbb Z$ such that $\Lambda\cap\mathbb Z_-$ is a $\Lambda_p$-set for some $p>1$. Then the space $C_\Lambda$, the closure in $C(\Pi)$ of the polynomials $f=\sum_{n\in\Lambda} c_n e^{int}$, is not a quotient of the disc algebra.
2.4. Remarks
(1) In proving the interpolation inequality in Proposition 2.3, the norms rtp(T) and :rq(T) were computed using different measures. It may be possible to derive the result from weighted norm inequalities on the Hilbert transform, using less the algebra structure.
(2) It is shown in [13] that the spaces $L_{\{0,1,\dots,n\}}$ of polynomials of degree $\le n$ embed uniformly complementedly in A. Hence, the previous results localize to these polynomial spaces. It is also proved in [13] that the Banach–Mazur distance $d(\ell^\infty_{n+1}, L_{\{0,1,\dots,n\}}) \le C\log n$, while for an arbitrary finite subset $\Lambda$ of $\mathbb Z$, one always has that $L_\Lambda$ is only a $P_\lambda$-space for $\lambda$ of order $\log|\Lambda|$. So Corollary 2.6 is sharp. Other examples of complemented subspaces of A are those obtained by spline interpolation in [3].

(3) Assume $\Lambda\subset\mathbb Z$ such that $\mathbb Z_+\subset\Lambda$ and $\Lambda\cap\mathbb Z_-$ is a Hadamard lacunary set. From the result on the disc algebra, it is then straightforward to show that also $B(C_\Lambda, \ell^1)=\Pi_2(C_\Lambda, \ell^1)$. On the other hand, as we explained in the introduction, the orthogonal projection from $C_\Lambda$ onto $L^2_{\Lambda\cap\mathbb Z_-}$ is 0-summing and onto. Consequently, the previous property does not imply nuclearity, even in the case of translation invariant spaces. The reader will find related results in [32], section 3.

(4) It should be most interesting to determine for which spaces X it is true that any operator from A into X can be extended to $C(\Pi)$. This property is obviously true for $X=\ell^\infty$ and, by our results, if X has a finite cotype (the extreme case in the other sense). The case $X=B(\ell^2,\ell^2)$ is unsettled and a positive solution would have applications in operator theory.
### 3. An interpolation result for vector-valued $H^1$-spaces
The purpose of this section is to characterize certain spaces $H^1_X$. Our motivation for studying such spaces was to simplify earlier work on the minimum-norm lifting $\sigma: L^1/H^1_0 \to L^1$ by using interpolating sequence theory. The results presented in the first paragraph can be extended within the frame of the Lions–Peetre interpolation theory.

There are also possibly other applications than those considered here.

3.1. Characterization of certain vector-valued $H^1$ functions

Our purpose is to prove Theorem 0.3. In what follows, Proposition 1.7 will again be important.
We first show the following extension of Proposition 1.6.
LEMMA 3.1. Given f E Ll+(1-I) and 6>0, there exists f E Lt+(rI) and 9 E H | satisfy- ing the following conditions:
(ii)
### Ilfll,<-.c6-cllfll,
(iii) (iv)
N E W B A N A C H SPACE P R O P E R T I E S OF T H E DISC A L G E B R A A N D H ~
-q~lfdm<~61lfll,
### JiI
If we let
then obviously
and
Notice that by (iii) of Proposition 1.7 IF~l <~ C g and where
I r f l >~ cl]
and
### f2= E OiTif ~
Thus the decomposition
### Fq~=FI
+F2 satisfies (iv)9 Let us now fix some terminology.
### Proof.
Apply first Lemma 1.7 and put q0=E 0 i ~ and f = E
### girl[.
Then (i), (ii), (iii) hold9
Next, apply for fixed i E Z Lemma 1.6 to the H ~ function
### ~'iF,
taking ;t=ci. This gives a decomposition in H ~
IfXo, X1 are linear subspaces of a vector space X and
### II
respectively, we equip
and
with the norm
### I10, II
Ill norms on Xo, X1
X o f~ X 1 with the norm
### Ilxll~0, x,
= max (llxll0, llxlh)
X otJX I= {xEX, x = x o + x I for some xoEXo, xlEXi}
### Ilxllxoox= inf (llxollo+llxlll0.
X = X 0 + X I
Fix a positive integer N and e>O.
L e t Xo be C N equipped with sup-norm and XI obtained by defining on C N the
n o r m
### II(z~ ... zN)ll~
We will use the following simple fact:
k
LEMMA 3.2. For given x=(zl . . . ZN) E C N, define x' =(z[ . . . z'N) by { z'k = zk if Izkl ~>
x,
z'k 0 otherwise.
Then
### IIx'lh.<211xllxoo~.
Proof. L e t X=Xo+X I where Xo=(Zl, o .. . . . ZN.0),
I f
### Izkl~211xllxooX,,
then clearly
and hence
x~=(Zl,i . . . ZN, ~) are such that
### IIx,ll, ~ ' T ~ Izkl, IlxllxouX,
proving the lemma.
I f X i s a Banach space, denote for l<~p~<oo b y / - F x the subspace o f L~ of f u n c t i o n s f such that f ( n ) = 0 if n < 0 .
N E W B A N A C H SPACE P R O P E R T I E S O F T H E DISC A L G E B R A A N D H ~ 33 Let us prove T h e o r e m 0.3.
PROPOSITION 3.3. The norms of the spaces $H^1_{X_0}\cup H^1_{X_1}$ and $H^1_{X_0\cup X_1}$ are equivalent up to a fixed constant (which does not depend on N or $\varepsilon$).
Proof. The H 1 ~ U H)x~ norm clearly dominates the HlxouX~ norm, since if ~o+ ~1, then
### =11~o11%+11~111,,,,.
Conversely, assume ~=(F1 ... FN) in H~roUX and define f b y
= Eli(F1(0) ...
### eN(O))llxoox,
for o ~ n .
Fixing 6 > 0 , take f and ~ as in L e m m a 3.1. For k = l ... N, let further Fkcp=Fk, o+Fk, 1
### IIFk, llh ~C f IFkl.
J[lFkl ~>f]
be an H L d e c o m p o s i t i o n satisfying [Fk, o]<~f and
Define ~0=(F1,0 ... FN, 0) and ~1 =(F1,1 ... FN, i)- Then by L e m m a 3. I (ii)
max
### I&,ol <~ Ilflll ~<
2c6-C)1~11,,,oO~, and using L e m m a 3.2
k k
### <<. 2of
II(fl . . . fDIIxo o x,.
Finally, b y L e m m a 3.1 (iii)
### ][~-(~o+OllZ,Xoo~ = f ll-~l llW, ... F~)llxoo X =-~ f ll-~olf
~< 6 II~ll,%ox,.
For ~>0, denote ~ the ball with midpoint 0 and radius Q in
### nlooxi.
From the preceding, it follows that ~1 is contained in
x0 u x, :llr
### ~2c~-C+2c} +flt~"
Choosing 6 < I , we conclude that
II~ll.~, u., ~< const, t[~[In, 9
X 0 X 1 X 0 0 X I
Dualization of Proposition 3.3 (in which one can obviously replace H l by H01) leads to PROPOSITION 3.4. The norms o f the spaces Lx.o/~nLx;/~ and Lxonx;/~ are equiv- oo
alent up to a f i x e d constant.
We denote here by ~ the subspace of Lcs of those elements which have H ~ components.
3.2. Application to interpolating sequences in the disc
Proposition 3.4 can be applied to obtain certain P. Beurling type functions.
Consider the following vector-valued interpolation problem. Let N be a fixed positive integer and ~ , ~2 . . . ~fN (finite) subsets of the open disc D. Let further for each k= 1 ... N a complex valued function Vk on ~k be given. Consider
~ = ( ~ l . . . ~N) suchthat ~ k E H ~ and ~kl~k=V~ (*) Let now X be C N equipped with an unconditional norm. Define
ax = inf IIr where the infimum is taken over all 9 satisfying (*).
Let Bk be the Blashke-product of the points in ~fk. If qb is a particular solution of (*), the general solution becomes
tlJ=0pl ... ~0 u) where ~k=~k+Bk'Wk and w k E H ~.
By unconditionality of X, this fact leads to the formula
a x = \ B 1 .... BN
If Xo, X1 are as above, Proposition 1.4 leads to the following result:
PROPOSITION 3.5. $\alpha_{X_0\cap X_1} \le K\, \max(\alpha_{X_0}, \alpha_{X_1})$, for some numerical K.
This property can be restated as follows.
COROLLARY 3.6. I f ~o, ~1 are solutions o f (*), then (*) has a solution ~ for which
Thus in solving (*), information on 1 ~- a n d / l - e s t i m a t i o n s can always be combined.
In particular, one has
COROLLARY 3.7. Assume $S_1,\dots,S_N$ are subsets of D for which there exist $H^\infty$-functions $\varphi_1,\dots,\varphi_N$ satisfying

(i) $\varphi_k(z)=1$ for each $z\in S_k$ and $k=1,\dots,N$

(ii) $\big\|\sum_k |\varphi_k|\big\|_\infty \le M$.

Then there exist also $H^\infty$-functions $\psi_1,\dots,\psi_N$ fulfilling (i) and moreover

(iii) $\|\psi_k\|_\infty \le K$ for each $k=1,\dots,N$

(iv) $\big\|\sum_k |\psi_k|\big\|_\infty \le KM$.
Recall that a sequence $(z_n)$ in the open unit disc is $\delta$-interpolating ($\delta>0$) provided to each sequence $(a_n)$ of complex numbers with $|a_n|\le\delta$ corresponds some $\varphi\in H^\infty$ with $\|\varphi\|_\infty\le 1$ and $\varphi(z_n)=a_n$ for each n. A result of L. Carleson asserts that a sequence $(z_n)$ is interpolating if and only if the sequence is uniformly separated, i.e.

$$ \inf_n \prod_{m\ne n} d(z_m, z_n) > 0 \qquad\text{where}\qquad d(z,w) = \Big|\frac{z-w}{1-\bar w z}\Big| $$

(see [14]). For $z\in D$, denote $\delta_z$ its Dirac measure. The sequence $(z_n)$ is called a Carleson sequence provided the measure $\varkappa=\sum_n (1-|z_n|)\,\delta_{z_n}$ is a Carleson measure on D (see [20], p. 31 for definition). The constant of the Carleson sequence $(z_n)$ is the Carleson norm of $\varkappa$.

A sequence in D is known to be Carleson iff it is a finite union of interpolating sequences. If $(z_n)$ is interpolating, then, by a result of P. Beurling, there is a sequence $(\varphi_n)$ in $H^\infty$ satisfying

(i) $\big\|\sum_n |\varphi_n|\big\|_\infty < \infty$

(ii) $\varphi_m(z_n)=\delta_{mn}$ (Kronecker's symbol).
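For convenience, the Carleson measure condition referred to above can be recalled in its standard form (this is the usual definition, not quoted from [20]): a positive measure $\mu$ on D is a Carleson measure when

$$ \mu\big(S(I)\big) \le C\, m(I) \quad\text{for every arc } I\subset\Pi, \qquad S(I)=\{re^{i\theta}:\ e^{i\theta}\in I,\ 1-m(I)\le r<1\}, $$

and the least such C is its Carleson norm.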
For an explicit formula for the functions $\varphi_n$, see [28] (Theorem 1). In the next section, we will make use of the following consequence of Corollary 3.7.

COROLLARY 3.8. Given $M<\infty$ there is $M_1<\infty$ such that if $S_1, S_2,\dots,S_N$ is a partition of a Carleson sequence of constant M, then there exist $H^\infty$-functions $\varphi_1,\varphi_2,\dots,\varphi_N$ satisfying

(i) $\varphi_k(z)=1$ for each $z\in S_k$ and $k=1,2,\dots,N$

(ii) $\|\varphi_k\|_\infty \le K$ for each $k=1,2,\dots,N$

(iii) $\big\|\sum_k |\varphi_k|\big\|_\infty \le M_1$.

The important thing is that K does not depend on M. A slightly weaker version of the previous result was obtained in [8] by different techniques.
### 4. Properties of the minimum norm lifting
Let us repeat that the minimum-norm lifting $\sigma: L^1/H^1_0 \to L^1$ maps $x \in L^1/H^1_0$ onto the unique $f \in L^1$ satisfying

$$\|x\| = \|f\|_1 \quad \text{and} \quad q(f) = x.$$
If $A \subset L^1/H^1_0$ is a WCC set (see introduction), then $\sigma(A)$ is relatively weakly compact ([44], Theorem 7.1). The purpose of this section is to prove a local version of this property. The following result implies Theorem 0.4.
THEOREM 4.1. For each $\delta > 0$ there exists $\delta_1 > 0$ such that given $L^1(\Pi)$-functions $f_1, f_2, \ldots, f_n$ satisfying the following conditions:

(i) $\|q(f_m)\| > (1 - \delta^2)\,\|f_m\|_1$ for $1 \le m \le n$

(ii) $\int \max_m \alpha_m |f_m| \ge C\delta \sum_m \alpha_m \|f_m\|_1$ whenever $\alpha_m \ge 0$,

then there are $H^\infty$-functions $g_1, g_2, \ldots, g_n$ such that

(iii) $|g_1| + |g_2| + \cdots + |g_n| \le 1$ pointwise on $\Pi$,

(iv) $\langle f_m, g_m \rangle = \int f_m g_m \ge \delta_1 \|f_m\|_1$ for $1 \le m \le n$.
Condition (ii) of Theorem 4.1 also means that the $f_m$ have mass at least $C\delta\,\|f_m\|_1$ on disjoint subsets of $\Pi$ (cf. [17], Proposition 2.2). We will derive Theorem 4.1 as a consequence of Corollary 3.8. The author obtained the result previously by a more direct method.
The next lemma, based on an argument of successive extractions, is left as an exercise.
LEMMA 4.2. Given $\varrho > 0$ and $\varkappa > 0$, there exists $\eta = \eta(\varrho, \varkappa) > 0$ such that if $(a_m)_{1 \le m \le n}$ are positive functions in $L^1(\Pi)$ and

$$\int \max_m a_m \ge \varrho \sum_{m=1}^n \|a_m\|_1,$$

one can find a subset $S$ of $\{1, 2, \ldots, n\}$ and a system $(A_m)_{m \in S}$ of disjoint measurable subsets of $\Pi$ satisfying

(i) $\sum_{m \in S} \|a_m\|_1 \ge \eta \sum_{m=1}^n \|a_m\|_1$

(ii) $\int_{A_m} a_m \ge (\varrho/2)\,\|a_m\|_1$ for each $m \in S$

(iii) $\int \max_{m \in S} (a_m \chi_{\Pi \setminus A_m}) \le \varkappa \int \max_{m \in S} a_m$.
Another elementary fact needed for the proof of Theorem 4.1 is the following approximation principle.
LEMMA 4.3. To each $\varepsilon > 0$ corresponds $\gamma = \gamma(\varepsilon) > 0$ such that for positive, disjointly supported $L^1(\Pi)$-functions $a_1, a_2, \ldots, a_n$ of norm 1, there exist functions $a'_1, a'_2, \ldots, a'_n$ such that

(i) $\|a_m - a'_m\|_1 < \varepsilon$ for each $m$

(ii) the functions $a'_m$ are obtained by taking disjoint convex combinations of the Poisson kernels $P_{z_k}$, for some $\gamma$-interpolating sequence $(z_k)$ in $D$.
Recall that

$$P_z(\theta) = \frac{1 - |z|^2}{|e^{i\theta} - z|^2}.$$
Sketch of proof of Lemma 4.3. If we define, for fixed $\varepsilon > 0$ and a positive integer $K$, for $k = 0, 1, 2, \ldots, \varepsilon^{-1}K - 1$,

$$z_k = \Bigl(1 - \frac{1}{K}\Bigr) e^{i\theta_k}, \quad \text{where} \quad \theta_k = 2\pi\varepsilon \frac{k}{K},$$

then $(z_k)$ is a $\gamma$-interpolating sequence, where $\gamma$ does not depend on $K$. Also

$$\|P_z - P_w\|_1 \le \text{const}\cdot d(z, w) \le \text{const}\cdot \varepsilon \quad \text{if} \quad |z| = |w| = 1 - \frac{1}{K} \ \text{and} \ |z - w| < \frac{\varepsilon}{K}.$$

Choose now $K$ sufficiently large to ensure in particular that $a_m \approx a_m * P_r$ for each $m = 1, 2, \ldots, n$, taking $r = 1 - 1/K$. The functions $a'_m$ are then obtained by replacement of the Poisson integrals by convex combinations of the $P_{z_k}$. Since the $a_m$ were assumed to be disjointly supported, it is clear that one can choose these combinations to be disjointly supported on the sequence $(z_k)$.
Proof of Theorem 4.1. We show that for some $\delta_1 > 0$ one has, for $\alpha_1, \ldots, \alpha_n \in \mathbb{C}$,

$$\inf \int \max_m |\alpha_m f_m + h_m| \ge \delta_1 \sum_m |\alpha_m|\,\|f_m\|_1, \tag{*}$$

where the infimum is taken over all systems $(h_m)_{1 \le m \le n}$ with $h_m \in H^1_0$. The proof is then concluded by a Hahn-Banach extension argument. Notice that the $\alpha_m$ in (*) can be taken positive.

Let $K$ be the numerical constant appearing in Corollary 3.8 and put $\varepsilon = 1/2K$. Take $\gamma = \gamma(\varepsilon)$ as in Lemma 4.3 and let $M < \infty$ be such that $\gamma$-interpolating sequences are Carleson sequences of constant at most $M$. Denote $M_1$ the constant associated to $M$ by Corollary 3.8. Defining $\varrho = C\delta$ and $\varkappa = \delta$, it follows from (ii) of Theorem 4.1 and Lemma 4.2 that there exist a subset $S$ of $\{1, 2, \ldots, n\}$ and disjoint measurable subsets $(A_m)_{m \in S}$ of $\Pi$ satisfying

(i) $\sum_{m \in S} \alpha_m \|f_m\|_1 \ge \eta \sum_{m=1}^n \alpha_m \|f_m\|_1$

(ii) $\int_{A_m} |f_m| \ge (C/2)\,\delta\,\|f_m\|_1$ for each $m \in S$

(iii) $\int \max_{m \in S} (\alpha_m |f_m| \chi_{\Pi \setminus A_m}) \le \varkappa \int \max_{m \in S} \alpha_m |f_m|$.

Application of Lemma 4.3 gives a Carleson sequence $(z_k)$ of constant $M$ and disjoint subsets $(V_m)_{m \in S}$ of the index set such that

(iv) $\bigl\| \,|f_m| \chi_{A_m} - \beta_m \mu_m \bigr\|_1 < \varepsilon \beta_m$ for $m \in S$, where

(v) $\mu_m \in \text{convex hull}\,(P_{z_k};\ k \in V_m)$ and $\beta_m = \|f_m \chi_{A_m}\|_1$.

Defining $\mathscr{S}_m = \{z_k;\ k \in V_m\}$ for $m \in S$, we can choose $H^\infty$-functions $(\varphi_m)_{m \in S}$ fulfilling the conditions of Corollary 3.8. By (i) of Theorem 4.1, there are norm-1 $H^\infty$-functions $\psi_m$ so that

(vi) $\langle f_m, \psi_m \rangle \ge (1 - \delta^2)\,\|f_m\|_1$.

First, one deduces easily from (vi) that

$$\int \max_m |\alpha_m f_m + h_m| \ge \int \max_{m \in S} \bigl| \alpha_m |f_m| + h_m \psi_m \bigr| - 3\delta \sum_m \alpha_m \|f_m\|_1.$$

Then, by (iii) of Corollary 3.8 and (iii), the contribution of $\Pi \setminus A_m$ is controlled; since $\varphi_m = 1$ on $\mathscr{S}_m$ by (i) of Corollary 3.8, we deduce from (iv) and (ii) of Corollary 3.8 that each term indexed by $m \in S$ contributes at least $(1 - \varepsilon K)\,\|f_m \chi_{A_m}\|_1$. Hence, combining these inequalities and then using (ii) and (i), it follows that

$$\int \max_m |\alpha_m f_m + h_m| \ge \Bigl(\frac{\eta}{4M_1} - 4\varkappa\Bigr)\,\delta \sum_m \alpha_m \|f_m\|_1.$$

Since $M_1$ is a numerical constant, we can take $C = 20M_1$ and let $\delta_1 = \eta\delta$.
So we obtain (*) and Theorem 4.1 is proved.
Theorem 4.1 implies clearly the following property.
COROLLARY 4.4.
# How would gravity/acceleration be perceived by a human orbiting Earth at sea level?
I understand the impracticalities of this concept, but humor the 'what-ifs.'
Ignoring physical obstacles and the effects of atmospheric fluctuations affecting the trajectory.
Say it is possible to have a craft capable of orbiting in Earth's atmosphere just above sea level, that in no way generates lift (just powering through that atmosphere).
How would gravity be perceived by the passenger onboard? On one hand I think they'd be weightless since they are technically always falling... But I could be wrong.
Bonus: How fast would a 200 kg spherical (I guess) vessel be traveling?
• Sea level is not at a constant distance from earth’s center of gravity. – Paul Nov 9 '18 at 21:14
• Fine then: equatorial sea level with no moon – anon Nov 9 '18 at 21:20
• Related scifi story: The Holes Around Mars by Jerome Bixby scifi.stackexchange.com/questions/143541/… – Organic Marble Nov 9 '18 at 22:02
• @OrganicMarble I'll have to find a copy, that looks fun! – uhoh Nov 10 '18 at 0:02
• Re equatorial sea level: there are higher order terms ("frequencies") - the deviation even on the equator is still on the order of 100 m. – Peter Mortensen Nov 10 '18 at 7:55
If you're orbiting, and the rocket thrusters are off, you experience weightlessness. This is true pretty much everywhere.
It's a common misconception that earth's gravity doesn't extend beyond the atmosphere. Craft in space are weightless because they are in orbit, not because earth's gravity is really weak out there. In fact, the Hill sphere (the radius at which the earth's gravitation is no longer dominant) is about 4 times the radius of the moon's orbit. That's quite far out.
The velocity of any circular orbit can be found by $$v=\sqrt\frac{GM}{r}$$ where G is the gravitational constant, M is the earth's mass, and r is the radius of the orbit.
Plugging in the Earth's mass and its mean radius of 6371 km gives a velocity of about 7909 $$\mathrm{m\,s^{-1}}$$. That's about Mach 23.
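For anyone who wants to reproduce that number, here is a minimal sketch; the constants are standard reference values for G and for Earth's mass and radius, assumed here rather than quoted from the answer:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass, kg
R_EARTH = 6.371e6    # Earth's mean radius, m

# circular orbital speed: v = sqrt(G * M / r)
v = math.sqrt(G * M_EARTH / R_EARTH)
print(f"orbital speed at sea level: {v:.0f} m/s")   # ~7900 m/s, roughly Mach 23
```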
• Excellent answer. – Russell Borogove Nov 9 '18 at 21:25
• But the boosters would never be off because they would have to be fighting atmospheric drag. Though I still think even then you are still weightless since the passenger is no longer accelerating. The boosters' acceleration is countered by the drag's deceleration. – anon Nov 9 '18 at 21:32
• If you're matching the orbiting velocity at all times, the huge deceleration from plowing through the lower atmosphere at Mach 23 must be matched by the huge acceleration from your (presumably nuclear powered) thrusters. Beyond the large amount of vibrations you'd feel, you'd still be effectively weightless. – Ingolifs Nov 9 '18 at 21:53
• I think there is a simple but important point here. The first sentence "If you're orbiting, and the rocket thrusters are off, you experience weightlessness." is really written assuming you are in a vacuum. What it means to say is that the only force is that of the central gravitational field of the Earth. 1. thrusters are off in vacuum, or 2. thrusters are on and perfectly compensating for drag both lead to stable orbit and weightlessness. Astronauts on the ISS would drift to the front of the station over time if the air was still, because the station is always decelerating due to drag. – uhoh Nov 9 '18 at 23:50
• This "thrust matches drag" is exactly what the pilots of the "Vomit Comet" aircraft do when following a zero-g trajectory... – DJohnM Nov 10 '18 at 6:23 |
# scipy.interpolate.InterpolatedUnivariateSpline.roots
InterpolatedUnivariateSpline.roots(self)
Return the zeros of the spline.
Restriction: only cubic splines are supported by fitpack.
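A minimal usage sketch (the sample data below is made up for illustration):

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# fit a cubic (k=3) interpolating spline through samples of sin(x)
x = np.linspace(0.5, 10, 50)
y = np.sin(x)
spl = InterpolatedUnivariateSpline(x, y, k=3)

# zeros of the fitted spline on the data interval
print(spl.roots())   # approximately [pi, 2*pi, 3*pi]
```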
# Lab 6: Capillary Electrophoresis
Electrophoresis is a class of separation techniques in which we separate analytes by their ability to move through a conductive medium—usually an aqueous buffer—in response to an applied electric field. In the absence of other effects, cations migrate toward the electric field’s negatively charged cathode. Cations with larger charge-to-size ratios—which favors ions of larger charge and of smaller size—migrate at a faster rate than larger cations with smaller charges. Anions migrate toward the positively charged anode and neutral species do not experience the electrical field and remain stationary.
There are several forms of electrophoresis. In slab gel electrophoresis the conducting buffer is retained within a porous gel of agarose or polyacrylamide. Slabs are formed by pouring the gel between two glass plates separated by spacers. Typical thicknesses are 0.25–1 mm. Gel electrophoresis is an important technique in biochemistry where it is frequently used for separating DNA fragments and proteins. Although it is a powerful tool for the qualitative analysis of complex mixtures, it is less useful for quantitative work.
In capillary electrophoresis, the conducting buffer is retained within a capillary tube whose inner diameter is typically 25–75 μm. Samples are injected into one end of the capillary tube. As the sample migrates through the capillary its components separate and elute from the column at different times.
## Introduction
The electrophoretic mobility of an object in an applied electric field is determined, via Stokes' law, by the charge on the molecule, by its frictional coefficient (which depends on size and shape), and by the viscosity of the solvent:
$\mu_{e} = \dfrac{q}{6\pi\eta{r}} \label{6.1}$
The velocity of the particle in an applied field is $$μ_e \times E$$, where $$E$$ is the applied field. Slab or gel electrophoresis is commonly used in biochemistry to separate macromolecules, nucleic acids and proteins. Proteins and nucleic acid fragments are separated by differences in mobility through a sieving gel under the force of an applied electric field. Capillary electrophoresis is a technique in which molecules are separated in narrow capillaries under an applied electric field. The electric field rather than gas or solvent flow moves the molecules through the capillary. Molecules in solution will then be separated based on their electrophoretic mobility. Figure 6.1 shows the components of the instrument.
Figure 6.1: Schematic diagram of the basic instrumentation for capillary electrophoresis. The sample and the source reservoir are switched when making injections.
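To get a feel for the magnitudes involved, here is a small sketch that evaluates the Stokes' law expression above and the resulting drift velocity. The charge, radius, viscosity, applied voltage, and capillary length are illustrative assumptions, not values from this lab:

```python
import math

# illustrative values (assumed for this sketch, not measured in the lab)
q = 1.602e-19      # net charge on the ion, C (one elementary charge)
eta = 8.9e-4       # viscosity of water near 25 C, Pa*s
r = 0.5e-9         # hydrodynamic radius, m

mu_e = q / (6 * math.pi * eta * r)    # electrophoretic mobility, m^2 V^-1 s^-1

E = 20e3 / 0.5                        # field: 20 kV across a 0.5 m capillary, V/m
v = mu_e * E                          # drift velocity, m/s

print(f"mobility = {mu_e:.2e} m^2/(V s), drift velocity = {v:.2e} m/s")
```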
There are several different methods used in capillary electrophoresis. All work on the same premise that molecules will travel through the capillary under the influence of the applied electric field.
## Capillary Zone Electrophoresis
The simplest CE method is capillary zone electrophoresis (CZE), a method by which molecules, ions, or particles are separated solely by their electrophoretic mobility. The figure below shows the relative velocities of particles with different electrophoretic mobilities. The simplification that holds true for this technique is that the velocity is proportional to the charge to mass ratio. The capillaries are usually made of silica. In uncoated capillaries at pH greater than 3 the SiOH groups are ionized to SiO-. This leads to a phenomenon called electroosmotic flow (EOF).
Figure 6.2: Ionized Silica Capillary Walls
The negative charge on the capillary wall leads to the formation of a double layer of cations along the wall. The inner layer is tightly bound to the capillary wall and the outer layer is a diffuse layer of cations. A zeta potential forms at the boundary between the inner and outer layers. When an electric field is applied the cations on the diffuse layer move towards the cathode. The cations are more solvated than the anions and pull the bulk solvent towards the cathode. The concentration of positive charges along the capillary wall pulls the bulk solvent towards the cathode (Figure 6.3).
Figure 6.2.5: Schematic diagram showing the origin of the double layer within a capillary tube. Although the net charge within the capillary is zero, the distribution of charge is not. The walls of the capillary have an excess of negative charge, which decreases across the fixed layer and the diffuse layer, reaching a value of zero in bulk solution.
Figure 6.3: Electroosmotic and Electrophoretic flow. Visual explanation for the general elution order in capillary electrophoresis. Each species has the same electroosmotic flow. Cations elute first because they have a positive electrophoretic velocity, νep. Anions elute last because their negative electrophoretic velocity partially offsets the electroosmotic flow velocity. Neutrals elute with a velocity equal to the electroosmotic flow.
The relative mobilities of the particles are the same, but now neutral molecules and negative particles are pulled toward the cathode by the EOF. Neutral molecules will not be separated from one another. But, the negatively charged particles will be separated because the electrophoretic mobility counters the EOF. A solute’s total velocity, $$v_{tot}$$, as it moves through the capillary is the sum of its electrophoretic velocity and the electroosmotic flow velocity.
$ν_{tot} =ν_{ep} + ν_{eof}$
As shown in Figure 6.2, under normal conditions the following general relationships hold true.
$(ν_{tot})_{cations} > ν_{eof}$
$(ν_{tot})_{neutrals} = ν_{eof}$
$(ν_{tot})_{anions} < ν_{eof}$
Cations elute first in an order corresponding to their electrophoretic mobilities, with small, highly charged cations eluting before larger cations of lower charge. Neutral species elute as a single band with an elution rate equal to the electroosmotic flow velocity. Finally, anions are the last components to elute, with smaller, highly charged anions having the longest elution time.
The EOF is extremely useful for separating molecules with both positive and negative charges. If the EOF is not necessary, or it is desired to completely separate positively and negatively charged particles, the EOF can be abolished by changing the buffer conditions. Using running buffer at very low pH will abolish the EOF. If low pH is a problem for the stability of the samples, the inside of the capillary can be coated with an uncharged layer. Low concentrations of ionic detergent, below the critical micelle concentration will also diminish the EOF. There is no separation of molecules with similar charge to mass ratios. It is frequently desirable to improve or alter the separation. Molecules with similar electrophoretic mobilities can be separated by the addition of carrier compounds to the running buffer.
## Micellar Electrokinetic Chromatography
The addition of molecules to the running buffer will separate molecules based on their affinity for those molecules. There are several different ways to do this. The addition of cyclodextrins to the running buffer allows the separation of chiral species. MEKC, or micellar electrokinetic chromatography, will separate compounds with similar mobilities in CZE experiments by the difference in affinity for detergent micelles that are added to the running buffer. Neutral species will partition between the running buffer and the hydrophobic interior of the micelles. The micelles, which are negatively charged, have a retention time greater than the EOF. Thus as molecules enter the micelles they are slowed down. The stronger an affinity the neutral species has for the micelle, the longer its retention time. The more nonpolar neutral species have the highest affinity for the micelles. Charged particles that have hydrophobic groups will also be retained by interaction with the hydrophobic core of the micelle. Highly positively charged particles will interact with the surface of the micelle and also be retained.
Figure 6.4: Micellular Interactions within the Capillary. (a) Structure of sodium dodecylsulfate and its representation, and (b) cross section through a micelle showing its hydrophobic interior and its hydrophilic exterior.
You can see that the separation of the species in the mixture will be changed by addition of detergent to the running buffer. You can tailor your separation to exactly suit your needs by experimenting with different additions to the running buffer.
## Procedure
In this experiment you will repeat the analysis you did (or will do) in the HPLC experiment. You will compare two different modes, Capillary Zone Electrophoresis (CZE) and Micellar Electrokinetic Capillary Chromatography (MEKC), to achieve the same separation. Differences in resolution and retention times will be observed and explained. Differences, if any, between the results obtained by HPLC and CE will also be addressed.
Figure 6.5: Agilent Technologies 7100 Capillary Electrophoresis, UCD Capillary Electrophoresis instrument. Note that multiple vials can be probed sequentially if required.
Use the samples and standards that you prepared for the HPLC experiment. If you haven’t done the HPLC experiment yet, prepare the standards and samples as described in the HPLC experiment.
### Solutions necessary
Write out the recipes for the buffer solutions A and B before coming to lab and have your TA check them before you proceed. The solutions A and B below have been prepared for you already by the procedure stated below.
• Solution A: 0.05M borate buffer, pH=9.0. Dissolve boric acid in water; add NaOH until pH = 9.0.
• Solution B: 0.05M SDS (sodium dodecyl sulfate) in 0.05M borate buffer pH =9.0. Once SDS is added, measure pH to ensure pH = 9.0. Adjust pH with either HCl or NaOH if necessary.
Prepare 100 mL of solution A and then use that to prepare 50 mL of solution B. Use a pH meter identical to that used in Chem 105. The molecular weight of SDS is 288.38 g/mol. The molecular weight of boric acid is 61.83 g/mol.
Filter approximately 10 mL of each buffer into a clean, labeled vial, just as you did with your samples and standards for the HPLC experiment. It is imperative that the vials are not filled more than 75% full! Liquid at or above the vial shoulder is too much. Overfilling vials can lead to salt build-up and over-pressurization of the capillary. If arcing or over-pressurization occurs, the capillary is almost certain to be rendered useless. The proper solvent level is only 1.4 cm, which translates to about a 75% full vial. As always, ask your TA if you have any questions.
### Setting up the instrument
The first experiment you will run is the CZE separation. All the steps are the same for both methods.
The instrument used in this experiment is an Agilent CE. The software used to control the instrument and collect the data is the Agilent Chemstation, which is also used for the HPLC.
Turn on the instrument, if it is not already on, by pressing the power button at the bottom left of the front face of the CE. The light in the middle of the button should be green when it's on.
Open the Chemstation software, if it's not already open, by double clicking the "Instrument 2 Online" icon with an image of the CE.
You will start in "Instrument View."
Instrument View shows a diagram of the complete system, with clickable features to make many actions easier. The menu bar could also be used, but takes more steps per action.
The instrument must first be initialized. Click the "On" button next to the question mark button on the bottom right of the "CE" window. Click on the power button icon in the top left corner of the "DAD" (diode array detector) window to "make device ready". Wait until the system goes to "Ready." This may take some time if the temperature controlled zones need to stabilize.
When the software goes to "Ready", in the "CE" window, right click "Inlet" and unload by selecting "Unload Inlet Lifter." Do the same for the "Outlet." You should then hear the instrument lowering the vials in positions 1 and 2. Right click the sample wheel and select "Get Vial." "Vial" should be set to "1" - click "Get." The tray will rotate vial 1 to the front of the instrument. Open the tray door and remove vials 1 and 2, if they are present.
Fill a new vial with the pH 9 0.05 M borate buffer by filtering it with a syringe filter and label it "Buffer A" with a pen.
Do Not Use Tape to Label Vials!!!!! It Will Jam the instrument. Use a sharpie pen (ask TA) !!!
Place this vial in position 1. Place a 50% filled vial in position 2 and label it 'outlet'. Close the tray door. The tray will rotate back to its operating position.
Right click on the regulator pressure icon in the "CE" window and click "Flush" on the menu. Enter a time of "600" seconds and click "OK". The pressure should increase to about 930 mBar and slowly decrease as liquid is pushed from one vial to the other during the flush time. This will wash out any residues from the previous runs and equilibrate the capillary with the running buffer.
Make sure that the method is "CAFFEINE.M" and that the sequence is "CAFFEINE.S." The method and sequence names can be found in the drop down boxes in the main menu bar. Check and make sure the CZE method is using 20 kV by selecting "Method" then "Edit Entire Method". Check with your TA what the run time is for the current capillary. Press "OK" in the first two windows that appear until you get to the "Setup Method" window. Select the "CE" tab to check the voltage and run time under "Stop Time." Check that under "Injection," on the right side of the "CE" tab window, there is one row that under "Function" has "Apply Pressure" and under "Parameter" it says "50mbar for 5s (Inlet: Injection Vial Outlet: Oulet Home Vial)." Then click "OK". "OK" through or cancel through the remaining windows.
Make sure you filter your samples into vials and make sure you labeled the vials with a pen. Right click on the tray icon and select "Get Vial." Set the "Vial" number to 7, and click "Get." You can now open the tray door and put all your samples in, starting at position 4 (leave position 3 empty). Start with the 0.01 g/L standard followed by the rest of the standards in increasing concentration, then the four samples. Make sure you keep track of the order in which the samples are run. Close the tray door.
On the menu bar, click "Sequence" and select "Sequence Parameters". Enter an operator name, make sure "Prefix/Counter" is checked, and enter a prefix for your datafile names. Click "OK".
To run the sequence click the "Sequence" play button above the "DAD" window.
After the sequence has started, watch the "Online Plot." You should see a peak for your first standard about 2-3 minutes into the run. The electrical current, which can be monitored in the "CE" window, should be 15-16 microamperes.
You can print the reports after each sample is run. When the entire sequence has completed, click "Data Analysis" at the bottom left of the window. Click on "Chem_115" in the data tree and select your sequence. Select the data you want and print. Repeat for all your other data files. A prompt to save each trial as a PDF will appear after each trial. Create a file and save the trials to the file. Then print the PDFs.
IMPORTANT
Make sure that you get a peak for each standard. You can always abort the run and restart it if there is a problem.
Now do the MEKC run using the SDS-containing buffer, buffer B, as the run buffer. You should flush the capillary with buffer for "300" seconds before the first run.
The parameters are slightly different for the MEKC run. The voltage is 15 kV, rather than 20 kV and run time is longer. Check the exact run time with your TA. Everything else is the same. Select the "CAFFEINE_MEKC.M" method in the menu bar. Select "CAFFEINE_MEKC.S" sequence in the menu bar.
Make a calibration curve in Chemstation. Since you have to wait awhile while the data is being acquired it is good to do this during data acquisition. You can begin to do this even if you have not finished all your runs. From the "Method and Run Control" view click the "Data Analysis" tab in the bottom left corner. It will open up another window of "Instrument 2" but it will be "Offline." Under the "Data Analysis" window on the left there will be a file tree. Go to the "Chem_115" folder and select your data file. The sequence runs you have done will appear in the main window. Double click on your first standard run in the "Sequence" window at the top of the screen. The line for the first standard run should now appear in bold font. On the menus bar go to "Calibration" and select "New Calibration Table." The window "Calibrate: Instrument 2" will appear. Select "Automatic Setup" set the "Level" to "1" and put in the concentration of your first run in "Default Amount." Click "OK." Then double click on the second run. Go to "Calibration" on the menu bar and select "Add Level." Set the "Level" to "2" and enter the concentration in the "Default Amount." Click "OK." Repeat for the rest of the standards. The "Calibration Table" and the "Calibration Curve" windows are at the bottom of the view.
Make sure all reports have been printed. Using the peak areas of the standards plot a calibration curve in either Chemstation, MatLab or other data analysis software. From the calibration curve, calculate the concentrations of caffeine in each of the unknowns.
The vials and filter you used should be thrown away.
## Report
1. Include printouts of all your electropherograms for each sample and standard, and the integration reports for each set run. Also print the calibration data.
2. Explain the difference in retention time for the two different experiments.
3. What would you expect to happen to the retention time of the caffeine peak if you decreased the run voltage for the first experiment to 10 kV?
4. Did you get the same answer for the two different CE experiments? Explain.
5. Did you get the same answer as you did for the HPLC experiment? Is this surprising? If the answers are different, suggest some possible explanations.
## References
1. Skoog, D. A.; Holler, F. J.; Nieman, T. A. Principles of Instrumental Analysis, Fifth Edition; Harcourt Brace: Philadelphia, 1998; 591-621.
2. Copper, C. L. Capillary Electrophoresis Part I. Theoretical and Experimental Background. J. Chem. Ed. 1998, 75, 343-347. pdf
3. Copper, C. L.; Whitaker, K. W. Capillary Electrophoresis Part II. Applications. J. Chem Ed. 1998, 75, 347-351. pdf
4. McDevitt, V. L.; Rodriguez, A.; Williams, K. R. Analysis of Soft Drinks: UV Spectrophotometry, Liquid Chromatography, and Capillary Electrophoresis. J. Chem. Ed. 1998, 75, 625-629. pdf
<meta http-equiv="refresh" content="1; url=/nojavascript/"> Combinations | CK-12 Foundation
# 12.7: Combinations
Created by: CK-12
## Introduction
Decorating the Stage
The decorating committee is getting the stage ready for the Talent Show. There was a bunch of different decorating supplies ordered, and the students on the committee are working on figuring out the best way to decorate the stage.
They have four different colors of streamers to use to decorate.
Red
Blue
Green
Yellow
“I think four is too many colors. How about if we choose three of the four colors to decorate with?” Keith asks the group.
“I like that idea,” Sara chimes in. “How many ways can we decorate the stage if we do that?”
The group begins to figure this out on a piece of paper.
Combinations are arrangements where order does not make a difference. The decorating committee is selecting three colors from the possible four options. Therefore, the order of the colors doesn’t matter.
Combinations are the way to solve this problem. Look at the information in this lesson to learn how to figure out the possible combinations.
What You Will Learn
In this lesson you will learn how to:
• Recognize combinations as arrangements in which order is not important.
• Count all combinations of $n$ objects or events
• Count combinations of $n$ objects taken $r$ at a time
• Evaluate combinations using combination notation.
Teaching Time
I. Recognize Combinations as Arrangements in Which Order is Not Important
In the last section, you saw that order is important for some groups of items but not important for others. For example, consider a list of three words: HOPS, SHOP, and POSH.
• For the spelling of each individual word, order is important. The words HOPS, SHOP, and POSH all use the same letters, but spell out very different words.
• For the list itself, order is not important. Whether the words are presented in one order–such as HOPS, SHOP, POSH, or another order, such as SHOP, POSH, HOPS, or a third order, such as POSH, HOPS, SHOP–makes no difference. As long as the list includes all 3 words, the order of the 3 words doesn’t matter.
A combination is a collection of items in which order, or how the items are arranged, is not important. The collection of one order of the items is not functionally different than any other order.
Combinations and permutations are related. To solve problems in which order matters, you use permutations. To solve problems in which order does NOT matter, use combinations.
Let’s look at an example.
Example
The winning 3-digit lottery numbers are drawn from a drum as 641, 224, and 806. Does order matter in the way the three winning numbers are drawn?
Step 1: Write out a single order.
641, 224, 806
Step 2: Now rearrange the order. Did you change the outcome? If so, then order matters.
$224, 806, 641\Longleftarrow$ different order, same 3 winning numbers
Order does NOT matter for this problem. Use combinations.
Write the difference between combinations and permutations down in your notebook.
Example
A bag has 4 marbles: red, blue, yellow, and green. In how many different ways can you reach into the bag and draw out 1 marble, then return the marble to the bag and draw out a second marble?
Step 1: Write out a single order.
red, blue
Step 2: Now rearrange the order. Did you change the outcome? If so, then order matters.
blue, red $\Longleftarrow$ different order, meaning is DIFFERENT
Order DOES matter for this problem. Use permutations.
12K. Lesson Exercises
Write whether you would use combinations or permutations for each example.
1. Cesar the dog-walker has 5 dogs but only 3 leashes. How many different ways can Cesar take a walk with groups of 3 dogs at once?
2. Five different horses entered the Kentucky Derby. In how many different ways can the horses finish the race?
3. How many different 5-player teams can you choose from a total of 8 basketball players?
II. Count All Combinations of $n$ Objects or Events
Once you figure out if you are going to be using permutations or combinations, it is necessary to count the combinations.
There are several different ways to count combinations. When counting, try to keep the following in mind:
• Go one by one through the items. Don’t stop your list until you’ve covered every possible link of one item to all other items.
• Keep in mind that order doesn't matter. For combinations, there is no difference between $AB$ and $BA$. So if both $AB$ and $BA$ are on your list, cross one of the choices off your list.
• Check your list for repeats. If you accidentally listed a combination more than once, cross the extra listings off your list.
Example
James needs to choose a 2-color combination for his intramural team t-shirts. How many different 2-color combinations can James make out of red, blue, and yellow?
One way to find the number of combinations is to make a tree diagram. Here, if red is chosen as one color, that leaves only blue and yellow for the second color.
The diagram shows all 6 permutations of the 3 colors. But wait–since we are counting COMBINATIONS here order doesn’t matter.
So in this tree diagram we will cross out all outcomes that are repeats. For example, the first red-blue is no different from blue-red, so we’ll cross out blue-red.
In all, there are 3 combinations that are not repeats.
This method of making a tree diagram and crossing out repeats is reliable, but it is not the only way to find combinations.
Let’s look at another example.
Example
James has added a fourth color, green, to choose from in selecting a 2-color combination for his intramural team. How many different 2-color combinations can James make out of red, blue, yellow, and green?
Step 1: Write the choices. Match the first choice, red, with the second, blue. Add the combination, red-blue, to your list. Match the other choices in turn. Add the combinations to your list.
Step 2: Now move to the second choice, blue. Match blue up with every possible partner other than red, since we already included all of the combinations involving red. Add the combinations to your list.
Step 3: Now move to the third choice, yellow. There is only one new combination left to match it with. Add the combination to your list.
Your list is now complete. There are 6 combinations.
III. Count All Combinations of $n$ Objects Taken $r$ at a Time
Sometimes, you won’t want to use all of the possible options in the combination. Think about it as if you have 16 flavors of ice cream, but you only want to use three flavors at a time. This is an example where there are 16 flavors to work with, but you can only use three at a time. With an example like this one, you are looking for combinations of object where only a certain number of them are used in any one combination.
This happens a lot with teams. Let’s look at an example.
Example
How many different 2-player soccer teams can Jean, Dean, Francine, Lurleen, and Doreen form?
$&\underline{\text{Combination}} \qquad \qquad \qquad \qquad \qquad \qquad \quad \ \underline{\text{List}}\\&\text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Dean}\\&\text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Francine}\\&\text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Lurleen}\\&\text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Doreen}$
Step 2: You’ve covered all combinations that begin with Jean. Now go through all combinations that begin with Dean, Francine, and Lurleen.
$&\underline{\text{Combination}} \qquad \qquad \qquad \qquad \qquad \qquad \quad \ \underline{\text{List}}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Dean}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Francine}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Lurleen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Doreen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Dean-Francine}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Dean-Lurleen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Dean-Doreen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Francine-Lurleen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Francine-Doreen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Lurleen-Doreen}$
Your list is now complete. There are 10 combinations.
Example
How many different 3-player soccer teams can Jean, Dean, Francine, Lurleen, and Doreen form?
Use the process above to go through all of the combinations.
$&\underline{\text{Combination}} \qquad \qquad \qquad \qquad \qquad \qquad \quad \ \underline{\text{List}}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Dean-Francine}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Dean-Lurleen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Dean-Doreen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Francine-Lurleen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Francine-Doreen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Jean-Lurleen-Doreen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Dean, Francine-Lurleen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Dean-Francine-Doreen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Dean-Lurleen-Doreen}\\& \text{Jean, Dean, Francine, Lurleen, Doreen} \qquad \text{Francine-Lurleen-Doreen}$
Your list is now complete. There are 10 combinations.
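If you want to check a count like this without writing out every team by hand, here is a short sketch (a side note, not part of the lesson) that uses Python's itertools module to list the same combinations:

```python
from itertools import combinations

players = ["Jean", "Dean", "Francine", "Lurleen", "Doreen"]

teams = list(combinations(players, 3))   # every 3-player team, order ignored
for team in teams:
    print("-".join(team))

print(len(teams))   # 10 combinations
```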
Try a few of these on your own.
12L. Lesson Exercises
1. On Monday Cesar the dog-walker has 3 dogs–Looie, Huey, and Dewey-but only 2 leashes. How many different ways can Cesar take a walk with 2 dogs? List the ways.
2. On Tuesday Cesar has 4 dogs-Looie, Huey, Dewey, and Stewie–but only 2 leashes. How many different ways can Cesar take a walk with 2 dogs? List the ways.
3. On Wednesday Cesar has 4 dogs-Looie, Huey, Dewey, and Stewie–but now has 3 leashes. How many different ways can Cesar take a walk with 3 dogs? List the ways.
Take a few minutes to discuss your findings with a partner. Share your method of finding all of the possible combinations.
IV. Evaluate Combinations Using Combination Notation
We can use a formula to help us to calculate combinations. This is very similar to the work that you did in the last section with factorials and permutations.
Example
Suppose you have 5 marbles in a bag–red, blue, yellow, green, and white. You want to know how many combinations there are if you take 3 marbles out of the bag all at the same time. In combination notation you write this as:
${_5}C_3 \Longleftarrow 5 \ \text{items taken 3 at a time}$
In general, combinations are written as:
${_n}C_r \Longleftarrow n \ \text{items taken} \ r \ \text{at a time}$
To compute ${_n}C_r$ use the formula:
${_n}C_r = \frac{n!}{r!(n - r)!}$ This may seem a bit confusing, but it isn’t. Notice that the factorial symbol is used with the number of objects $(n)$ and the number taken at any one time $(r)$. This helps us to understand which value goes where in the formula.
Now let’s look at applying the formula to the example.
For ${_5}C_3$:
${_5}C_3 = \frac{5!}{3!(5 - 3)!} = \frac{5!}{3! 2!}$ Simplify.
${_5}C_3 = \frac{5 (4)(3)(2)(1)}{(3 \cdot 2 \cdot 1)(2 \cdot 1)} = \frac{120}{12} = 10$
There are 10 possible combinations.
Example
Find ${_6}C_2$
Step 1: Understand what ${_6}C_2$ means.
${_6}C_2 \Longleftarrow 6 \ \text{items taken 2 at a time}$
Step 2: Set up the problem.
${_6}C_2 = \frac{6!}{2!(6 -2)!}$ Step 3: Fill in the numbers and simplify.
${_6}C_2 = \frac{6(5)(4)(3)(2)(1)}{(2 \cdot 1)(4 \cdot 3 \cdot 2 \cdot 1)} = \frac{720}{48} = 15$
There are 15 possible combinations.
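As a quick arithmetic check of these two worked examples, Python's math.comb evaluates the same formula directly (a side note, not part of the lesson):

```python
import math

# 5 marbles taken 3 at a time, and 6 items taken 2 at a time
print(math.comb(5, 3))   # 10
print(math.comb(6, 2))   # 15

# same values from the factorial formula n! / (r! * (n - r)!)
print(math.factorial(6) // (math.factorial(2) * math.factorial(4)))   # 15
```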
12M. Lesson Exercises
Find the number of combinations in each example.
1. ${_5}C_2$
2. ${_4}C_3$
3. ${_6}C_4$
Copy down the formula for figuring out combinations in your notebook
## Real–Life Example Completed
Decorating the Stage
Here is the original problem once again. Reread it and then figure out the decorations.
The decorating committee is getting the stage ready for the Talent Show. There was a bunch of different decorating supplies ordered, and the students on the committee are working on figuring out the best way to decorate the stage.
They have four different colors of streamers to use to decorate.
Red
Blue
Green
Yellow
“I think four is too many colors. How about if we choose three of the four colors to decorate with?” Keith asks the group.
“I like that idea,” Sara chimes in. “How many ways can we decorate the stage if we do that?”
The group begins to figure this out on a piece of paper.
Combinations are arrangements where order does not make a difference. The decorating committee is selecting three colors from the possible four options. Therefore, the order of the colors doesn’t matter.
We can use combination notation to figure out this problem.
${_4}C_3 = \frac{4!}{3!(4 - 3)!} = \frac{4(3)(2)(1)}{(3 \cdot 2 \cdot 1)(1)}= \frac{24}{6} = 4$
There are four possible ways to decorate the stage.
Now that the students have this information, they can look at their color choices and vote on which combination they like best.
## Vocabulary
Here are the vocabulary words used in this lesson.
Combination
an arrangement of objects or events where order does not matter.
Permutations
an arrangement of objects or events where the order does matter.
## Time to Practice
Directions: Write whether you are more likely to use permutations or combinations for each of the following examples.
1. A bag has 4 marbles: red, blue, yellow, and green. In how many different ways can you reach into the bag and draw out 2 marbles at once and drop them in a cup?
2. A bag contains 5 slips of paper with letters $A, B, C, D$, and $E$ written on them. Pull out one slip, mark down the letter and replace it in the bag. Do this 3 times so you have written 3 letters. How many different ways can you write the 3 letters?
3. Eight candidates are running for the 4-person Student Council. How many different Student Councils are possible?
4. Mario’s gym locker uses the numbers 14, 6, and 32. How many different arrangements of the three numbers must Mario try to be sure he opens his locker?
5. Five horn players are running for 2 seats in a jazz band. How many different ways can the two horn players be chosen?
Directions: Use what you have learned about combinations to answer each question.
6. The Ace, King, Queen, and Jack of Spades are face down on a table. Draw three cards all at once. How many different 3-card hands can you draw?
7. How many different 4-player teams can you choose from a total of 5 volleyball players:
Andy, Randi, Sandy, Mandy, and Chuck?
8. How many different 3-player teams can you choose from a total of 5 volleyball players:
Andy, Randi, Sandy, Mandy, and Chuck?
9. A bag contains 6 slips of paper with letters $A, B, C, D, E$, and $F$ written on them. Pull out 4 slips. How many different 4-slip combinations can you get?
Directions: Evaluate each factorial.
10. 5!
11. 4!
12. 3!
13. 8!
14. 9!
15. 6!
Directions: Evaluate each combination using combination notation.
16. ${_7} C_2$
17. ${_7} C_6$
18. ${_8} C_4$
19. ${_9} C_6$
20. ${_8} C_3$
21. ${_{10}}C_7$
22. ${_{12}}C_9$
23. ${_{11}}C_9$
24. ${_{16}}C_{14}$
# INVARIANT SUBSPACES FOR THE BACKWARD SHIFT ON THE HARDY SPACE
• Lee, Hong Youl (Department of Mathematics Education, Woosuk University)
• Accepted : 2014.10.28
• Published : 2014.12.25
#### Abstract
In this note we provide a concrete description on the invariant subspaces for the backward shift on the Hardy space $H^2(\mathbb{T})$.
#### References
1. M. B. Abrahamse, Subnormal Toeplitz operators and functions of bounded type, Duke Math. J. 43 (1976), 597-604. https://doi.org/10.1215/S0012-7094-76-04348-9
2. R. G. Douglas, Banach algebra techniques in the theory of Toeplitz operators, CBMS 15, Providence, Amer. Math. Soc. 1973.
3. J. B. Garnett, Bounded Analytic Functions, Academic Press, New York, 1981.
4. C. Gu, J. Hendricks and D. Rutherford, Hyponormality of block Toeplitz operators, Pacific J. Math. 223 (2006), 95-111. https://doi.org/10.2140/pjm.2006.223.95
5. N. K. Nikolskii, Treatise on the Shift Operator, Springer, New York, 1986.
6. V. V. Peller, Hankel Operators and Their Applications, Springer, New York, 2003.
# Mathematics 2011 | Study Mode
Question 41
A
6!
B
7!
C
5!
D
8!
##### Explanation
ELATION
ELATION has 7 letters. The first letter can be arranged in 7 ways, the second letter in 6 ways, the third letter in 5 ways, the fourth letter in 4 ways, the fifth letter in 3 ways, the sixth letter in 2 ways and the last in 1 way.
Therefore, 7 x 6 x 5 x 4 x 3 x 2 x 1 = 7! ways
Question 42
A
24
B
60
C
12
D
120
##### Explanation
For a circular arrangement, the first person sits down anywhere and the remaining people are arranged relative to them,
i.e. (n - 1)!
= (5 - 1)! = 4!
= 24 ways
Question 43
A
$$\frac{2}{3}$$
B
$$\frac{1}{3}$$
C
$$\frac{2}{9}$$
D
$$\frac{7}{9}$$
##### Explanation
Prime numbers = (43,47,53,59)
N = (43, 44, 45,..., 60)
The universal set contains 18 numbers.
The prime numbers between 43 and 60 are 4
Probability of picking a prime number = $$\frac{4}{18}$$
= $$\frac{2}{9}$$
Question 44
A
sec2 $$\theta$$
B
tan $$\theta$$ cosec $$\theta$$
C
cosec $$\theta$$sec $$\theta$$
D
cosec2$$\theta$$
##### Explanation
$$\frac {\sin\theta}{\cos\theta}$$
$$\frac{\cos \theta {\frac{d(\sin \theta)}{d \theta}} - \sin \theta {\frac{d(\cos \theta)}{d \theta}}}{\cos^2 \theta}$$
$$\frac{\cos \theta \cdot \cos \theta - \sin \theta (-\sin \theta)}{\cos^2\theta}$$
$$\frac{\cos^2\theta + \sin^2 \theta}{\cos^2\theta}$$
Recall that $$\sin^2 \theta + \cos^2 \theta = 1$$
$$\frac{1}{\cos^2\theta} = \sec^2 \theta$$
Question 45
A
{a, b, d, e}
B
{b, d}
C
{a, e}
D
{c}
Question 46
A
30%
B
25%
C
35%
D
20%
##### Explanation
$$\frac{70}{360} \times 36 = \frac{70}{10}$$
= 7
Question 47
A
180
B
135
C
210
D
105
# What are some interesting ways to structure a random encounter table? [closed]
The standard tables I've seen have been basic 'Wandering Monster' tables where you roll some dice in order to generate a random encounter.
I'm sure many have taken this concept further, but I haven't seen much advice around the net about how others construct random tables for their games. I'm looking for general ideas as well as examples.
## closed as too broad by doppelgreener, SnakeDr68, Wibbs, C. RossJul 13 '13 at 11:05
There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs. If this question can be reworded to fit the rules in the help center, please edit the question.
Inspired by Numenetics' and Trollsmyth's comments: Monster group encounter tables, independent of character, dungeon, and average monster level. Monster group tables are referenced by wandering monster tables in places where the group is active. This table represents a group of monsters (ex: goblin tribe, wolf pack, bandits). It uses a bell curve, with large territories using more d6. On the low end are discoveries representing clues to the group's existence, mid-way are common encounters, the high end are named characters, and the lair at the top. A bonus is added the closer you get to the lair. – Michael Makali'i Fernandez Aug 21 '10 at 5:26
In the wilderness I use a simple d6 or d8 and fill it with creatures depending on the surrounding areas. I'll usually have two or three extra entries and will add +2 or +3 at night. Thus, the first two or three entries are day-time only, whereas the last two or three creatures are night-time only.
The creatures encountered depend solely on the surrounding area. So a table can feature kobolds and dragons at the same time. The dragon might just fly overhead, obviously. When the party encounters the dragon, this means that its lair must be somewhere in the surrounding area and they can start avoiding or searching the area.
You bring up a great point: for the purpose of random encounter tables, "encounter" needn't mean "hostile combat situation," but just the more standard definition of "find: come upon, as if by accident; meet with;" – Numenetics Aug 20 '10 at 12:30
Nice! I believe I will use this as well in the near future. – Michael Makali'i Fernandez Aug 21 '10 at 5:28
You can turn an encounter table into a story by crossing elements off as they come up, and shifting the numbers appropriately (so that after 4 is rolled, it's crossed off, and the old five is now four). This dovetails nicely with the previous idea of changing die size, since the trick is that the most interesting encounters are on higher numbers, and as encounters get crossed off, it pushes the game towards them. In this way you can make an outdoor adventure have a similar cadence to a dungeon, getting tougher and more interesting as you press on.
If you find you're dealing with a very generic encounter table (just monster names, no details) try rolling an additional d8 and using this chart to set tone.
1. Deliberate: maybe they know something, maybe they just picked up a scent. Whatever the reason, they're here because they're looking for the PCs.
2. Far From Home: This is not familiar territory for the creature. It wants to leave, but it's jumpy, and quick to violence.
3. Prepared: this is home base, and the monster knows this terrain and has had time to prepare, choosing a good hiding place, laying traps or otherwise making ready. Monsters can't be surprised unless the party is actively sneaking.
4. Distracted: something else has the creature's attention. May be avoidable.
5. Surprised: monster was clearly not expecting you. Players won't be surprised, monsters must roll.
6. Wary: monster is inclined to observe and follow the PCs and assess their strength before acting.
7. Civilized: monster is unexpectedly open to parlay, though communication may be tricky.
8. Maddened: whether it's sickness, loss or just sheer cussedness, the creature is blind with rage.
While not incredibly deep, this sort of chart can help kickstart ideas for how to make the encounter something other than entirely generic.
I agree with Numenetics. Also, I've arranged the tables to take advantage of the bell-curve you get from rolling multiple dice (3d6 or the like). So the monsters at the middle of the table are more likely to occur than those at the ends.
I also have "countdown" tables. Say you have a portal to the Abyss slowly opening in your dungeon. Every day it remains open, I add one to all my rolls on the wandering monster table. The higher numbers have more demons in them, to the point where certain things that can't be rolled at first become possible and then very likely.
Aha! Those two ideas really mesh well. – Michael Makali'i Fernandez Aug 20 '10 at 7:01
One good idea is to use different die types depending on the time of day, or the season, how close the party is to civilisation, and so on.
For example, if the nastier things come out at night, then you might stack those encounters at the top end of a 2d10 table, but during the day, you only roll 1d10+1d6. Close to town, you might only roll 1d10.
It's also worth cross-pollinating your encounter tables. While there may be no goblins in hex H6, for example, there is a goblin tribe in H7, so it's plausible that a small number might wander into the former hex. This helps to make the area seem alive and independent of the player-characters, and less like a series of discrete modules.
I'll also throw in my support for the very simple idea that "random encounter" doesn't necessarily mean "a monster attacks". Come up with a bunch of non-combat encounters to throw in there, whether they be traps, mysterious occurrences, a wandering merchant, and so on.
For the encounters which do turn out to be monsters, you can get a lot of use from a secondary "What Are the Monsters Doing?" type table, so even if you do roll 1d6 kobolds, they may in fact be chasing butterflies rather than getting ready for a fight.
Two good "What Are the Monsters Doing?" tables are Kellri's Group Activities Table and the one in Jeff Rients' Miscellaneum of Cinder. – SevenSidedDie Apr 18 '12 at 18:58
I've never been a fan of wandering monster table, but random encounters. The difference being that each of the encounters or situations is occuring when the players arrive at the scene. I'll try not to give too bad of an example.
1. Players come across two goblins fighting over a large fish. They have each other in headlocks and rolling around on the ground. Meanwhile the fish is flopping closer to the creek.
2. A goblin scouting party is marching down the trail. One of them is complaining loud enough to warn the players of their arrival. They have been marching for over two days and are starved. If they see they players they will collapse and plead for mercy. All they want it food.
3. The bushes move from the left of the party and as the players turn a large bear charges out attacking the closest character. Two bear cubs wait in a nearby tree. A player may try to lure the bears down with food. If the players are interested these black bear cubs can be trained.
4. The players come across a wounded woman (Clara). She claims to have been attacked by a goblin, but was able to kill it. She points towards the tall grass, and a small goblin lies smoldering. She will not say how she burned the goblin. She has a Wand of Fireballs hidden on her and doesn't want it taken away. She wants to return to town to her family. She has 10gp also hidden on her. She is the one who told the goblins which road the caravan was traveling in exchange for gold. One goblin thought he could make some easy money by attacking her.
The drawback of course is you can only use them once or twice. But I think if you develop ten or more that is all you need. The situation is set when the players come across it, and there is a small backstory of what is going on.
Not sure if this is what you mean, but I think that making wilderness encounter tables level-independent increases their utility in creating interesting scenarios because it provides great opportunities to solve problems in ways other than direct combat, including running. I've had characters run from a nasty wilderness encounter and mark the area on the map, knowing that they wanted to come back and take care of that pesky X, creating some fun opportunities for fleshing out why the X was there in the first place, where exactly it lives, etc.
-
Nice. So by level-independent, which type of level do you mean? Character level? Dungeon level? Average level of the monsters on the table? What I mean is, do you still consider any one of the three above when making an encounter table? – Michael Makali'i Fernandez Aug 20 '10 at 6:58
By level-independent I mean character level independent, but that only applies to wilderness encounter tables. To me, those are a near free-for-all limited only by setting details. Dungeon encounter tables are a different matter, and are by dungeon level, although they will still usually have one or two potentially nasty encounters. I do like the two stage dungeon encounter tables, though, where you roll a die to determine which monster table you'll be rolling from. The dungeon level determines the first table. – Numenetics Aug 20 '10 at 12:28
I've seen people write up random encounter charts with more than just monsters on 'em: weather-related challenges, terrain obstacles, interesting locations or just plain, you know, random stuff ("While tromping through the swamp, one randomly-selected character falls, face-first, into the mud, no save. Character is all mucky and stuff, smells bad and gets a malus on reaction rolls until he/she can get cleaned up.").
As for distribution, well, how about:
(D6)
1-3: Monster (subtable)
4-5: Terrain Obstacle (subtable)
6: Weirdness (subtable)
...or whatever balance you want. Maybe Weirdness gets the 1-3 slot, and monsters are only 1/6.
Tables with a bell curve can be fun if you want some monsters to be rarer than others. Just use more than one dice. 3d6 for example and have entries for 3 to 18.
Also using a D100 with ranges for different monsters can be a nice way of controlling the weighting of the randomness to a finer degree.
A short example:
1: Red Dragon
2-10: Bulette
11-30: Orc
31-60: Goblin
61-70: Wolf
71-80: Owlbear
81-85: The Dragon of Tyr
86-90: Giant Boar
91-95: White Dragon
96-98: Purple worm
00: Mr. Hat
You can assign ranges of different sizes, and different probabilities, to the monsters you want to appear more or less often.
-
I often use two dissimilar dice, such as d6+d10, in order to get the flat spot.
I pick a general threat level for an area, and put slightly higher level encounters at the ends, and slightly lower in the flat-spot.
For example, a relatively safe woods might be "level 3" and fairly uniform: d4+d6
2 (1/24): young wyvern
3 (2/24): Carrion Crawler
4 (3/24): Elven Hunters
5 (4/24): Herd Animal (3hd)
6 (4/24): Herd Animal (2hd)
7 (4/24): wolves
8 (3/24): 3hd hunting cat
9 (2/24): other hominids
10 (1/24): Sprites
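To see the flat spot this produces, a small sketch can enumerate every d4+d6 result and print the exact probabilities, matching the fractions listed above:

```python
from collections import Counter
from fractions import Fraction

# enumerate all 24 equally likely d4 + d6 outcomes
totals = Counter(a + b for a in range(1, 5) for b in range(1, 7))

for total in sorted(totals):
    print(total, Fraction(totals[total], 24))   # totals 2..10, flat at 4/24 for 5-7
```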
The method given in the D&D 4e DMG isn't bad: use a card deck.
I use index cards:
• 16 skirmishers
• 8 brutes
• 8 soldiers
• 6 artillery
• 4 controllers
• 2 lurkers
• 4 minions
• 1 solo.
So you make one card for each, just go through the deck.
Next, shuffle them up, and then run through it again to assign a level that ranges on each card from party level -2 to party level +5. with the majority of the numbers being equal to party level.
Shuffle them up again, turn to the back of the monster book of your choice (or monster builder or what-have-you) and pick out monsters that fit each card. For example "Brute level 4"- you just look what brutes fit. Mark it down.
You could be totally random about this or more structured - for example, doing an orc- or fey-themed deck. See examples below.
Finally, building encounters: figure out the encounter budget and draw the first few cards to fill up that encounter. Skirmishers count as two monsters, solos count as 1, minions count as 4. A lurker gets added in after you do the entire encounter. Keep drawing until it fills the XP budget up.
Final step: try to figure out how this makes an encounter. If it makes sense, you have an encounter. If it doesn't? Just draw more cards.
Examples:
Good: You draw a gnome arcanist, a pseudo dragon, and a group of wisp-wraiths (minions): great encounter. The gnome and the dragon are friends, adventuring together. The wisp wraiths are firefly spirits in a jar the gnome is carrying around. He just smashes the jar in round 1.
Needs work: You draw an orc raider, an orc brute, a halfling slinger, and several spiders. Hmm. You could drop the spiders and redraw. Maybe the halfling is a particularly charismatic bad guy, and the orcs are his gang.
As you start to increase the encounter area, adding new monster cards can be used to create a more themed area. For example- you could create a less random deck that is used just for underdark areas or a sylvan woods.
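A rough script of the deck procedure might look like the sketch below. The role counts follow the index-card breakdown above, and the budget weights are a loose reading of the "skirmishers count as two, minions count as 4" note, so adjust them to taste; monster names are left out, and the "add a lurker afterwards" step is left to the GM.

```python
import random

ROLE_COUNTS = {"skirmisher": 16, "brute": 8, "soldier": 8, "artillery": 6,
               "controller": 4, "lurker": 2, "minion": 4, "solo": 1}
# Loose reading of the counting note above; everything else counts as one monster.
BUDGET_WEIGHTS = {"skirmisher": 2, "minion": 4, "solo": 1}

def build_deck(party_level):
    deck = [{"role": role} for role, n in ROLE_COUNTS.items() for _ in range(n)]
    random.shuffle(deck)
    # Levels run from party level -2 to +5, weighted toward the party level itself.
    for card in deck:
        card["level"] = party_level + random.choice([-2, -1, 0, 0, 0, 0, 1, 2, 3, 4, 5])
    random.shuffle(deck)
    return deck

def draw_encounter(deck, budget):
    encounter, spent = [], 0
    while deck and spent < budget:
        card = deck.pop()
        encounter.append(card)
        spent += BUDGET_WEIGHTS.get(card["role"], 1)
    return encounter

deck = build_deck(party_level=4)
print(draw_encounter(deck, budget=6))
```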
-
Check out Abulafia, which is a sort of wiki random-stuff generator. It's had some server issues, but when it's up, it's awesome. You edit wiki pages to create lists of stuff. It can be nested and complicated, so you can use it to auto-generate words in a language using some simple rules and syllable lists, for example. You could easily do "nested" random encounter lists with different probabilities using this tool.
-
This works for lists in electronic format that you can easily edit, e.g. text files.
Have a list of encounters and events and whatnot, as usual. Whenever you should roll on it, roll for example a d6. That's the encounter. Next consider: was the encounter a one-off thing? If yes, remove it from the list. If no, move it to be the last one on the list.
When you get an idea for a new encounter you want to happen soon, add it to the beginning of the list. If you want it to maybe happen sometimes but there's no hurry, add it to the end.
As a bonus, you can roll pretty much any die and this works well. If your table has 13 entries and your roll of d20 is 16, simply treat it as 16-13 = 3 (so roll d20 and use the equivalence class modulo 13), though I'd rather use a smaller die to speed the table look-up. If you roll several dice and the table is long, the first few results won't be available, and though this could be handled, I'd rather simply roll a single die.
Clearing an area
To simulate clearing an area, add a bunch of "empty" or non-creature entries to the table. They'll get more common if you move them to the end of the table when they come up but remove other entries (perhaps unique monsters) when they are rolled and if they are defeated.
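For electronic lists, the whole scheme fits in a few lines. Here is a minimal sketch (entry names are placeholders) of the roll-and-rotate bookkeeping, including the modulo trick for oversized dice and the "empty" entries used for clearing an area.

```python
import random

# Each entry records whether it is a one-off; "empty" results simulate a cleared area.
table = [
    {"text": "goblin patrol", "one_off": False},
    {"text": "the hermit's escaped mule", "one_off": True},
    {"text": "empty", "one_off": False},
]

def roll_on(table, die=20):
    roll = random.randint(1, die)
    idx = (roll - 1) % len(table)      # e.g. a 16 on a 13-entry table becomes entry 3
    entry = table.pop(idx)
    if not entry["one_off"]:
        table.append(entry)            # recurring entries rotate to the end of the list
    return entry["text"]

def add_soon(table, text, one_off=False):
    table.insert(0, {"text": text, "one_off": one_off})   # "want it soon" goes to the front

def add_eventually(table, text, one_off=False):
    table.append({"text": text, "one_off": one_off})      # "no hurry" goes to the end

print(roll_on(table))
```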
-
Random encounter tables are a way to fill your game with encounters. In reality, by themselves they just tell the referee that in the immediate vicinity of the party there is a certain kind of creature or "local special", but they say nothing much more about the encounter (what the creature is doing, how it will react, what kind of interaction it will be).
The typical table is filled with local fauna (wolves, snakes, bloodbriar, cultists) and local specials (hymns sung by cultists in their hidden temple brought by the wind, the volcano erupts): just cram in whatever you expect to be encountered while moving about there. I recommend a healthy mix of the two kinds.
-
The Savage Worlds edition of The Day After Ragnarok has some very nice encounter tables for The Poisoned Lands that you could adapt to many different settings. After determining that there's an encounter at all, you roll for an encounter type (People, Animal, Event, etc.) and then each type has another table to roll on and sometimes sub-tables. There's an Allegiance table for People, for example. And on top of that, there's an entire Adventure Generator in the book too. Like most Ken Hite products, DaR is full of interesting ideas.
-
I created wilderness encounter cards that include poison ivy, losing items, hostile creatures, etc.
-
Welcome to the site. Please see our about page, and flip through the help when you get a chance. – LitheOhm Jul 14 '13 at 0:30
# What is integration Class 11?
## What is the integration in physics?
Integration is the calculation of an integral. Integrals in maths are used to find many useful quantities such as areas, volumes, displacement, etc.
## How do you solve integration in physics class 11?
1. ∫ x^n dx = x^(n+1)/(n+1) + C
2. ∫ 1 dx = x + C
3. ∫ e^x dx = e^x + C
4. ∫ 1/x dx = log|x| + C
5. ∫ a^x dx = a^x/log a + C
6. ∫ e^x (f(x) + f'(x)) dx = e^x f(x) + C
## What is integral calculus in physics class 11?
Integration is a method to find definite and indefinite integrals. The integration of a function f(x) is given by F(x) and is represented by ∫ f(x) dx = F(x) + C, where the R.H.S. of the equation indicates the integral of f(x) with respect to x. F(x) is called the anti-derivative or primitive. Integral calculus is the study of integrals and their properties. It is mostly useful for two purposes: to calculate f from f' (i.e. from its derivative), and to calculate the area under a curve. If a function f is differentiable in the interval of consideration, then f' is defined in that interval.
## What is integration formula?
Formula for integration: ∫ e^x dx = e^x + C.
## What is integration concept?
In an IT context, integration refers to the end result of a process that aims to stitch together different, often disparate, subsystems so that the data contained in each becomes part of a larger, more comprehensive system that, ideally, quickly and easily shares data when needed. In mathematics, integration is the reverse of differentiation: it adds up infinitesimal pieces of a quantity to recover the whole.
## What are the 5 basic integration formulas?
• ∫ x^n dx = x^(n+1)/(n+1) + C
• ∫ 1 dx = x + C
• ∫ e^x dx = e^x + C
• ∫ 1/x dx = log |x| + C
• ∫ a^x dx = a^x/log a + C
• ∫ e^x [f(x) + f'(x)] dx = e^x f(x) + C
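As a quick worked example that combines the power rule with the constant rule:

∫ (3x^2 + 5) dx = 3 · x^3/3 + 5x + C = x^3 + 5x + C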
## What is integration with example?
Integration by parts gives one example: if a function is the product of two other functions, f and one that can be recognized as the derivative of some function g, then the original problem can be solved if one can integrate the product g·Df. For example, if f = x and Dg = cos x, then ∫ x·cos x dx = x·sin x − ∫ sin x dx = x·sin x + cos x + C.
## Who is the father of integration?
Sir Isaac Newton was a mathematician and scientist, and he was the first person credited with developing calculus, including integration.
## What are the rules of integration?
• Power Rule.
• Sum Rule.
• Difference Rule.
• Multiplication by Constant.
• Product Rule.
## Why integration is used in physics?
Integration is nothing but addition: it is used when you are required to add many things together in less time. Quantities are rarely constant; they vary with time, space, energy, or any of a thousand other parameters, and calculus (differentiation and integration) is the engine that drives all of physics.
## What is practical use of integration?
In real life, integration is used in various fields such as engineering, where engineers use integrals to find the shape of a building; in physics, for example to locate the centre of gravity of a body; and in graphical representation, where three-dimensional models are demonstrated.
## Which chapters are included in calculus class 11?
• Basic Concepts of Relations and Function.
• Ordered pairs, sets of ordered pairs.
• Cartesian Product (Cross) of two sets, cardinal number of a cross product.
• Types of Relations: reflexive, symmetric, transitive and equivalence relation.
• Binary Operation.
• Domain, Range and Co-domain of a Relation.
• Functions.
## What is differentiation in physics class 11?
Differentiation is a process by which we can measure the rate of change of some quantity with respect to another quantity. The rates we get after differentiation are called derivatives. Suppose we have a function y = f(x); this is a function with an independent variable x and a dependent variable y.
## Is integration easy?
Integration is hard! Integration is generally much harder than differentiation.
## What is definite integral?
Definition of definite integral: the difference between the values of the integral of a given function f(x) for an upper value b and a lower value a of the independent variable x.
## What is integration of DX?
The symbol dx, called the differential of the variable x, indicates that the variable of integration is x. The function f(x) is called the integrand, the points a and b are called the limits (or bounds) of integration, and the integral is said to be over the interval [a, b], called the interval of integration.
## What is integration and types?
Integration is one of the two main concepts of Maths, and the integral assigns a number to a function. The two different types of integrals are the definite integral and the indefinite integral.
## What is method of integration?
Integration is a method of adding values on a large scale, where we cannot perform a general addition operation. There are multiple methods of integration, which are used in Mathematics to integrate functions.
## Why do we need integration?
Integration ensures that all systems work together and in harmony to increase productivity and data consistency. In addition, it aims to resolve the complexity associated with increased communication between systems, since it reduces the impact of changes that these systems may undergo.
## How many methods of integration are there?
There are many methods of integration, but the most common ones are five, namely Integration by Parts, Integration Using Partial Fractions, Integration by Substitution, Integration by Decomposition, and the Reverse Chain Rule.
## What is standard integral formula?
The standard integral formula is ∫ f(x) dx = F(x) + C, where F'(x) = f(x) and C is the constant of integration.
## What is the integration of 1?
Hence the integration of 1 is x + C, where C is the constant of integration.
## What are the 4 types of system integration?
• Point-to-Point Integration.
• Vertical Integration.
• Star Integration.
• Horizontal Integration.
# User:Tohline/Appendix/Ramblings/BiPolytropeStability
# Marginally Unstable Bipolytropes
Our aim is to determine whether or not there is a relationship between (1) equilibrium models at turning points along bipolytrope sequences and (2) bipolytropic models that are marginally (dynamically) unstable toward collapse (or dynamical expansion).
## Overview
Figure 1: Equilibrium Sequences of Pressure-Truncated Polytropes
We expect the content of this chapter — which examines the relative stability of bipolytropes — to parallel in many ways the content of an accompanying chapter in which we have successfully analyzed the relative stability of pressure-truncated polytropes. Figure 1, shown here on the right, has been copied from a closely related discussion. The curves show the mass-radius relationship for pressure-truncated model sequences having a variety of polytropic indexes, as labeled, over the range $1 \le n \le 6$. (Another version of this figure includes the isothermal sequence.) On each sequence for which $~n \ge 3$, the green filled circle identifies the model with the largest mass. We have shown analytically that the oscillation frequency of the fundamental mode of radial oscillation is precisely zero for each one of these maximum-mass models. As a consequence, we know that each green circular marker identifies the point along its associated sequence that separates dynamically stable (larger radii) from dynamically unstable (smaller radii) models.
In each case, the fundamental-mode oscillation frequency is precisely zero if, and only if, the adiabatic index governing expansions/contractions is related to the underlying structural polytropic index via the relation, $~\gamma_g = (n + 1)/n$, and if a constant surface-pressure boundary condition is imposed.
In another accompanying chapter, we have used purely analytic techniques to construct equilibrium sequences of spherically symmetric bipolytropes that have, $~(n_c,n_e) = (5,1)$. For a given choice of $~\mu_e/\mu_c$ — the ratio of the mean-molecular weight of envelope material to the mean-molecular weight of material in the core — a physically relevant sequence of models can be constructed by steadily increasing the value of the dimensionless radius at the core/envelope interface, $~\xi_i$, from zero to infinity. Figure 2, which has been copied from this separate chapter, shows how the fractional core mass, $\nu \equiv M_\mathrm{core}/M_\mathrm{tot}$, varies with the fractional core radius, $q \equiv r_\mathrm{core}/R$, along sequences having six different values of $~\mu_e/\mu_c$: 1 (blue diamonds), ½ (red squares), 0.345 (dark purple crosses), ⅓ (pink triangles), 0.309 (light green dashes), and ¼ (purple asterisks). Along each of the model sequences, points marked by solid-colored circles correspond to models whose interface parameter, $~\xi_i$, has one of three values: 0.5 (green circles), 1 (dark blue circles), or 3 (orange circles).
When modeling bipolytropes, the default expectation is that an increase in $\xi_i$ along a given sequence will correspond to an increase in the relative size — both the radius and the mass — of the core. As Figure 2 illustrates, this expectation is realized along the sequences marked by blue diamonds ($~\mu_e/\mu_c = 1$) and by red squares ($~\mu_e/\mu_c =$½). But the behavior is different along the other four illustrated sequences. For sufficiently large $~\xi_i$, the relative radius of the core begins to decrease. Furthermore, along sequences for which $~\mu_e/\mu_c < \tfrac{1}{3}$, eventually the fractional mass of the core reaches a maximum and, thereafter, decreases even as the value of $~\xi_i$ continues to increase. (Additional properties of these equilibrium sequences are discussed in yet another accompanying chapter.)
The principal question is: Along bipolytropic sequences, are maximum-mass models associated with the onset of dynamical instabilities?
## Planned Approach
Figure 2: Equilibrium Sequences of Bipolytropes with $~(n_c,n_e) = (5,1)$
Ideally we would like to answer the just-stated "principal question" using purely analytic techniques. But, to date, we have been unable to fully address the relevant issues analytically, even in what would be expected to be the simplest case: bipolytropic models that have $~(n_c,n_e) = (0, 0)$. Instead, we will streamline the investigation a bit and proceed — at least initially — using a blend of techniques. We will investigate the relative stability of bipolytropic models having $~(n_c,n_e) = (5,1)$ whose equilibrium structures are completely defined analytically; then the eigenvectors describing radial modes of oscillation will be determined, one at a time, by solving the relevant LAWE(s) numerically. We are optimistic that this can be successfully accomplished because we have had experience numerically integrating the LAWEs that govern the oscillations of related polytropic configurations, as detailed in accompanying chapters.
A key reference throughout this investigation will be the paper by J. O. Murphy & R. Fiedler (1985b, Proc. Astr. Soc. of Australia, 6, 222). They studied Radial Pulsations and Vibrational Stability of a Sequence of Two Zone Polytropic Stellar Models. Specifically, their underlying equilibrium models were bipolytropes that have $~(n_c,n_e) = (1, 5)$. In an accompanying chapter, we describe in detail how Murphy & Fiedler obtained these equilibrium bipolytropic structures and detail some of their equilibrium properties.
Here are the steps we initially plan to take:
• Governing LAWEs:
• Identify the relevant LAWEs that govern the behavior of radial oscillations in the $~n_c = 5$ core and, separately, in the $~n_e = 1$ envelope. Check these LAWE specifications against the published work of Murphy & Fiedler (1985b).
• Determine the matching conditions that must be satisfied across the core/envelope interface. Be sure to take into account the critical interface jump conditions spelled out by P. Ledoux & Th. Walraven (1958), as we have already discussed in the context of an analysis of radial oscillations in zero-zero bipolytropes.
• Determine what surface boundary condition should be imposed on physically relevant LAWE solutions, i.e., on the physically relevant radial-oscillation eigenvectors.
• Initial Analysis:
• Choose a maximum-mass model along the bipolytropic sequence that has, for example, $~\mu_e/\mu_c = 1/4$. Hopefully, we will be able to identify precisely (analytically) where this maximum-mass model lies along the sequence. Yes! Our earlier analysis does provide an analytic prescription of the model that sits at the maximum-mass location along the chosen sequence.
• Solve the relevant eigenvalue problem for this specific model, initially for $~(\gamma_c, \gamma_e) = (6/5, 2)$ and initially for the fundamental mode of oscillation.
# Review of the Analysis by Murphy & Fiedler (1985b)
In the stability analysis presented by Murphy & Fiedler (1985b), the relevant polytropic indexes are, $~(n_c, n_e) = (1,5)$. Structural properties of the underlying equilibrium models have been reviewed in our accompanying discussion.
The Linear Adiabatic Wave Equation (LAWE) that is relevant to polytropic spheres may be written as,
$~0 = \frac{d^2x}{d\xi^2} + \biggl[ 4 - (n+1) Q \biggr] \frac{1}{\xi} \cdot \frac{dx}{d\xi} + (n+1) \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_g } \biggr) \frac{\xi^2}{\theta} - \alpha Q\biggr] \frac{x}{\xi^2}$ where: $~Q(\xi) \equiv - \frac{d\ln\theta}{d\ln\xi} \, ,$ $~\sigma_c^2 \equiv \frac{3\omega^2}{2\pi G\rho_c} \, ,$ and, $~\alpha \equiv \biggl(3 - \frac{4}{\gamma_\mathrm{g}}\biggr)$
See also …
Accompanying chapter showing derivation and overlap with multiple classic papers:
• A. S. Eddington (1926), especially equation (127.6) on p. 188 — The Internal Constitution of Stars
• P. Ledoux & C. L. Pekeris (1941, ApJ, 94, 124) — Radial Pulsations of Stars
• M. Schwarzschild (1941, ApJ, 94, 245) — Overtone Pulsations for the Standard Model
• R. F. Christy (1966, Annual Reviews of Astronomy & Astrophysics, 4, 353) — Pulsation Theory
• J. P. Cox (1974, Reports on Progress in Physics, 37, 563) — Pulsating Stars
Accompanying chapter detailing specific application to polytropes, along with a couple of additional key references:
• M. Hurley, P. H. Roberts, & K. Wright (1966, ApJ, 143, 535) — The Oscillations of Gas Spheres
• Murphy & Fiedler (1985b) — Radial Pulsations and Vibrational Stability of a Sequence of Two Zone Polytropic Stellar Models
As we have detailed separately, the boundary condition at the center of a polytropic configuration is,
$~\frac{dx}{d\xi} \biggr|_{\xi=0} = 0 \, ;$
and the boundary condition at the surface of an isolated polytropic configuration is,
$~\frac{d\ln x}{d\ln\xi}$ $~=$ $~- \alpha + \frac{\omega^2}{\gamma_g } \biggl( \frac{1}{4\pi G \rho_c } \biggr) \frac{\xi}{(-\theta^')}$ at $~\xi = \xi_s \, .$
But this surface condition is not applicable to bipolytropes. Instead, let's return to the original, more general expression of the surface boundary condition:
$~ \frac{d\ln x}{d\ln\xi}\biggr|_s$ $~=$ $~- \alpha + \frac{\omega^2 R^3}{\gamma_g GM_\mathrm{tot}} \, .$
Utilizing an accompanying discussion, let's examine the frequency normalization used by Murphy & Fiedler (1985b) (see the top of the left-hand column on p. 223):
$~\Omega^2$ $~\equiv$ $~ \omega^2 \biggl[ \frac{R^3}{GM_\mathrm{tot}} \biggr]$ $~=$ $~ \omega^2 \biggl[ \frac{3}{4\pi G \bar\rho} \biggr] = \omega^2 \biggl[ \frac{3}{4\pi G \rho_c} \biggr] \frac{\rho_c}{\bar\rho} = \frac{3\omega^2}{(n_c+1)} \biggl[ \frac{(n_c+1)}{4\pi G \rho_c} \biggr] \frac{\rho_c}{\bar\rho}$ $~=$ $~ \frac{3\omega^2}{(n_c+1)} \biggl[ \frac{a_n^2\rho_c}{P_c} \cdot \theta_c \biggr] \frac{\rho_c}{\bar\rho} = \frac{3\gamma}{(n_c+1)} \frac{\rho_c}{\bar\rho} \biggl[ \frac{a_n^2\rho_c}{P_c} \cdot \frac{\omega^2 \theta_c}{\gamma} \biggr] \, .$
For a given radial quantum number, $~k$, the factor inside the square brackets in this last expression is what Murphy & Fiedler (1985b) refer to as $~\omega^2_k \theta_c$. Keep in mind, as well, that, in the notation we are using,
$~\sigma_c^2$ $~\equiv$ $~\frac{3\omega^2}{2\pi G \rho_c}$ $~\Rightarrow ~~~ \sigma_c^2$ $~=$ $~ \biggl( \frac{2\bar\rho}{\rho_c}\biggr) \Omega^2 = \frac{6\gamma}{(n_c+1)} \biggl[ \frac{a_n^2\rho_c}{P_c} \cdot \frac{\omega^2 \theta_c}{\gamma} \biggr] = \frac{6\gamma}{(n_c+1)} \biggl[ \omega_k^2 \theta_c \biggr] \, .$
This also means that the surface boundary condition may be rewritten as,
$~ \frac{d\ln x}{d\ln\xi}\biggr|_s$ $~=$ $~\frac{\Omega^2}{\gamma_g } - \alpha \, .$
Let's apply these relations to the core and envelope, separately.
## Envelope Layers With n = 5
The LAWE for n = 5 structures is, then,
$~0$ $~=$ $~ \frac{d^2x}{d\eta^2} + \biggl[ 4 - 6Q_5 \biggr] \frac{1}{\eta} \cdot \frac{dx}{d\eta} + 6 \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_\mathrm{env} } \biggr) \frac{\eta^2}{\phi} - \alpha_\mathrm{env} Q_5\biggr] \frac{x}{\eta^2}$
where,
$~Q_5$ $~\equiv$ $~- \frac{d\ln\phi}{d\ln\eta} \, .$
$~\phi$ $~=$ $~\frac{B_0^{-1}\sin\Delta}{\eta^{1/2}(3-2\sin^2\Delta)^{1/2}} \, ,$
and,
$~\frac{d\phi}{d\eta}$ $~=$ $~ \frac{B_0^{-1}[3\cos\Delta-3\sin\Delta + 2\sin^3\Delta] }{2\eta^{3/2}(3-2\sin^2\Delta)^{3/2}} \, .$
where $~A_0$ is a "homology factor," $~B_0$ is an overall scaling coefficient, and we have introduced the notation,
$~\Delta \equiv \ln(A_0\eta)^{1/2} = \frac{1}{2} (\ln A_0 + \ln\eta) \, .$
Hence,
$~Q_5$ $~=$ $~ - \eta \biggl[ \frac{\eta^{1/2}(3-2\sin^2\Delta)^{1/2}}{B_0^{-1}\sin\Delta} \biggr] \frac{B_0^{-1}[3\cos\Delta-3\sin\Delta + 2\sin^3\Delta] }{2\eta^{3/2}(3-2\sin^2\Delta)^{3/2}}$ $~=$ $~ \frac{ 3\sin\Delta - 3\cos\Delta - 2\sin^3\Delta }{2 \sin\Delta (3-2\sin^2\Delta)} \, .$
And,
$~0$ $~=$ $~ \frac{d^2x}{d\eta^2} ~+~ \biggl[ 4 + \frac{ 3(3\cos\Delta - 3\sin\Delta + 2\sin^3\Delta) }{ \sin\Delta (3-2\sin^2\Delta)} \biggr] \frac{1}{\eta} \cdot \frac{dx}{d\eta} ~+~ \biggl[ \biggl( \frac{\sigma_c^2}{\gamma_\mathrm{env} } \biggr) \frac{B_0 \eta^{1/2}(3-2\sin^2\Delta)^{1/2}}{\sin\Delta} ~+~ \frac{ 3\alpha_\mathrm{env} (3\cos\Delta -3\sin\Delta + 2\sin^3\Delta )}{\eta^2 \sin\Delta (3-2\sin^2\Delta)}\biggr] x$ $~=$ $~ \frac{d^2x}{d\eta^2} ~+~ \biggl[ 4 ~+~ \frac{ 3(3\cos\Delta - \tfrac{3}{2}\sin\Delta - \tfrac{1}{2}\sin3\Delta) }{ \sin\Delta (2 + \cos2\Delta)} \biggr] \frac{1}{\eta} \cdot \frac{dx}{d\eta} ~+~ \biggl[\omega^2_k \theta_c \biggl( \frac{\gamma_g}{\gamma_\mathrm{env} } \biggr) \frac{B_0 \eta^{1/2}(2 + \cos2\Delta)^{1/2}}{\sin\Delta} ~+~ \frac{ 3\alpha_\mathrm{env} (3\cos\Delta -\tfrac{3}{2}\sin\Delta - \tfrac{1}{2}\sin3\Delta )}{\eta^2 \sin\Delta (2 + \cos2\Delta)}\biggr] x \, ,$
which matches the expression presented by Murphy & Fiedler (1985b) (see middle of the left column on p. 223 of their article) if we set $~\theta_c = 1$ and $~\gamma_g/\gamma_\mathrm{env} = 1$.
## Surface Boundary Condition
Next, pulling from our accompanying discussion of the stability of polytropes and an accompanying table that details the properties of $~(n_c, n_e) = (1, 5)$ bipolytropes, the surface boundary condition is,
$~ \frac{d\ln x}{d\ln\eta}\biggr|_s$ $~=$ $~- \biggl(\frac{\gamma_g}{\gamma_\mathrm{env}}\biggr) \alpha + \frac{\omega^2 R^3}{\gamma_\mathrm{env} GM_\mathrm{tot}}$ $~\Rightarrow ~~~ \frac{d\ln x}{d\ln\eta}\biggr|_s + \biggl(\frac{\gamma_g}{\gamma_\mathrm{env}}\biggr) \alpha$ $~=$ $~ \frac{\omega^2 (R_s^*)^3}{\gamma_\mathrm{env} GM^*_\mathrm{tot}} \biggl( \frac{K_c}{G}\biggr)^{3 / 2}\biggl( \frac{K_c}{G}\biggr)^{-3 / 2} \frac{1}{\rho_0}$ $~=$ $~ \frac{\omega^2 }{\gamma_\mathrm{env} G\rho_0 } \biggl[ (2\pi)^{-1/2} \xi_i e^{2(\pi - \Delta_i)} \biggr]^3 \biggl[ \biggl( \frac{3}{2\pi} \biggr)^{1/2} \sin\xi_i \biggl( \frac{3}{\sin^2\Delta_i} - 2 \biggr)^{1/2} e^{(\pi - \Delta_i)} \biggr]^{-1} \biggl( \frac{\mu_e}{\mu_c}\biggr)$ $~=$ $~ \frac{\omega^2 }{\gamma_\mathrm{env}(2\pi G\rho_0)} \biggl( \frac{\mu_e}{\mu_c}\biggr) \frac{1}{\sqrt{3}} \biggl[ \frac{\xi_i^2}{\theta_i} \biggr] \biggl( \frac{3}{\sin^2\Delta_i} - 2 \biggr)^{-1 / 2} e^{5(\pi - \Delta_i)}$ $~=$ $~ \frac{\omega^2 }{\gamma_\mathrm{env}(2\pi G\rho_0)} \biggl( \frac{\mu_e}{\mu_c}\biggr) \frac{e^{5\pi}}{\sqrt{3}} \biggl[ \frac{\xi_i^2}{\theta_i} \biggr] \xi_i^{1 / 2}B\theta_i (\xi_i A)^{-5/2}$ $~=$ $~ \frac{\omega^2 }{\gamma_\mathrm{env}(2\pi G\rho_0)} \biggl( \frac{\mu_e}{\mu_c}\biggr) \frac{B e^{5\pi}}{\sqrt{3} ~A^{5 / 2}}$ $~=$ $~ \frac{2\omega_k^2 \theta_c}{(n_c+1)} \biggl( \frac{\mu_e}{\mu_c}\biggr) \frac{B e^{5\pi}}{\sqrt{3} ~A^{5 / 2}} \, .$
After acknowledging that, in their specific stability analysis, $~\theta_c = 1$, $~n_c = 1$, and $~\mu_e/\mu_c = 1$, this right-hand-side expression matches the equivalent term published by Murphy & Fiedler (1985b) (see the bottom of the left-hand column on p. 223).
## Core Layers With n = 1
And for n = 1 structures the LAWE is,
$~0$ $~=$ $~ \frac{d^2x}{d\xi^2} + \biggl[ 4 - 2 Q_1 \biggr] \frac{1}{\xi} \cdot \frac{dx}{d\xi} + 2 \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_\mathrm{core} } \biggr) \frac{\xi^2}{\theta} - \alpha_\mathrm{core} Q_1\biggr] \frac{x}{\xi^2}$
where,
$~Q_1$ $~\equiv$ $~- \frac{d\ln\theta}{d\ln\xi} \, .$
Given that, for $~n = 1$ polytropic structures,
$\theta(\xi) = \frac{\sin\xi}{\xi}$ and $\frac{d\theta}{d\xi} = \biggl[ \frac{\cos\xi}{\xi}- \frac{\sin\xi}{\xi^2}\biggr]$
we have,
$~Q_1$ $~=$ $~ - \frac{\xi^2}{\sin\xi} \biggl[ \frac{\cos\xi}{\xi}- \frac{\sin\xi}{\xi^2}\biggr]$ $~=$ $~ 1 - \xi\cot\xi \, .$
Hence, the governing LAWE for the core is,
$~0$ $~=$ $~ \frac{d^2x}{d\xi^2} + \biggl[ 4 - 2 ( 1 - \xi\cot\xi ) \biggr] \frac{1}{\xi} \cdot \frac{dx}{d\xi} + 2 \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_\mathrm{core} } \biggr) \frac{\xi^3}{\sin\xi} - \alpha_\mathrm{core} ( 1 - \xi\cot\xi )\biggr] \frac{x}{\xi^2}$ $~=$ $~ \frac{d^2x}{d\xi^2} + \biggl[ 1 + \xi\cot\xi \biggr] \frac{2}{\xi} \cdot \frac{dx}{d\xi} + 2 \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_\mathrm{core} } \biggr) \frac{\xi^3}{\sin\xi} - \alpha_\mathrm{core} ( 1 - \xi\cot\xi )\biggr] \frac{x}{\xi^2} \, .$
This can be rewritten as,
$~0$ $~=$ $~ \frac{d^2x}{d\xi^2} + \frac{2}{\xi} \biggl[ 1 + \xi\cot\xi \biggr]\frac{dx}{d\xi} + \biggl[ \biggl( \frac{\sigma_c^2}{3\gamma_\mathrm{core} } \biggr) \frac{\xi}{\sin\xi} + \frac{2 \alpha_\mathrm{core} ( \xi\cos\xi - \sin\xi) }{\xi^2 \sin\xi} \biggr] x$ $~=$ $~ \frac{d^2x}{d\xi^2} + \frac{2}{\xi} \biggl[ 1 + \xi\cot\xi \biggr]\frac{dx}{d\xi} + \biggl[ \frac{\gamma_g}{\gamma_\mathrm{core}}\biggl( \omega_k^2 \theta_c \biggr) \frac{\xi}{\sin\xi} + \frac{2 \alpha_\mathrm{core} ( \xi\cos\xi - \sin\xi) }{\xi^2 \sin\xi} \biggr] x \, ,$
which matches the expression presented by Murphy & Fiedler (1985b) (see middle of the left column on p. 223 of their article) if we set $~\theta_c = 1$ and $~\gamma_g/\gamma_\mathrm{core} = 1$. This LAWE also appears in our separate discussion of radial oscillations in n = 1 polytropic spheres.
## Interface Conditions
Here, we will simply copy the discussion already provided in the context of our attempt to analyze the stability of $~(n_c, n_e) = (0, 0)$ bipolytropes; specifically, we will draw from STEP 4: in the Piecing Together subsection. Following the discussion in §§57 & 58 of P. Ledoux & Th. Walraven (1958), the proper treatment is to ensure that fractional perturbation in the gas pressure (see their equation 57.31),
$~\frac{\delta P}{P}$ $~=$ $~- \gamma x \biggl( 3 + \frac{d\ln x}{d\ln \xi} \biggr) \, ,$
is continuous across the interface. That is to say, at the interface $~(\xi = \xi_i)$, we need to enforce the relation,
$~0$ $~=$ $~\biggl[ \gamma_c x_\mathrm{core} \biggl( 3 + \frac{d\ln x_\mathrm{core}}{d\ln \xi} \biggr) - \gamma_e x_\mathrm{env} \biggl( 3 + \frac{d\ln x_\mathrm{env}}{d\ln \xi} \biggr)\biggr]_{\xi=\xi_i}$ $~=$ $~\gamma_e \biggl[ \frac{\gamma_c}{\gamma_e} \biggl( 3 + \frac{d\ln x_\mathrm{core}}{d\ln \xi} \biggr) - \biggl( 3 + \frac{d\ln x_\mathrm{env}}{d\ln \xi} \biggr)\biggr]_{\xi=\xi_i}$ $~\Rightarrow~~~ \frac{d\ln x_\mathrm{env}}{d\ln \xi} \biggr|_{\xi=\xi_i}$ $~=$ $~3\biggl(\frac{\gamma_c}{\gamma_e} -1\biggr) + \frac{\gamma_c}{\gamma_e} \biggl( \frac{d\ln x_\mathrm{core}}{d\ln \xi} \biggr)_{\xi=\xi_i} \, .$
In the context of this interface-matching constraint (see their equation 62.1), P. Ledoux & Th. Walraven (1958) state the following: In the static (i.e., unperturbed equilibrium) model … discontinuities in $~\rho$ or in $~\gamma$ might occur at some [radius]. In the first case — that is, a discontinuity only in density, while $~\gamma_e = \gamma_c$ — the interface conditions imply the continuity of $~\tfrac{1}{x} \cdot \tfrac{dx}{d\xi}$ at that [radius]. In the second case — that is, a discontinuity in the adiabatic exponent — the dynamical condition may be written as above. This implies a discontinuity of the first derivative at any discontinuity of $~\gamma$.
The algorithm that Murphy & Fiedler (1985b) used to "… [integrate] through each zone …" was designed "… with continuity in $~x$ and $~dx/d\xi$ being imposed at the interface …" Given that they set $~\gamma_c = \gamma_e = 5/3$, their interface matching condition is consistent with the one prescribed by P. Ledoux & Th. Walraven (1958).
## Our Numerical Integration
Let's try to integrate this bipolytrope's LAWE from the center, outward, using as a guideline an accompanying Numerical Integration outline. Generally, for any polytropic index, the relevant LAWE can be written in the form,
$~\theta_i {x_i''}$ $~=$ $~- \biggl[\mathcal{A} \biggr] \frac{x_i'}{\xi_i} - \frac{(n+1)}{6} \biggl[ \mathcal{B} \biggr] x_i$
where,
$~\mathcal{A}$ $~\equiv$ $~ 4\theta_i - (n+1)\xi_i (- \theta^')_i = \theta_i [ 4 - (n+1)Q_i]$ $~\mathcal{B}$ $~\equiv$ $~ \frac{\sigma_c^2}{\gamma_g} - 2\alpha \biggl(- \frac{3\theta^'}{\xi} \biggr)_i = \mathfrak{F} + 2\alpha \biggl[ 1 - \biggl(- \frac{3\theta^'}{\xi} \biggr)_i \biggr] = \mathfrak{F} + 2\alpha \biggl[ 1 - \frac{3\theta_i}{\xi_i^2} \cdot Q_i \biggr]$ $~ \mathfrak{F}$ $~\equiv$ $~ \biggl[ \frac{\sigma_c^2}{\gamma_g} - 2\alpha\biggr] = \biggl[ \frac{\sigma_c^2}{\gamma_g} - 2\biggl(3 - \frac{4}{\gamma_g} \biggr) \biggr] = \biggl[ \frac{(8 + \sigma_c^2)}{\gamma_g} - 6\biggr]$ $~\Rightarrow~~~$ $~ \sigma_c^2 = \gamma_g (\mathfrak{F} + 6) -8 \, .$
This leads to a discrete, finite-difference representation of the form,
$~x_+ \biggl[2\theta_i + \frac{\delta\xi}{\xi_i} \cdot \mathcal{A}\biggr]$ $~=$ $~ x_- \biggl[\frac{\delta\xi }{\xi_i} \cdot \mathcal{A} - 2\theta_i\biggr] + x_i\biggl\{4\theta_i - \frac{(\delta\xi)^2(n+1)}{3}\cdot \mathcal{B} \biggr\} \, .$
This provides an approximate expression for $~x_+ \equiv x_{i+1}$, given the values of $~x_- \equiv x_{i-1}$ and $~x_i$; this works for all zones, $~i = 3 \rightarrow N$ as long as the center of the configuration is denoted by the grid index, $~i=1$. Note that,
$~\delta\xi$ $~\equiv$ $~\frac{\xi_\mathrm{max}}{(N - 1)}$ and $~\xi_i$ $~=$ $~(i-1)\delta\xi \, .$
In order to kick-start the integration, we will set the displacement function value to $~x_1 = 1$ at the center of the configuration $~(\xi_1 = 0)$, then we will draw on the derived power-series expression to determine the value of the displacement function at the first radial grid line, $~\xi_2 = \delta\xi$, away from the center. Specifically, we will set,
$~ x_2$ $~=$ $~ x_1 \biggl[ 1 - \frac{(n+1) \mathfrak{F} (\delta\xi)^2}{60} \biggr] \, .$
### Integration Through the n = 1 Core
For an $~n = 1$ core, we have,
$\theta_i = \frac{\sin\xi_i}{\xi_i}$ and $Q_i = 1 - \xi_i \cot\xi_i \, .$
Hence,
$~\mathcal{A}_\mathrm{core}$ $~=$ $~ \frac{\sin\xi_i}{\xi_i} \biggl[ 4 - 2(1 - \xi_i \cot\xi_i) \biggr] = \frac{2\sin\xi_i}{\xi_i} \biggl[ 1 + \xi_i \cot\xi_i \biggr]$ $~\mathcal{B}_\mathrm{core}$ $~=$ $~ \mathfrak{F}_\mathrm{core} + 2\alpha_\mathrm{core} \biggl[ 1 - \frac{3\theta_i}{\xi_i^2} \cdot Q_i \biggr] = \mathfrak{F}_\mathrm{core} + 2\alpha_\mathrm{core} \biggl[ 1 - \frac{3\sin\xi_i}{\xi_i^3} \biggl( 1 - \xi_i \cot\xi_i \biggr)\biggr] \, .$
So, first we choose a value of $~\sigma_c^2$ and $~\gamma_c$, which means,
$~ \mathfrak{F}_\mathrm{core}$ $~\equiv$ $~ \biggl[ \frac{(8 + \sigma_c^2)}{\gamma_c} - 6\biggr]$
Then, moving from the center of the configuration, outward to the interface at $~\xi_i = \xi_\mathrm{interface} ~~ \Rightarrow ~~\delta\xi = \xi_\mathrm{interface}/(N-1)$, we have,
$~x_1$ $~=$ $~1 \, ,$ $~ x_2$ $~=$ $~ x_1 \biggl[ 1 - \frac{\mathfrak{F}_\mathrm{core} (\delta\xi)^2}{30} \biggr] \, ,$ for $~i = 2 \rightarrow N \, ,$ $~x_{i+1} \biggl[2\theta_i + \frac{\delta\xi}{\xi_i} \cdot \mathcal{A}_\mathrm{core} \biggr]$ $~=$ $~ x_{i-1} \biggl[\frac{\delta\xi }{\xi_i} \cdot \mathcal{A}_\mathrm{core} - 2\theta_i\biggr] + x_i\biggl\{4\theta_i - \frac{(\delta\xi)^2(n+1)}{3}\cdot \mathcal{B}_\mathrm{core} \biggr\} \, .$
At the interface — that is, when $~i=N$ — the logarithmic slope of the displacement function is,
$~\frac{d\ln x}{d\ln\xi}\biggr|_\mathrm{interface}$ $~\approx$ $~ \frac{\xi_N}{x_N} \cdot \frac{(x_{N+1} - x_{N-1})}{2\delta\xi} \, .$
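As a concrete illustration of the recipe above, here is a minimal Python sketch of the outward integration through the n = 1 core. The function and variable names are ours (not Murphy & Fiedler's), and the grid size and mode parameters used in the usage line are simply the Model 10 values quoted later in this chapter.

```python
import numpy as np

def integrate_core(sigma_c2, gamma_c, xi_interface, N=79):
    """March the n = 1 core LAWE from the center to the interface.

    Returns the displacement amplitudes x[1..N] on the grid
    xi_i = (i - 1) * dxi, plus the logarithmic slope
    d ln x / d ln xi at the interface (grid line N)."""
    n = 1
    alpha = 3.0 - 4.0 / gamma_c                          # alpha_core
    F = (8.0 + sigma_c2) / gamma_c - 6.0                 # script-F for the core
    dxi = xi_interface / (N - 1)

    x = np.zeros(N + 2)                                  # x[0] unused; x[N+1] is the projected zone
    x[1] = 1.0                                           # central amplitude
    x[2] = x[1] * (1.0 - (n + 1) * F * dxi**2 / 60.0)    # power-series kick-start

    for i in range(2, N + 1):                            # builds x[3] ... x[N+1]
        xi = (i - 1) * dxi
        theta = np.sin(xi) / xi
        Q = 1.0 - xi / np.tan(xi)
        A = theta * (4.0 - (n + 1) * Q)
        B = F + 2.0 * alpha * (1.0 - 3.0 * theta * Q / xi**2)
        lhs = 2.0 * theta + (dxi / xi) * A
        x[i + 1] = (x[i - 1] * ((dxi / xi) * A - 2.0 * theta)
                    + x[i] * (4.0 * theta - dxi**2 * (n + 1) * B / 3.0)) / lhs

    xi_N = (N - 1) * dxi
    slope = (xi_N / x[N]) * (x[N + 1] - x[N - 1]) / (2.0 * dxi)
    return x[1:N + 1], slope

# Example call with the Model 10 interface radius and fundamental-mode frequency quoted below.
x_core, slope_interface = integrate_core(sigma_c2=0.928131, gamma_c=5.0/3.0, xi_interface=2.5646)
```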
### Interface
Keep in mind that, as has been detailed in the accompanying equilibrium structure chapter, for $~(n_c, n_e) = (1, 5)$ bipolytropes,
$~r^*$ $~=$ $~\biggl( \frac{1}{2\pi}\biggr)^{1 / 2} \xi \, ,$ for, $~0 \le \xi \le \xi_\mathrm{interface} \, .$ $~r^*$ $~=$ $~\biggl[ \biggl( \frac{\mu_e}{\mu_c}\biggr)^{-1} \biggl( \frac{3}{2\pi}\biggr)^{1 / 2} \biggr] \eta \, ,$ for, $~\eta_\mathrm{interface} \le \eta \le \eta_s \, ;$ $~\eta_s$ $~=$ $~ \frac{1}{\sqrt{3}}\biggl( \frac{\mu_e}{\mu_c}\biggr) \xi_s = \frac{1}{\sqrt{3}}\biggl( \frac{\mu_e}{\mu_c}\biggr) \biggl[ \xi e^{2(\pi - \Delta)} \biggr]_\mathrm{interface} \, .$
We now need to determine what the slope is at the interface, viewed from the perspective of the envelope. From above, we deduce that,
$~\frac{d\ln y}{d\ln \eta} \biggr|_\mathrm{interface}$ $~=$ $~3\biggl(\frac{\gamma_c}{\gamma_e} -1\biggr) + \frac{\gamma_c}{\gamma_e} \cdot \frac{d\ln x}{d\ln\xi}\biggr|_\mathrm{interface} \, .$
Hence, letting the subscript "1" denote the interface location as viewed from the envelope, we have,
$~\frac{\eta_1}{y_1} \cdot \frac{(y_2 - y_0)}{ (\eta_2 - \eta_0)}$ $~=$ $~3\biggl(\frac{\gamma_c}{\gamma_e} -1\biggr) + \frac{\gamma_c}{\gamma_e} \cdot \frac{d\ln x}{d\ln\xi}\biggr|_\mathrm{interface} \, .$ $~\Rightarrow ~~~ y_0$ $~=$ $~y_2 - \frac{2 (\delta\eta) y_1}{\eta_1} \biggl\{ 3\biggl(\frac{\gamma_c}{\gamma_e} -1\biggr) + \frac{\gamma_c}{\gamma_e} \cdot \frac{d\ln x}{d\ln\xi}\biggr|_\mathrm{interface} \biggr\} \, .$
### Integration Through the n = 5 Envelope
For an $~n = 5$ envelope, we have,
$\phi_i = \frac{B_0^{-1}\sin\Delta}{\eta^{1/2}(3-2\sin^2\Delta)^{1/2}}$ and $Q_i = - \frac{d\ln\phi}{d\ln\eta} = \frac{ 3\sin\Delta - 3\cos\Delta - 2\sin^3\Delta }{2 \sin\Delta (3-2\sin^2\Delta)}\, ,$
where $~A_0$ is a "homology factor," $~B_0$ is an overall scaling coefficient, and we have introduced the notation,
$~\Delta \equiv \ln(A_0\eta)^{1/2} = \frac{1}{2} (\ln A_0 + \ln\eta) \, .$
Hence,
$~\mathcal{A}_\mathrm{env}$ $~\equiv$ $~ \phi_i [ 4 - (n+1)Q_i]$ $~=$ $~ \frac{B_0^{-1}\sin\Delta}{\eta^{1/2}(3-2\sin^2\Delta)^{1/2}}\biggl\{ 4 ~-~ 6 \biggl[ \frac{ 3\sin\Delta - 3\cos\Delta - 2\sin^3\Delta }{2 \sin\Delta (3-2\sin^2\Delta)} \biggr] \biggr\}$ $~\mathcal{B}_\mathrm{env}$ $~\equiv$ $~ \frac{\sigma_c^2}{\gamma_e} - 2\alpha_\mathrm{env} \biggl(- \frac{3\phi^'}{\eta} \biggr)_i$ $~=$ $~ \frac{1}{\gamma_e}\biggl[ \gamma_c (\mathfrak{F}_\mathrm{core} + 6) -8 \biggr] - 2\biggl[ 3 - \frac{4}{\gamma_e}\biggr] Q_i \biggl( \frac{3\phi_i }{\eta_i^2} \biggr)$ $~=$ $~ \frac{\gamma_c}{\gamma_e}\biggl[ \mathfrak{F}_\mathrm{core} + 6 -\frac{8}{\gamma_c} \biggr] - 6\biggl[ 3 - \frac{4}{\gamma_e}\biggr] \frac{B_0^{-1}\sin\Delta}{\eta^{5/2}(3-2\sin^2\Delta)^{1/2}} \biggl[ \frac{ 3\sin\Delta - 3\cos\Delta - 2\sin^3\Delta }{2 \sin\Delta (3-2\sin^2\Delta)} \biggr]$ $~=$ $~ \frac{\gamma_c}{\gamma_e}\biggl[ \mathfrak{F}_\mathrm{core} + 6 -\frac{8}{\gamma_c} \biggr] - 3B_0^{-1}\biggl[ 3 - \frac{4}{\gamma_e}\biggr] \biggl[ \frac{ 3\sin\Delta - 3\cos\Delta - 2\sin^3\Delta }{ \eta^{5/2}(3-2\sin^2\Delta)^{3 / 2}} \biggr] \, .$
This leads to a discrete, finite-difference representation of the form,
$~y_+ \biggl[2\phi_i + \frac{\delta\eta}{\eta_i} \cdot \mathcal{A}_\mathrm{env} \biggr]$ $~=$ $~ y_- \biggl[\frac{\delta\eta }{\eta_i} \cdot \mathcal{A}_\mathrm{env} - 2\phi_i\biggr] + y_i\biggl[4\phi_i - 2(\delta\eta)^2 \cdot \mathcal{B}_\mathrm{env} \biggr]$
This provides an approximate expression for $~y_+ \equiv y_{i+1}$, given the values of $~y_- \equiv y_{i-1}$ and $~y_i$; this works for all zones, $~i = 3 \rightarrow M$ as long as the interface between the core and the envelope of the configuration is denoted by the grid index, $~i=1$. Note that,
$~\delta\eta$ $~\equiv$ $~\frac{\eta_\mathrm{surf}- \eta_\mathrm{interface} }{M - 1}$ and $~\eta_i$ $~=$ $~\eta_\mathrm{interface} + (i-1)\delta\eta \, .$
At the interface, we need special treatment in order to ensure that both the amplitude and the first derivative of the displacement function behave properly. Specifically, when $~i = 1$, we must set, $~y_1 = x_N$ and $~\eta_1 = (\mu_e/\mu_c)\xi_N/\sqrt{3}$. Then the value of $~y_2$ is obtained from the expression,
$~y_2 \biggl[2\phi_1 + \frac{\delta\eta}{\eta_1} \cdot \mathcal{A}_\mathrm{env} \biggr]$ $~=$ $~ y_0 \biggl[\frac{\delta\eta }{\eta_1} \cdot \mathcal{A}_\mathrm{env} - 2\phi_1\biggr] + y_1\biggl\{4\phi_1 - 2(\delta\eta)^2 \cdot \mathcal{B}_\mathrm{env} \biggr\}$ $~=$ $~ y_1\biggl[ 4\phi_1 - 2(\delta\eta)^2 \cdot \mathcal{B}_\mathrm{env} \biggr] + \biggl[\frac{\delta\eta }{\eta_1} \cdot \mathcal{A}_\mathrm{env} - 2\phi_1\biggr] \biggl\{ y_2 ~-~ \frac{2 (\delta\eta) y_1}{\eta_1} \biggl[ 3\biggl(\frac{\gamma_c}{\gamma_e} -1\biggr) + \frac{\gamma_c}{\gamma_e} \cdot \frac{d\ln x}{d\ln\xi}\biggr|_\mathrm{interface} \biggr] \biggr\}$ $~\Rightarrow~~~y_2 \biggl[4\phi_1 \biggr]$ $~=$ $~ y_1\biggl[ 4\phi_1 - 2(\delta\eta)^2 \cdot \mathcal{B}_\mathrm{env} \biggr] ~-~ \frac{2 (\delta\eta) y_1}{\eta_1} \biggl[\frac{\delta\eta }{\eta_1} \cdot \mathcal{A}_\mathrm{env} - 2\phi_1\biggr] \biggl\{3\biggl(\frac{\gamma_c}{\gamma_e} -1\biggr) + \frac{\gamma_c}{\gamma_e} \cdot \frac{d\ln x}{d\ln\xi}\biggr|_\mathrm{interface} \biggr\}$ $~\Rightarrow~~~\frac{y_2}{y_1} \biggl[\phi_1 \biggr]$ $~=$ $~ \biggl[ \phi_1 - \frac{(\delta\eta)^2}{2} \cdot \mathcal{B}_\mathrm{env} \biggr] ~-~ \frac{ (\delta\eta) }{2\eta_1} \biggl[\frac{\delta\eta }{\eta_1} \cdot \mathcal{A}_\mathrm{env} - 2\phi_1\biggr] \biggl\{3\biggl(\frac{\gamma_c}{\gamma_e} -1\biggr) + \frac{\gamma_c}{\gamma_e} \cdot \frac{d\ln x}{d\ln\xi}\biggr|_\mathrm{interface} \biggr\} \, .$
# Regroup
## Foundation
In an accompanying discussion, we derived the so-called linear adiabatic wave equation (LAWE),
$~ \frac{d^2x}{dr_0^2} + \biggl[\frac{4}{r_0} - \biggl(\frac{g_0 \rho_0}{P_0}\biggr) \biggr] \frac{dx}{dr_0} + \biggl(\frac{\rho_0}{\gamma_\mathrm{g} P_0} \biggr)\biggl[\omega^2 + (4 - 3\gamma_\mathrm{g})\frac{g_0}{r_0} \biggr] x = 0$
whose solution gives eigenfunctions that describe various radial modes of oscillation in spherically symmetric, self-gravitating fluid configurations. Assuming that the underlying equilibrium structure is that of a bipolytrope having $~(n_c, n_e) = (1, 5)$, it makes sense to adopt the normalizations used when defining the equilibrium structure, namely,
$~\rho^*$ $~\equiv$ $~\frac{\rho_0}{\rho_c}$ ; $~r^*$ $~\equiv$ $~\frac{r_0}{(K_c/G)^{1/2}}$ $~P^*$ $~\equiv$ $~\frac{P_0}{K_c\rho_c^{2}}$ ; $~M_r^*$ $~\equiv$ $~\frac{M(r_0)}{\rho_c (K_c/G)^{3/2}}$ $~H^*$ $~\equiv$ $~\frac{H}{K_c\rho_c}$ .
We note as well that,
$~g_0$ $~=$ $~\frac{GM(r_0)}{r_0^2}$ $~=$ $~ G \biggl[ M_r^* \rho_c \biggl( \frac{K_c}{G}\biggr)^{3 / 2} \biggr] \biggl[ r^*\biggl( \frac{K_c}{G}\biggr)^{1 / 2} \biggr]^{-2}$ $~=$ $~ \frac{ M_r^*}{(r^*)^2}\biggl[ G\rho_c \biggl( \frac{K_c}{G}\biggr)^{1 / 2} \biggr] \, .$
Hence, multiplying the LAWE through by $~(K_c/G)$ gives,
$~0$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl[\frac{4}{r^*} -\biggl( \frac{K_c}{G} \biggr)^{1 / 2}\biggl(\frac{g_0 \rho_0}{P_0}\biggr) \biggr] \frac{dx}{dr*} + \biggl( \frac{K_c}{G} \biggr)\biggl(\frac{\rho_0}{\gamma_\mathrm{g} P_0} \biggr)\biggl[\omega^2 + (4 - 3\gamma_\mathrm{g})\frac{g_0}{r_0} \biggr] x$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl\{ \frac{4}{r^*} -\biggl( \frac{K_c}{G} \biggr)^{1 / 2}\biggl(\frac{\rho_c \rho^*}{P^* K_c \rho_c^2}\biggr)\frac{ M_r^*}{(r^*)^2}\biggl[ G\rho_c \biggl( \frac{K_c}{G}\biggr)^{1 / 2} \biggr] \biggr\} \frac{dx}{dr*} + \biggl( \frac{K_c}{G} \biggr)\biggl(\frac{\rho^*\rho_c}{\gamma_\mathrm{g} P^* K_c \rho_c^2} \biggr)\biggl\{ \omega^2 + (4 - 3\gamma_\mathrm{g})\frac{1}{r^*} \biggl(\frac{G}{K_c}\biggr)^{1 / 2}\frac{ M_r^*}{(r^*)^2}\biggl[ G\rho_c \biggl( \frac{K_c}{G}\biggr)^{1 / 2} \biggr]\biggr\} x$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl\{ \frac{4}{r^*} -\biggl(\frac{\rho^*}{P^*}\biggr)\frac{ M_r^*}{(r^*)^2}\biggr\} \frac{dx}{dr*} + \biggl( \frac{1}{\gamma_\mathrm{g}G\rho_c} \biggr)\biggl(\frac{\rho^*}{ P^* } \biggr)\biggl\{ \omega^2 + (4 - 3\gamma_\mathrm{g})\frac{1}{r^*} \frac{ M_r^*}{(r^*)^2}\biggl[ G\rho_c \biggr]\biggr\} x$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl\{ \frac{4}{r^*} -\biggl(\frac{\rho^*}{P^*}\biggr)\frac{ M_r^*}{(r^*)^2}\biggr\} \frac{dx}{dr*} + \biggl(\frac{\rho^*}{ P^* } \biggr)\biggl\{ \frac{\omega^2}{\gamma_\mathrm{g} G\rho_c} + \biggl(\frac{4}{\gamma_\mathrm{g}} - 3\biggr)\frac{1}{r^*} \frac{ M_r^*}{(r^*)^2}\biggr\} x$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl\{ 4 -\biggl(\frac{\rho^*}{P^*}\biggr)\frac{ M_r^*}{(r^*)}\biggr\}\frac{1}{r^*} \frac{dx}{dr*} + \biggl(\frac{\rho^*}{ P^* } \biggr)\biggl\{ \frac{2\pi \sigma_c^2}{3\gamma_\mathrm{g}} ~-~\frac{\alpha_\mathrm{g} M_r^*}{(r^*)^3}\biggr\} x \, .$
## Profile
Now, referencing the derived bipolytropic model profile, we should incorporate the following relations:
| Variable | Throughout the Core ($0 \le r^* \le \xi_i/\sqrt{2\pi}$) | Throughout the Envelope† ($\xi_i/\sqrt{2\pi} \le r^* \le \xi_i e^{2(\pi - \Delta_i)}/\sqrt{2\pi}$) |
| --- | --- | --- |
| radial coordinate | $\xi = \sqrt{2\pi}~r^*$ | $\eta = ( \mu_e/\mu_c ) (2\pi/3)^{1/2}~r^*$ |
| $~\rho^*$ | $\sin\xi/\xi$ | $( \mu_e/\mu_c )\, \theta_i [\phi(\eta)]^5$ |
| $~P^*$ | $(\sin\xi/\xi)^2$ | $\theta^{2}_i [\phi(\eta)]^{6}$ |
| $~M_r^*$ | $(2/\pi)^{1/2} (\sin\xi - \xi\cos\xi)$ | $( \mu_e/\mu_c )^{-2} (2\cdot 3^3/\pi)^{1/2}\, \theta_i (-\eta^2\, d\phi/d\eta)$ |

†In order to obtain the various envelope profiles (plotted for $\xi_i = 0.5$, $1.0$, and $3.0$), it is necessary to evaluate $~\phi(\eta)$ and its first derivative using the information presented in Step 6 of our accompanying discussion.
Therefore, throughout the core we have,
$~\frac{\rho^*}{P^*}$ $~=$ $~\frac{\xi}{\sin\xi} \, ;$ $~\frac{M_r^*}{r^*}$ $~=$ $~\frac{\sqrt{2\pi}}{\xi}\biggl( \frac{2}{\pi}\biggr)^{1/2} (\sin\xi - \xi\cos\xi) = \frac{2\sin\xi}{\xi} (1 - \xi\cot\xi) \, .$
In which case the governing LAWE throughout the core is,
$~0$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl\{ 4 -2(1-\xi\cot\xi )\biggr\}\frac{1}{r^*} \frac{dx}{dr*} + \frac{\xi}{\sin\xi} \biggl\{ \frac{2\pi \sigma_c^2}{3\gamma_\mathrm{g}} ~-~\alpha_\mathrm{g}~\frac{4\pi \sin\xi}{\xi^3} (1 - \xi\cot\xi) \biggr\} x$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl\{ 4 -2(1-\xi\cot\xi )\biggr\}\frac{1}{r^*} \frac{dx}{dr*} + \frac{2\pi \xi}{\sin\xi} \biggl\{ \frac{\sigma_c^2}{3\gamma_\mathrm{g}} ~+~\frac{2\alpha_\mathrm{g}}{\xi^3} \biggl(\xi\cos\xi - \sin\xi \biggr) \biggr\} x \, .$
Next, throughout the envelope we have,
$~\frac{\rho^*}{P^*}$ $~=$ $~ \biggl( \frac{\mu_e}{\mu_c} \biggr) \theta_i [\phi(\eta)]^5 \biggl\{\theta^{2}_i [\phi(\eta)]^{6}\biggr\}^{-1} = \biggl( \frac{\mu_e}{\mu_c} \biggr) \frac{1}{\theta_i \phi(\eta) } \, ;$ $~\frac{M_r^*}{r^*}$ $~=$ $~ \biggl( \frac{\mu_e}{\mu_c} \biggr)^{-2} \biggl( \frac{2\cdot 3^3 }{\pi} \biggr)^{1/2} \theta_i \biggl(-\eta^2 \frac{d\phi}{d\eta} \biggr) \biggl\{ \frac{1}{\eta} \biggl( \frac{\mu_e}{\mu_c} \biggr) \biggl(\frac{2\pi}{3}\biggr)^{1 / 2} \biggr\} = 6\biggl( \frac{\mu_e}{\mu_c} \biggr)^{-1} \theta_i \biggl(-\eta \frac{d\phi}{d\eta} \biggr)$
So, the governing LAWE throughout the envelope is,
$~0$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl\{ 4 - 6\biggl( \frac{\mu_e}{\mu_c} \biggr)^{-1} \theta_i \biggl(-\eta \frac{d\phi}{d\eta} \biggr)\biggl( \frac{\mu_e}{\mu_c} \biggr) \frac{1}{\theta_i \phi(\eta) } \biggr\}\frac{1}{r^*} \frac{dx}{dr*} + \biggl( \frac{\mu_e}{\mu_c} \biggr) \frac{1}{\theta_i \phi(\eta) } \biggl\{ \frac{2\pi \sigma_c^2}{3\gamma_\mathrm{g}} ~-~6\alpha_\mathrm{g}~\biggl( \frac{\mu_e}{\mu_c} \biggr)^{-1} \theta_i \biggl(-\eta \frac{d\phi}{d\eta} \biggr) \biggl[ \frac{1}{\eta} \biggl( \frac{\mu_e}{\mu_c} \biggr) \biggl(\frac{2\pi}{3}\biggr)^{1 / 2} \biggr]^2 \biggr\} x$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl[ 4 - 6 \biggl(-\frac{d\ln\phi}{d\ln\eta} \biggr) \biggr] \frac{1}{r^*} \frac{dx}{dr*} + \biggl( \frac{\mu_e}{\mu_c} \biggr) \frac{1}{\theta_i \phi(\eta) } \biggl\{ \frac{2\pi \sigma_c^2}{3\gamma_\mathrm{g}} ~-~6\alpha_\mathrm{g}~\biggl( \frac{\mu_e}{\mu_c} \biggr) \frac{ \theta_i}{\eta} \biggl(- \frac{d\phi}{d\eta} \biggr) \biggl(\frac{2\pi}{3}\biggr) \biggr\} x$ $~=$ $~ \frac{d^2x}{dr*^2} + \biggl[ 4 - 6 \biggl(-\frac{d\ln\phi}{d\ln\eta} \biggr) \biggr] \frac{1}{r^*} \frac{dx}{dr*} + \frac{2\pi}{3}\biggl( \frac{\mu_e}{\mu_c} \biggr)^2\frac{ 1}{\eta^2} \biggl\{ \biggl( \frac{\mu_e}{\mu_c} \biggr)^{-1} \biggl( \frac{\sigma_c^2}{\gamma_\mathrm{g}}\biggr) \frac{\eta^2}{\theta_i \phi(\eta) } ~-~6\alpha_\mathrm{g}~\biggl(- \frac{d\ln \phi}{d\ln\eta} \biggr) \biggr\} x \, .$
## Model 10
As we have reviewed in an accompanying discussion, equilibrium Model 10 from Murphy & Fiedler (1985, Proc. Astr. Soc. of Australia, 6, 219) is defined by setting $~(\xi_i, m) = (2.5646, 1)$. Drawing directly from our reproduction of their Table 1, we see that a few relevant structural parameters of Model 10 are,
• $~\xi_s = 6.5252876$
• $~r_i/R = \xi_i/\xi_s = 0.39302482$
• $~\rho_c/\bar\rho = 34.346$
• $~M_\mathrm{env}/M_\mathrm{tot} = 5.89 \times 10^{-4}$
Here we list a few other model parameter values that will aid in our attempt to correctly integrate the LAWE to find various radial oscillation eigenvectors.
A Sampling of Model 10's Equilibrium Parameter Values†

| GridLine | r/R | ξ | η | Δ | φ | -dφ/dη | r* | ρ* | P* | M_r* | g₀* ≡ M_r*/(r*)² |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 25 | 0.12093071 | 0.789108 | | | | | 0.31480842 | 0.89940188 | 0.80892374 | 0.122726799 | 1.23835945 |
| 40 | 0.19651241 | 1.2823 | | | | | 0.51156369 | 0.74761972 | 0.55893525 | 0.473819194 | 1.81056130 |
| 79 | 0.393025 | 2.5646 | | | | | 1.02312737 | 0.21270605 | 0.04524386 | 2.150231108 | 2.05411964 |
| 79 | 0.393025 | | 1.4806725 | 2.6746514 | 1.000000 | 1.112155 | 1.02312737 | 0.21270605 | 0.04524386 | 2.15023111 | 2.0541196 |
| 100 | 0.49883919 | | 1.8793151 | 2.7938569 | 0.6505914 | 0.69070815 | 1.2985847 | 0.0247926 | 0.0034309 | 2.15127319 | 1.2757189 |
| 150 | 0.7507782 | | 2.8284641 | 2.9982701 | 0.2149684 | 0.30495637 | 1.95443562 | 9.7646E-05 | 4.4649E-06 | 2.15149752 | 0.563246 |
| 199 | 0.9976784 | | 3.7586302 | 3.1404305 | 0.00150695 | 0.17269514 | 2.59716948 | 1.653E-15 | 5.2984E-19 | 2.15149876 | 0.31896316 |

†Our chosen (uniform) grid spacing is $~\delta r/R = \tfrac{1}{78}( r_i/R ) \approx 0.00503878$; as a result, the center is at zone 1, the interface is at grid line 79, and the surface is just beyond grid line 199.
## Numerical Integration
### General Approach
Here, we begin by recognizing that the 2nd-order ODE that must be integrated to obtain the desired eigenvectors has the generic form,
$~0$ $~=$ $~ x'' + \frac{\mathcal{H}}{r^*} x' + \mathcal{K}x \, ,$
where,
$~x'$ $~=$ $~\frac{dx}{dr^*}$ and $~x''$ $~=$ $~\frac{d^2x}{d(r^*)^2} \, .$
Adopting the same approach as before when we integrated the LAWE for pressure-truncated polytropes, we will enlist the finite-difference approximations,
$~x'$ $~\approx$ $~ \frac{x_+ - x_-}{2\delta r^*}$ and $~x''$ $~\approx$ $~ \frac{x_+ -2x_j + x_-}{(\delta r^*)^2} \, .$
The finite-difference representation of the LAWE is, therefore,
$~\frac{x_+ -2x_j + x_-}{(\delta r^*)^2}$ $~=$ $~ -~ \frac{\mathcal{H}}{r^*} \biggl[ \frac{x_+ - x_-}{2\delta r^*} \biggr] ~-~ \mathcal{K}x_j$ $~\Rightarrow ~~~ x_+ -2x_j + x_-$ $~=$ $~ -~ \frac{\delta r^*}{2r^*} \biggl[ x_+ - x_- \biggr]\mathcal{H} ~-~ (\delta r^*)^2\mathcal{K}x_j$ $~\Rightarrow ~~~ x_{j+1} \biggl[1 + \biggl( \frac{\delta r^*}{2r^*}\biggr) \mathcal{H} \biggr]$ $~=$ $~ \biggl[ 2 - (\delta r^*)^2\mathcal{K}\biggr] x_j ~-~\biggl[ 1 - \biggl( \frac{\delta r^*}{2r^*} \biggr) \mathcal{H} \biggr]x_{j-1} \, .$
In what follows we will also find it useful to rewrite $~\mathcal{K}$ in the form,
$~\mathcal{K} ~\rightarrow ~\biggl(\frac{\sigma_c^2}{\gamma_\mathrm{g}}\biggr) \mathcal{K}_1 - \alpha_\mathrm{g} \mathcal{K}_2 \, .$
Case A: From the above Foundation discussion, the relevant coefficient expressions for all regions of the configuration are,
$~\mathcal{H}$ $~\equiv$ $~ \biggl\{ 4 -\biggl(\frac{\rho^*}{P^*}\biggr)\frac{ M_r^*}{(r^*)}\biggr\}$ , $~\mathcal{K}_1$ $~\equiv$ $~ \frac{2\pi }{3}\biggl(\frac{\rho^*}{ P^* } \biggr)$ and $~\mathcal{K}_2$ $~\equiv$ $~ \biggl(\frac{\rho^*}{ P^* } \biggr)\frac{M_r^*}{(r^*)^3} \, .$
Case B: Alternatively, immediately following the above Profile discussion, the relevant coefficient expressions for the core are,
$~\mathcal{H}$ $~\equiv$ $~ \biggl\{ 4 -2(1-\xi\cot\xi)\biggr\}$ , $~\mathcal{K}_1$ $~\equiv$ $~ \frac{2\pi }{3}\biggl(\frac{\xi}{ \sin\xi} \biggr)$ and $~\mathcal{K}_2$ $~\equiv$ $~ \frac{4\pi }{\xi^2 \sin\xi} \biggl(\sin\xi - \xi\cos\xi \biggr) \, ;$
while the coefficient expressions for the envelope are,
$~\mathcal{H}$ $~=$ $~ \biggl\{ 4 - 6 \biggl(-\frac{d\ln\phi}{d\ln\eta} \biggr) \biggr\}$ , $~\mathcal{K}_1$ $~=$ $~ \frac{2\pi}{3}\biggl( \frac{\mu_e}{\mu_c} \biggr) \biggl\{ \frac{1}{\theta_i \phi(\eta) } \biggr\}$ and $~\mathcal{K}_2$ $~=$ $~ \frac{12\pi}{3}\biggl( \frac{\mu_e}{\mu_c} \biggr)^2\frac{ 1}{\eta^2} \biggl(- \frac{d\ln \phi}{d\ln\eta} \biggr) \, .$
| GridLine | r/R | ξ | η | Case A: H | Case A: K₁ | Case A: K₂ | Case B: H | Case B: K₁ | Case B: K₂ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 25 | 0.12093071 | 0.789108 | | 3.566549 | 2.328653 | 4.373676 | 3.566549 | 2.328653 | 4.373676 |
| 40 | 0.19651241 | 1.2823 | | 2.761112 | 2.801418 | 4.734049 | 2.761112 | 2.801418 | 4.734049 |
| 79 | 0.393025 | 2.5646 | | -5.880425 | 9.846430 | 9.4387879 | -5.880424 | 9.846430 | 9.438787 |
| 79 | 0.393025 | | 1.4806725 | -5.880425 | 9.846430 | 9.4387879 | -5.880424 | 9.846430 | 9.438787 |
| 100 | 0.49883919 | | 1.8793151 | -7.971244 | 15.134659 | 7.099025 | -7.971184 | 15.134583 | 7.098989 |
| 150 | 0.7507782 | | 2.8284641 | -2.00748E+01 | 4.58038E+01 | 6.30260 | -2.00749E+01 | 4.58041E+01 | 6.30264 |
| 199 | 0.9976784 | | 3.7586302 | -2.58045E+03 | 6.53411E+03 | 3.83150E+02 | -2.58041E+03 | 6.53401E+03 | 3.83144E+02 |
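As a quick cross-check of the Case B core entries in this table, the three coefficients can be evaluated directly from the expressions above. The following is a sketch with our own function name, not code from the source.

```python
import numpy as np

def core_coefficients_caseB(xi):
    """Case B coefficients H, K1, K2 for the n = 1 core."""
    H = 4.0 - 2.0 * (1.0 - xi / np.tan(xi))
    K1 = (2.0 * np.pi / 3.0) * xi / np.sin(xi)
    K2 = 4.0 * np.pi * (np.sin(xi) - xi * np.cos(xi)) / (xi**2 * np.sin(xi))
    return H, K1, K2

# Grid line 25 of the table above (xi = 0.789108); compare with that row's Case B values.
print(core_coefficients_caseB(0.789108))
```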
### Special Handling at the Center
In order to kick-start the integration, we set the displacement function value to $~x_1 = 1$ at the center of the configuration $~(\xi_1 = 0)$, then draw on the derived power-series expression to determine the value of the displacement function at the first radial grid line, $~\xi_2 = \delta\xi$, away from the center. Specifically, we set,
$~ x_2$ $~=$ $~ x_1 \biggl[ 1 - \frac{(n+1) \mathfrak{F} (\delta\xi)^2}{60} \biggr] \, .$
### Special Handling at the Interface
Integrating outward from the center, the general approach will work up through the determination of $~x_{j+1}$ when "j+1" refers to the interface location. In order to properly transition from the core to the envelope, we need to determine the value of the slope at this interface location. Let's do this by setting j = i, then projecting forward to what $~x_+$ would be — that is, to what the amplitude just beyond the interface would be — if the core were to be extended one more zone. Then, the slope at the interface (as viewed from the perspective of the core) will be,
$~x'_i\biggr|_\mathrm{core}$ $~\approx$ $~ \frac{1}{2\delta r^*} \biggl\{ x_+ - x_{i-1} \biggr\}$ $~=$ $~ -\frac{x_{i-1}}{2\delta r^*} + \frac{1}{2\delta r^*} \biggl\{ \biggl[ 2 - (\delta r^*)^2\mathcal{K}\biggr] x_i ~-~\biggl[ 1 - \biggl( \frac{\delta r^*}{2r^*} \biggr) \mathcal{H} \biggr]x_{i-1} \biggr\}\biggl[1 + \biggl( \frac{\delta r^*}{2r^*}\biggr) \mathcal{H} \biggr]^{-1}$ $~=$ $~ \frac{1}{2\delta r^*} \biggl\{ \biggl[ 2 - (\delta r^*)^2\mathcal{K}\biggr] x_i ~-~\biggl[ 1 - \biggl( \frac{\delta r^*}{2r^*} \biggr) \mathcal{H} \biggr]x_{i-1} ~-~\biggl[1 + \biggl( \frac{\delta r^*}{2r^*}\biggr) \mathcal{H} \biggr]x_{i-1} \biggr\}\biggl[1 + \biggl( \frac{\delta r^*}{2r^*}\biggr) \mathcal{H} \biggr]^{-1}$ $~=$ $~ \frac{1}{2\delta r^*} \biggl\{ \biggl[ 2 - (\delta r^*)^2\mathcal{K}\biggr] x_i ~-~2x_{i-1} \biggr\}\biggl[1 + \biggl( \frac{\delta r^*}{2r^*}\biggr) \mathcal{H} \biggr]^{-1}$
Conversely, as viewed from the envelope, if we assume that we know $~x_i$ and $~x'_i$, we can determine the amplitude, $~x_{i+1}$, at the first zone beyond the interface as follows:
$~x_-$ $~\approx$ $~ x_{i+1} - 2\delta r^*\cdot x'_i\biggr|_\mathrm{env}$ $~\Rightarrow ~~~ x_{i+1} \biggl[1 + \biggl( \frac{\delta r^*}{2r^*}\biggr) \mathcal{H} \biggr]$ $~=$ $~ \biggl[ 2 - (\delta r^*)^2\mathcal{K}\biggr] x_i ~-~\biggl[ 1 - \biggl( \frac{\delta r^*}{2r^*} \biggr) \mathcal{H} \biggr] \biggl[ x_{i+1} - 2\delta r^*\cdot x'_i\biggr|_\mathrm{env} \biggr]$ $~\Rightarrow ~~~ x_{i+1} \biggl[1 + \biggl( \frac{\delta r^*}{2r^*}\biggr) \mathcal{H} \biggr] ~+~ \biggl[ 1 - \biggl( \frac{\delta r^*}{2r^*} \biggr) \mathcal{H} \biggr] x_{i+1}$ $~=$ $~ \biggl[ 2 - (\delta r^*)^2\mathcal{K}\biggr] x_i ~+~ \biggl[ 1 - \biggl( \frac{\delta r^*}{2r^*} \biggr) \mathcal{H} \biggr] 2\delta r^*\cdot x'_i\biggr|_\mathrm{env}$ $~\Rightarrow ~~~ x_{i+1}$ $~=$ $~ \biggl[ 1 - \tfrac{1}{2}(\delta r^*)^2\mathcal{K}\biggr] x_i ~+~ \biggl[ 1 - \biggl( \frac{\delta r^*}{2r^*} \biggr) \mathcal{H} \biggr] \delta r^*\cdot x'_i\biggr|_\mathrm{env}$
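Collecting these finite-difference expressions, a minimal Python sketch of the generic step, the core-side interface slope, and the first envelope zone might look like the following (our own helper names; the pair H, K is whichever of the Case A or Case B coefficients applies at the zone in question, with K = (σ_c²/γ_g)·K₁ − α_g·K₂ as defined above):

```python
def lawe_step(x_prev, x_here, r, dr, H, K):
    """Advance the generic recurrence one zone: returns x_{j+1} from x_{j-1}, x_j."""
    a = dr / (2.0 * r)
    return ((2.0 - dr**2 * K) * x_here - (1.0 - a * H) * x_prev) / (1.0 + a * H)

def core_slope_at_interface(x_prev, x_here, r, dr, H, K):
    """Slope x' at the interface, as seen from the core, by projecting one zone past it."""
    a = dr / (2.0 * r)
    return ((2.0 - dr**2 * K) * x_here - 2.0 * x_prev) / (2.0 * dr * (1.0 + a * H))

def first_envelope_zone(x_here, xprime, r, dr, H, K):
    """Amplitude at the first zone beyond the interface, given x_i and the slope x'_i."""
    a = dr / (2.0 * r)
    return (1.0 - 0.5 * dr**2 * K) * x_here + (1.0 - a * H) * dr * xprime
```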
## Eigenvectors
Keep in mind that, for all models, we expect that, at the surface, the logarithmic derivative of each proper eigenfunction will be,
$~\frac{d\ln x}{d\ln r^*}\biggr|_\mathrm{surf}$ $~=$ $~\frac{\Omega^2}{\gamma} - \alpha \, .$
Also, keep in mind that, for Model 10 $~(\xi_i = 2.5646)$:
$~\frac{r_i}{R}$ $~=$ $~0.39302482$ , $~\frac{\rho_c}{\bar\rho}$ $~=$ $~34.3460405$
Our Determinations for Model 10

| Mode | σ_c² | Ω² ≡ (σ_c²/2)(ρ_c/ρ̄) | x_surf | d ln x/d ln r* at surface (expected) | (measured) | (r/R)_1 | (1 - M_r/M_tot)_1 | (r/R)_2 | (1 - M_r/M_tot)_2 | (r/R)_3 | (1 - M_r/M_tot)_3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 (Fundamental) | 0.92813095170326 | 15.93881161 | +85.17 | 8.963286966 | 8.963085 | n/a | n/a | n/a | n/a | n/a | n/a |
| 2 | 1.237156768978 | 21.24571822 | -610 | 12.14743093 | 12.147337 | 0.5724 | 3.05E-05 | n/a | n/a | n/a | n/a |
| 3 | 1.8656033984 | 32.0380449 | +3225 | 18.62282676 | 18.6228 | 0.4845 | 1.35E-04 | 0.787 | 2.05E-07 | n/a | n/a |
| 4 | 2.65901504799 | 45.66331921 | -9410 | 26.79799153 | 26.797977 | 0.4459 | 2.620E-04 | 0.7096 | 1.834E-06 | 0.8632 | 1.189E-08 |
For Model 17 $~(\xi_i = 3.0713)$:
$~\frac{r_i}{R}$ $~=$ $~0.93276717$ , $~\frac{\rho_c}{\bar\rho}$ $~=$ $~3.79693903$
Our Determinations for Model 17

| Mode | σ_c² | Ω² ≡ (σ_c²/2)(ρ_c/ρ̄) | x_surf | d ln x/d ln r* at surface (expected) | (measured) | (r/R)_1 | (1 - M_r/M_tot)_1 | (r/R)_2 | (1 - M_r/M_tot)_2 | (r/R)_3 | (1 - M_r/M_tot)_3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 (Fundamental) | 1.149837904 | 2.182932207 | +1.275 | 0.7097593 | 0.7097550 | n/a | n/a | n/a | n/a | n/a | n/a |
| 2 | 7.34212930615 | 13.93880866 | -2.491 | 7.763285 | 7.763244 | 0.7215 | 0.24006 | n/a | n/a | n/a | n/a |
| 3 | 16.345072567 | 31.03062198 | +4.33 | 18.01837 | 18.01826 | 0.5806 | 0.5027 | 0.848 | 0.0541 | n/a | n/a |
| 4 | 27.746934203 | 52.6767087 | -9.1 | 31.0060 | 31.0058 | 0.4859 | 0.6737 | 0.7429 | 0.1974 | 0.8957 | 0.0171 |
Numerical Values for Some Selected $~(n_c, n_e) = (1, 5)$ Bipolytropes [to be compared with Table 1 of Murphy & Fiedler (1985)]

| MODEL | Source | r_i/R | Ω₀² | Ω₁² | (r/R)_1 | (1 - M_r/M_tot)_1 |
| --- | --- | --- | --- | --- | --- | --- |
| 10 | MF85 | 0.393 | 15.9298 | 21.2310 | 0.573 | 1.00E-03 |
| 10 | Here | 0.39302 | 15.93881161 | 21.24571822 | 0.5724 | 3.05E-05 |
| 17 | MF85 | 0.933 | 2.1827 | 13.9351 | 0.722 | 0.232 |
| 17 | Here | 0.93277 | 2.182932207 | 13.93880866 | 0.7215 | 0.24006 |
## Try Splitting Analysis Into Separate Core and Envelope Components
### Core:
Given that, $~\sqrt{2\pi}~r^* = \xi$, let's multiply the LAWE through by $~(2\pi)^{-1}$. This gives,
$~0$ $~=$ $~ \frac{d^2x}{d\xi^2} + \biggl\{ 4 -\biggl(\frac{\rho^*}{P^*}\biggr)\frac{ M_r^*}{(r^*)}\biggr\}\frac{1}{\xi} \cdot \frac{dx}{d\xi} + \frac{1}{2\pi}\biggl(\frac{\rho^*}{ P^* } \biggr)\biggl\{ \frac{2\pi \sigma_c^2}{3\gamma_\mathrm{g}} ~-~\frac{\alpha_\mathrm{g} M_r^*}{(r^*)^3}\biggr\} x \, .$
Specifically for the core, therefore, the finite-difference representation of the LAWE is,
$~\frac{x_+ -2x_j + x_-}{(\delta \xi)^2} ~=~ -~ \frac{\mathcal{H}}{\xi} \biggl[ \frac{x_+ - x_-}{2\delta \xi} \biggr] ~-~ \biggl[ \frac{\mathcal{K}}{2\pi} \biggr]x_j$

$~\Rightarrow ~~~ x_+ -2x_j + x_- ~=~ -~ \frac{\delta \xi}{2\xi} \biggl[ x_+ - x_- \biggr]\mathcal{H} ~-~ (\delta \xi)^2 \biggl[ \frac{\mathcal{K}}{2\pi} \biggr] x_j$

$~\Rightarrow ~~~ x_{j+1} \biggl[1 + \biggl( \frac{\delta \xi}{2\xi}\biggr) \mathcal{H} \biggr] ~=~ \biggl[ 2 - (\delta \xi)^2\biggl( \frac{\mathcal{K}}{2\pi} \biggr) \biggr] x_j ~-~\biggl[ 1 - \biggl( \frac{\delta \xi}{2\xi} \biggr) \mathcal{H} \biggr]x_{j-1} \, .$
This also means that, as viewed from the perspective of the core, the slope at the interface is
$~\biggl[ \frac{dx}{d\xi}\biggr]_\mathrm{interface}$ $~=$ $~ \frac{1}{2\delta \xi} \biggl\{ \biggl[ 2 - (\delta \xi)^2 \biggl( \frac{\mathcal{K}}{2\pi} \biggr)\biggr] x_i ~-~2x_{i-1} \biggr\}\biggl[1 + \biggl( \frac{\delta \xi}{2\xi}\biggr) \mathcal{H} \biggr]^{-1} \, .$
### Envelope:
Given that,
$~\biggl( \frac{\mu_e}{\mu_c} \biggr) \biggl(\frac{2\pi}{3}\biggr)^{1 / 2}~r^* = \eta \, ,$
let's multiply the LAWE through by $~(3/2\pi)( \mu_e/\mu_c)^{-2}$. This gives,
$~0$ $~=$ $~ \frac{d^2x}{d\eta^2} + \biggl\{ 4 -\biggl(\frac{\rho^*}{P^*}\biggr)\frac{ M_r^*}{(r^*)}\biggr\}\frac{1}{\eta} \cdot \frac{dx}{d\eta} + \frac{3}{2\pi} \biggl( \frac{\mu_e}{\mu_c} \biggr)^{-2} \biggl(\frac{\rho^*}{ P^* } \biggr)\biggl\{ \frac{2\pi \sigma_c^2}{3\gamma_\mathrm{g}} ~-~\frac{\alpha_\mathrm{g} M_r^*}{(r^*)^3}\biggr\} x \, .$
Specifically for the envelope, therefore, the finite-difference representation of the LAWE is,
$~\frac{x_+ -2x_j + x_-}{(\delta \eta)^2} ~=~ -~ \frac{\mathcal{H}}{\eta} \biggl[ \frac{x_+ - x_-}{2\delta \eta} \biggr] ~-~ \biggl( \frac{\mu_e}{\mu_c} \biggr)^{-2}\biggl[ \frac{3\mathcal{K}}{2\pi} \biggr]x_j$

$~\Rightarrow ~~~ x_+ -2x_j + x_- ~=~ -~ \frac{\delta \eta}{2\eta} \biggl[ x_+ - x_- \biggr]\mathcal{H} ~-~ (\delta \eta)^2 \biggl( \frac{\mu_e}{\mu_c} \biggr)^{-2}\biggl[ \frac{3\mathcal{K}}{2\pi} \biggr] x_j$

$~\Rightarrow ~~~ x_{j+1} \biggl[1 + \biggl( \frac{\delta \eta}{2\eta}\biggr) \mathcal{H} \biggr] ~=~ \biggl[ 2 - (\delta \eta)^2 \biggl( \frac{\mu_e}{\mu_c} \biggr)^{-2} \biggl( \frac{3\mathcal{K}}{2\pi} \biggr) \biggr] x_j ~-~\biggl[ 1 - \biggl( \frac{\delta \eta}{2\eta} \biggr) \mathcal{H} \biggr]x_{j-1} \, .$
This also means that, once we know the slope at the interface (see immediately below), the amplitude at the first zone outside of the interface will be given by the expression,
$~x_{i+1}$ $~=$ $~ \biggl[ 1 - \tfrac{1}{2}(\delta \eta)^2 \biggl( \frac{\mu_e}{\mu_c} \biggr)^{-2} \biggl( \frac{3\mathcal{K}}{2\pi} \biggr)\biggr] x_i ~+~ \biggl[ 1 - \biggl( \frac{\delta \eta}{2\eta} \biggr) \mathcal{H} \biggr] \delta \eta \cdot \biggl[ \frac{dx}{d\eta} \biggr]_\mathrm{interface} \, .$
### Interface
If we consider only cases where $~\gamma_e = \gamma_c$, then at the interface we expect,
$~\frac{d\ln x}{d\ln r^*} ~=~ \frac{d\ln x}{d\ln \xi} = \frac{d\ln x}{d\ln \eta}$

$~\Rightarrow ~~~ r^*\frac{dx}{d r^*} ~=~ \xi \frac{dx}{d \xi} = \eta \frac{d x}{d \eta}$

$~\Rightarrow ~~~ \frac{dx}{dr^*} ~=~ (2\pi)^{1 / 2}\frac{dx}{d\xi} = \biggl(\frac{\mu_e}{\mu_c}\biggr) \biggl(\frac{2\pi}{3}\biggr)^{1 / 2} \frac{dx}{d\eta} \, .$
Switching at the interface from $~\xi$ to $~\eta$ therefore means that,
$~ \biggl[ \frac{dx}{d\eta}\biggr]_\mathrm{interface}$ $~=$ $~\sqrt{3}\biggl(\frac{\mu_e}{\mu_c}\biggr)^{-1} \biggl[ \frac{dx}{d\xi}\biggr]_\mathrm{interface} \, .$
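Pulling the three pieces above together, here is a minimal numerical sketch (added for illustration; it is not part of the original derivation) of how one might march the core recurrence outward, form the interface slope, convert it to $~dx/d\eta$, and continue through the envelope. The coefficient profiles `Hc`, `Kc`, `He`, `Ke`, the grids, and where they are evaluated near the interface are assumed inputs supplied by the equilibrium model; all names here are hypothetical.

```python
# Sketch of the split core/envelope finite-difference march described above.
import numpy as np

def march_core(xi, Hc, Kc, x0=1.0):
    """March x_{j+1}[1+(dxi/2xi)H] = [2-(dxi)^2 K/(2pi)] x_j - [1-(dxi/2xi)H] x_{j-1}."""
    dxi = xi[1] - xi[0]
    x = np.empty_like(xi)
    x[0] = x[1] = x0                      # crude central condition: x = 1, dx/dxi = 0
    for j in range(1, len(xi) - 1):
        a = dxi / (2.0 * xi[j]) * Hc(xi[j])
        x[j + 1] = ((2.0 - dxi**2 * Kc(xi[j]) / (2.0 * np.pi)) * x[j]
                    - (1.0 - a) * x[j - 1]) / (1.0 + a)
    return x

def interface_to_envelope(x, xi, Hc, Kc, mu_ratio):
    """Slope at the interface as seen from the core, converted to dx/d(eta)."""
    dxi, xi_i = xi[1] - xi[0], xi[-1]
    a = dxi / (2.0 * xi_i) * Hc(xi_i)
    dxdxi = ((2.0 - dxi**2 * Kc(xi_i) / (2.0 * np.pi)) * x[-1] - 2.0 * x[-2]) \
            / (2.0 * dxi * (1.0 + a))
    return np.sqrt(3.0) / mu_ratio * dxdxi          # dx/d(eta) at the interface

def march_envelope(eta, He, Ke, mu_ratio, x_i, dxdeta_i):
    """Continue the march through the envelope, starting from the interface values."""
    deta = eta[1] - eta[0]
    scale = 3.0 / (2.0 * np.pi * mu_ratio**2)
    x = np.empty_like(eta)
    x[0] = x_i
    b = deta / (2.0 * eta[0]) * He(eta[0])
    # First zone outside the interface, using the expression given above.
    x[1] = (1.0 - 0.5 * deta**2 * scale * Ke(eta[0])) * x_i + (1.0 - b) * deta * dxdeta_i
    for j in range(1, len(eta) - 1):
        a = deta / (2.0 * eta[j]) * He(eta[j])
        x[j + 1] = ((2.0 - deta**2 * scale * Ke(eta[j])) * x[j]
                    - (1.0 - a) * x[j - 1]) / (1.0 + a)
    return x
```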
# Begin Our Analysis
## Relevant LAWEs
The LAWE that is relevant to polytropic spheres may be written as,
$~0 = \frac{d^2x}{d\xi^2} + \biggl[ 4 - (n+1) Q \biggr] \frac{1}{\xi} \cdot \frac{dx}{d\xi} + (n+1) \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_g } \biggr) \frac{\xi^2}{\theta} - \alpha Q\biggr] \frac{x}{\xi^2} \, ,$

where:

$~Q(\xi) \equiv - \frac{d\ln\theta}{d\ln\xi} \, ,$ $~\sigma_c^2 \equiv \frac{3\omega^2}{2\pi G\rho_c} \, ,$ and $~\alpha \equiv \biggl(3 - \frac{4}{\gamma_\mathrm{g}}\biggr) \, .$
### Core Layers With n = 5
The LAWE for n = 5 structures is,
$~0$ $~=$ $~ \frac{d^2x}{d\xi^2} + \biggl[ 4 - 6Q_5 \biggr] \frac{1}{\xi} \cdot \frac{dx}{d\xi} + 6 \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_\mathrm{core} } \biggr) \frac{\xi^2}{\theta_5} - \alpha_\mathrm{core} Q_5\biggr] \frac{x}{\xi^2}$
where,
$~Q_5$ $~\equiv$ $~- \frac{d\ln\theta_5}{d\ln\xi} \, .$
From our study of the equilibrium structure of $~(n_c, n_e) = (5, 1)$ bipolytropes, we have,
$~ \theta_5 ~=~ \biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{-1 / 2} \, ; \qquad \frac{d\theta_5}{d\xi} ~=~ - \frac{\xi}{3}\biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{-3/2} \, ;$

$~\Rightarrow~~~ Q_5 = - \frac{\xi}{\theta_5} \cdot \frac{d\theta_5}{d\xi} ~=~ \biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{1 / 2} \cdot \frac{\xi^2}{3}\biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{-3 / 2} ~=~ \frac{\xi^2}{3}\biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{-1} \, .$
Hence, for the core the governing LAWE is,
$~0 ~=~ \frac{d^2x}{d\xi^2} + \biggl\{ 4 - 2\xi^2\biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{-1} \biggr\} \frac{1}{\xi} \cdot \frac{dx}{d\xi} + 6 \biggl\{ \biggl( \frac{\sigma_c^2}{6\gamma_\mathrm{core} } \biggr) \xi^2\biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{1 / 2} - ~\alpha_\mathrm{core} \cdot \frac{\xi^2}{3}\biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{-1} \biggr\} \frac{x}{\xi^2}$

$~=~ \frac{d^2x}{d\xi^2} + \biggl\{ 2 - \xi^2\biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{-1} \biggr\} \frac{2}{\xi} \cdot \frac{dx}{d\xi} + \biggl\{ \biggl( \frac{\sigma_c^2}{\gamma_\mathrm{core} } \biggr) \biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{1 / 2} - ~2\alpha_\mathrm{core} \biggl[ 1 + \frac{1}{3}\xi^2 \biggr]^{-1} \biggr\} x$

$~=~ \frac{d^2x}{d\xi^2} + \biggl\{ 2 - 3\xi^2\biggl[ 3 + \xi^2 \biggr]^{-1} \biggr\} \frac{2}{\xi} \cdot \frac{dx}{d\xi} + \biggl\{ \biggl( \frac{\sigma_c^2}{\sqrt{3}~\gamma_\mathrm{core} } \biggr) \biggl[ 3 + \xi^2 \biggr]^{1 / 2} - ~6\alpha_\mathrm{core} \biggl[ 3 + \xi^2 \biggr]^{-1} \biggr\} x$

$~\Rightarrow ~~~ 0 ~=~ (3 + \xi^2) \frac{d^2x}{d\xi^2} + \biggl\{ 2(3 + \xi^2) - 3\xi^2 \biggr\} \frac{2}{\xi} \cdot \frac{dx}{d\xi} + \biggl\{ \biggl( \frac{\sigma_c^2}{\sqrt{3}~\gamma_\mathrm{core} } \biggr) ( 3 + \xi^2 )^{3 / 2} - ~6\alpha_\mathrm{core} \biggr\} x$

$~=~ (3 + \xi^2) \frac{d^2x}{d\xi^2} + ( 6 - \xi^2 ) \frac{2}{\xi} \cdot \frac{dx}{d\xi} + \biggl[ \biggl( \frac{\sigma_c^2}{\sqrt{3}~\gamma_\mathrm{core} } \biggr) ( 3 + \xi^2 )^{3 / 2} - ~6\alpha_\mathrm{core} \biggr] x \, .$
This exactly matches our derivation performed in the context of pressure-truncated polytropes. When we insert the eigenfunction obtained via a Eureka Moment on 3/6/2017,
$~x ~=~ x_0\biggl[1 - \frac{\xi^2}{15}\biggr] \, ,$

$~\Rightarrow~~~ \frac{dx}{d\xi} ~=~ - \frac{2x_0 \xi}{15} \, ,$

$~\Rightarrow~~~ \frac{d^2x}{d\xi^2} ~=~ - \frac{2x_0 }{15} \, ,$
we obtain,
$~ (3 + \xi^2) \frac{d^2x}{d\xi^2} + ( 6 - \xi^2 ) \frac{2}{\xi} \cdot \frac{dx}{d\xi} + \biggl[ \biggl( \frac{\sigma_c^2}{\sqrt{3}~\gamma_\mathrm{core} } \biggr) ( 3 + \xi^2 )^{3 / 2} - ~6\alpha_\mathrm{core} \biggr] x$

$~=~ -~(3 + \xi^2) \frac{2x_0 }{15} - 2( 6 - \xi^2 ) \frac{2x_0 }{15} + \frac{x_0}{15}\biggl[ \biggl( \frac{\sigma_c^2}{\sqrt{3}~\gamma_\mathrm{core} } \biggr) ( 3 + \xi^2 )^{3 / 2} - ~6\alpha_\mathrm{core} \biggr] (15 - \xi^2)$

$~=~ -\frac{x_0}{15} \biggl\{ 2(3 + \xi^2) + 4( 6 - \xi^2 ) - \biggl[ \biggl( \frac{\sigma_c^2}{\sqrt{3}~\gamma_\mathrm{core} } \biggr) ( 3 + \xi^2 )^{3 / 2} - ~6\alpha_\mathrm{core} \biggr] (15 - \xi^2) \biggr\}$

$~=~ -\frac{x_0}{15} \biggl\{ 30 - 2\xi^2 ~+ ~6\alpha_\mathrm{core} (15 - \xi^2) ~- \biggl[ \biggl( \frac{\sigma_c^2}{\sqrt{3}~\gamma_\mathrm{core} } \biggr) ( 3 + \xi^2 )^{3 / 2} \biggr] (15 - \xi^2) \biggr\} \, .$
The right-hand-side goes to zero if $~\alpha_\mathrm{core} = - 1/3$ and $~\sigma_c = 0$. Also, notice that,
$~ \frac{d\ln x}{d\ln \xi}\biggr|_\mathrm{core} = \frac{\xi}{x}\cdot \frac{dx}{d\xi} \biggr|_\mathrm{core} ~=~ - \frac{2x_0 \xi^2}{15} \biggl\{ x_0\biggl[1 - \frac{\xi^2}{15}\biggr] \biggr\}^{-1}$

$~=~ - \biggl\{ \frac{15}{2\xi^2}\biggl[\frac{15 -\xi^2}{15}\biggr] \biggr\}^{-1} ~=~ - \biggl[\frac{15 -\xi^2}{2\xi^2}\biggr]^{-1} ~=~ 2 \biggl[1 - \frac{15}{\xi^2} \biggr]^{-1} \, .$
So, with $~\gamma_c = 6/5$ and $~\gamma_e = 2$ we need,
$~\frac{d\ln x_\mathrm{env}}{d\ln \xi} \biggr|_{\xi=\xi_i} ~=~ 3\biggl(\frac{\gamma_c}{\gamma_e} -1\biggr) + \frac{\gamma_c}{\gamma_e} \biggl( \frac{d\ln x_\mathrm{core}}{d\ln \xi} \biggr)_{\xi=\xi_i}$

$~=~ 3\biggl(\frac{3}{5} -1\biggr) + \frac{6}{5} \biggl[1 - \frac{15}{\xi_i^2} \biggr]^{-1}$

$~=~ \frac{6}{5} \biggl\{ \biggl[\frac{\xi_i^2 - 15}{\xi_i^2} \biggr]^{-1} - 1 \biggr\} ~=~ \frac{6}{5} \biggl[\frac{15}{\xi_i^2 - 15} \biggr] \, .$
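A quick symbolic check (added here as an illustration, assuming sympy is available) of two of the core results above: that $~x = x_0(1 - \xi^2/15)$ satisfies the core LAWE when $~\sigma_c = 0$ and $~\alpha_\mathrm{core} = -1/3$, and that its logarithmic derivative is $~2[1 - 15/\xi^2]^{-1}$.

```python
# Symbolic verification of the n = 5 core eigenfunction claims above.
import sympy as sp

xi, x0 = sp.symbols('xi x0', positive=True)
x = x0 * (1 - xi**2 / 15)
alpha_core = sp.Rational(-1, 3)

# Core LAWE with sigma_c = 0: only the -6*alpha_core term survives in the bracket.
lawe = (3 + xi**2) * x.diff(xi, 2) + (6 - xi**2) * (2 / xi) * x.diff(xi) - 6 * alpha_core * x
print(sp.simplify(lawe))                                        # -> 0

# Logarithmic derivative matches 2 / (1 - 15/xi^2).
print(sp.simplify(xi * x.diff(xi) / x - 2 / (1 - 15 / xi**2)))  # -> 0
```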
### Envelope Layers With n = 1
And for n = 1 structures the LAWE is,
$~0$ $~=$ $~ \frac{d^2x}{d\eta^2} + \biggl[ 4 - 2 Q_1 \biggr] \frac{1}{\eta} \cdot \frac{dx}{d\eta} + 2 \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_\mathrm{env} } \biggr) \frac{\eta^2}{\theta_1} - \alpha_\mathrm{env} Q_1\biggr] \frac{x}{\eta^2}$
where,
$~Q_1$ $~\equiv$ $~- \frac{d\ln\theta_1}{d\ln\eta} \, .$
As has already been pointed out, above, for n = 1 polytropic spheres, this LAWE becomes,
$~0$ $~=$ $~ \frac{d^2x}{d\eta^2} + \frac{2}{\eta} \biggl[ 1 + \eta\cot\eta \biggr]\frac{dx}{d\eta} + \biggl[ \frac{\gamma_g}{\gamma_\mathrm{env}}\biggl( \omega_k^2 \theta_c \biggr) \frac{\eta}{\sin\eta} + \frac{2 \alpha_\mathrm{env} ( \eta\cos\eta - \sin\eta) }{\eta^2 \sin\eta} \biggr] x \, .$
In a separate chapter, we explain that the analytically defined eigenfunction that satisfies this LAWE — when $~\omega_k^2 = 0$ and $~\alpha_\mathrm{env} = +1$ — is,
$~x_P\biggr|_{n=1}$ $~=$ $~ \frac{3b_e}{\eta^2}\biggl[ 1- \eta \cot\eta \biggr] \, .$
### Summary
For a given choice of the equilibrium model parameters, $~\xi_i$ and $~\mu_e/\mu_c$, we can pull the parameters and profiles of the base equilibrium model from our accompanying chapter on $~(n_c, n_e) = (5, 1)$ bipolytropes. Note, in particular, that:
$~r^* ~=~ \biggl( \frac{3}{2\pi}\biggr)^{1 / 2} \xi \, ,$ for $~0 \le \xi \le \xi_i \, ;$

$~r^* ~=~ \biggl[ \biggl( \frac{\mu_e}{\mu_c}\biggr)^{-1} \frac{1}{\sqrt{2\pi}} \biggl(1 + \frac{\xi_i^2}{3} \biggr) \biggr] \eta \, ,$ for $~\eta_i \le \eta \le \eta_s \, ;$ where

$~\eta_s ~=~ \eta_i + \frac{\pi}{2} + \tan^{-1} \biggl[\frac{1}{\eta_i} - \frac{\xi_i}{\sqrt{3}} \biggr] \, .$
Then, for a choice of the pair of exponents, $~\gamma_\mathrm{core}$ and $~\gamma_\mathrm{env}$, that govern the behavior of adiabatic oscillations, the LAWE to be numerically integrated is:
$~0 ~=~ (3 + \xi^2) \frac{d^2x_\mathrm{core}}{d\xi^2} + ( 6 - \xi^2 ) \frac{2}{\xi} \cdot \frac{dx_\mathrm{core}}{d\xi} + \biggl[ \biggl( \frac{\sigma_c^2}{\sqrt{3}~\gamma_\mathrm{core} } \biggr) ( 3 + \xi^2 )^{3 / 2} - ~6\alpha_\mathrm{core} \biggr] x_\mathrm{core} \, ,$ for $~0 \le \xi \le \xi_i \, ;$

$~0 ~=~ \frac{d^2x_\mathrm{env}}{d\eta^2} + \biggl[ 1 + \eta\cot\eta \biggr] \frac{2}{\eta} \cdot \frac{dx_\mathrm{env}}{d\eta} + 2 \biggl[ \biggl( \frac{\sigma_c^2}{6\gamma_\mathrm{env} } \biggr) \frac{\eta^3}{\sin\eta} - \alpha_\mathrm{env} ( 1 - \eta\cot\eta )\biggr] \frac{x_\mathrm{env}}{\eta^2} \, ,$ for $~\eta_i \le \eta \le \eta_s \, .$
The boundary conditions at the center of the configuration are $~x_\mathrm{core} = 1$ and $~dx_\mathrm{core}/d\xi = 0$.
As described above, the two interface boundary conditions are, $~x_\mathrm{env} = x_\mathrm{core}$, and,
$~\frac{d\ln x_\mathrm{env}}{d\ln \xi} \biggr|_{\xi=\xi_i}$ $~=$ $~3\biggl(\frac{\gamma_\mathrm{core}}{\gamma_\mathrm{env}} -1\biggr) + \frac{\gamma_\mathrm{core}}{\gamma_\mathrm{env}} \biggl( \frac{d\ln x_\mathrm{core}}{d\ln \xi} \biggr)_{\xi=\xi_i} \, .$
Finally, drawing from a related discussion of the surface boundary condition for isolated n = 3 polytropes, at the surface we need,
$~\frac{d\ln x_\mathrm{env}}{d\ln \eta}\biggr|_\mathrm{surface}$ $~=$ $~\frac{1}{\gamma_\mathrm{env}} \biggl( 4 - 3\gamma_\mathrm{env} + \frac{\omega^2 R^3}{GM_\mathrm{tot}}\biggr)$ $~=$ $~\biggl[ \frac{3\omega^2 R^3}{4\pi G \gamma_\mathrm{env} \bar\rho} - \alpha_\mathrm{env} \biggr]$ $~=$ $~\frac{1}{2} \biggl[ \frac{\sigma_c^2}{\gamma_\mathrm{env} } \biggl(\frac{\rho_c}{\bar\rho}\biggr) -2 \alpha_\mathrm{env} \biggr] \, ,$
where, for an $~(n_c, n_e) = (5, 1)$ bipolytrope,
$~\frac{\rho_c}{\bar\rho} ~=~ \biggl(\frac{\mu_e}{\mu_c}\biggr)^{-1} \frac{\eta_s^2}{3A\theta_i^5} \, .$
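To make the recipe concrete, here is a rough shooting-method sketch (added for illustration; it is not the authors' code) that integrates the two LAWEs of this Summary, applies the centre, interface, and surface conditions quoted above, and looks for eigenvalues of $~\sigma_c^2$ as sign changes in the surface-condition mismatch. Every numerical parameter below is an assumed placeholder; substitute the $~\xi_i$, $~\mu_e/\mu_c$, $~\rho_c/\bar\rho$, and adiabatic exponents of the equilibrium model actually being analysed.

```python
# Shooting-method sketch for the two-zone LAWE eigenvalue problem summarised above.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

xi_i    = 1.0                      # interface radius (assumed placeholder)
mu_rat  = 1.0                      # mu_e / mu_c (assumed placeholder)
rho_rat = 10.0                     # rho_c / rho_bar (assumed placeholder)
g_core, g_env = 6.0/5.0, 2.0       # adiabatic exponents, as in the (6/5, 2) case above
a_core, a_env = 3.0 - 4.0/g_core, 3.0 - 4.0/g_env

# Interface and surface locations implied by the r*(xi) and r*(eta) relations above.
eta_i = 3.0*np.sqrt(3.0)*mu_rat*xi_i/(3.0 + xi_i**2)
eta_s = eta_i + 0.5*np.pi + np.arctan(1.0/eta_i - xi_i/np.sqrt(3.0))

def core_rhs(xi, y, sig2):   # (3+xi^2) x'' + (6-xi^2)(2/xi) x' + [...] x = 0
    x, dx = y
    ddx = -((6.0 - xi**2)*(2.0/xi)*dx
            + (sig2/(np.sqrt(3.0)*g_core)*(3.0 + xi**2)**1.5 - 6.0*a_core)*x)/(3.0 + xi**2)
    return [dx, ddx]

def env_rhs(eta, y, sig2):   # x'' + (1+eta*cot)(2/eta) x' + 2[...] x/eta^2 = 0
    x, dx = y
    cot = np.cos(eta)/np.sin(eta)
    ddx = -((1.0 + eta*cot)*(2.0/eta)*dx
            + 2.0*(sig2/(6.0*g_env)*eta**3/np.sin(eta) - a_env*(1.0 - eta*cot))*x/eta**2)
    return [dx, ddx]

def surface_mismatch(sig2):
    # Core: start just off centre with x = 1, dx/dxi = 0 (crude regularisation).
    core = solve_ivp(core_rhs, (1e-6, xi_i), [1.0, 0.0], args=(sig2,), rtol=1e-9)
    xc, dxc = core.y[0, -1], core.y[1, -1]
    # Interface: jump the logarithmic derivative, then launch the envelope solution.
    dlnx_env = 3.0*(g_core/g_env - 1.0) + (g_core/g_env)*(xi_i*dxc/xc)
    env = solve_ivp(env_rhs, (eta_i, eta_s), [xc, xc*dlnx_env/eta_i], args=(sig2,), rtol=1e-9)
    xe, dxe = env.y[0, -1], env.y[1, -1]
    target = 0.5*(sig2/g_env*rho_rat - 2.0*a_env)    # surface condition quoted above
    return eta_s*dxe/xe - target

sig2_grid = np.linspace(0.1, 5.0, 50)                # trial range (assumed)
vals = [surface_mismatch(s) for s in sig2_grid]
for s1, s2, v1, v2 in zip(sig2_grid[:-1], sig2_grid[1:], vals[:-1], vals[1:]):
    if v1*v2 < 0:                                    # sign change brackets an eigenvalue
        print("sigma_c^2 eigenvalue near", brentq(surface_mismatch, s1, s2))
```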
# Lab Notebook
### (Introduction)
#### Coding
• cboettig pushed to master at cboettig/labnotebook: update site draft update layout tag 05:05 2013/12/06
• cboettig pushed to master at ropensci/reml: eml to rdf 11:46 2013/12/05
• cboettig pushed to master at ropensci/reml: example of how hf205.xml might be built in reml 11:45 2013/12/05
• cboettig commented on issue ropensci/reml#62: @emhart that would be awesome. I've just figured out how to get the text: library(RWordXML) f <- wordDoc("inst/examples/methods.docx") doc <- methods… 09:37 2013/12/05
• cboettig opened issue ropensci/reml#62: Parse a .docx file to get methods and other text 09:23 2013/12/05
#### Discussing
• RT @_inundata: Excited to announce that we’re (@ropensci) writing a book on open science. ETA July 2014. http://t.co/TyIY1WN5aE
10:28 2013/12/02
• RT @researchremix: Great read on CC0 vs CC-BY by @dancohen: http://t.co/7EeR8XdO7Q #openaccess #opendata
10:27 2013/12/02
• RT @pebourne: Who is willing to measure the reproducibility of research in their own lab? We did in this PLOS ONE paper http://t.co/hKZIfhO…
10:24 2013/12/02
• @davidjayharris I'm so glad someone actually looked at the paper links, they are far better than my babble. That one is delightful.
07:03 2013/11/26
• RT @davidjayharris: An actual paper title, via @cboettig: Are exercises like this a good use of anybody's time?
07:02 2013/11/26
• James Wilson White, Louis W Botsford, Alan Hastings et al. 2013. Stochastic models reveal conditions for cyclic dominance in sockeye salmon populations Ecological Monographs 10.1890/12-1796.1
• David Lindenmayer, Gene E. Likens. 2013. Benchmarking Open Access Science Against Good Science Bulletin of the Ecological Society of America 94 4 10.1890/0012-9623-94.4.338
• Elizabeth Eli Holmes, John L Sabo, Steven Vincent Viscido et al. 2007. A statistical approach to quasi-extinction forecasting. Ecology letters 10 12 10.1111/j.1461-0248.2007.01105.x
• Jan Esper, Ulf Büntgen, David C Frank et al. 2007. 1200 Years of Regular Outbreaks in Alpine Insects. Proceedings. Biological sciences / The Royal Society 274 1610 10.1098/rspb.2006.0191
• Santiago Salinas, Simon C. Brown, Marc Mangel et al. 2013. Non-genetic inheritance and changing environments Non-Genetic Inheritance 1 10.2478/ngi-2013-0005
19 Nov 2013
pageviews: 8
### RNeXML
• Feedback from Rutger, need to add about attributes so that RDFa abstraction references the right level of the DOM (issue #35).
• Looking for strategy for distilling RDF from RDFa in R, see my question on SO. Hopefully don’t have to wrap some C library myself…
### nonparametric-bayes
Writing writing.
• Update pandoc templates to use yaml metadata for author, affiliation, abstract, etc. Avoids having to manually edit the elsarticle.latex template with this metadata. Added fork for my templates, e.g. see my elsarticle.latex. Example metadata in manuscript.
• fixing xtable caption (as argument)
• Extended discussion. Adjustments to figures. See commit log /diffs for details.
Mace (2013) , e.g.
a new kind of ecology is needed that is predicated on scaling up efforts, data sharing and collaboration
hear hear.
• PNAS with a somewhat confused take on error rates, suggesting a revised threshold p-value…
• AmNat Asilomar schedule (pdf) is up.
17 Nov 2013
pageviews: (not calculated)
(From issue #20)
a question of how the user queries that metadata. Currently we have a metadata function that simply extracts all the metadata at the specified level (nexml, otus, trees, tree, etc) and returns a named character string in which the name corresponds to the rel or property and the value corresponds to the content or href, e.g.:
birds <- read.nexml("birdOrders.xml")
meta <- get_metadata(birds)
prints the named string with the top-level (default-level) metadata elements as so:
> meta
## dc:date
## "2013-11-17"
## "http://creativecommons.org/publicdomain/zero/1.0/"
Which we can subset by name, e.g. meta["dc:date"]. This is probably simplest to most R users; though exactly what the namespace prefix means may be unclear if they haven’t worked with namespaces before. (The user can always print a summary of the namespaces and prefixes in the nexml file using birds@namespaces).
This approach is simple, albeit a bit limited.
### XPath queries
For instance, the R user has a much more natural and powerful way to handle these issues of prefixes and namespaces using either the XML or rrdf libraries. For instance, if we extract meta nodes into RDF-XML, we could handle the queries like so:
xpathSApply(meta, "//dc:title", xmlValue)
which uses the namespace prefix defined in the nexml; or
xpathSApply(meta, "//x:title", xmlValue, namespaces=c(x = "http://purl.org/dc/elements/1.1/"))
defining the custom prefix x to the URI
### Sparql queries
Pretty exciting that we can make arbitrary SPARQL queries of the metadata as well.
library(rrdf)
sparql.rdf(ex, "SELECT ?title WHERE { ?x <http://purl.org/dc/elements/1.1/title> ?title }")
Obviously the XPath or SPARQL queries are more expressive / powerful than drawing out the metadata from the S4 structure directly. On the other hand, because both of these approaches use just the distilled metadata, the original connection between metadata elements and the structure of the XML tree is lost unless stated explicitly. An in-between solution is to use XPath on the nexml XML instead, though I think we cannot make use of the namespaces in that case, since they appear in attribute values rather than structure.
Anyway, it’s nice to have these options in R, particularly for more complex queries where we might want to make some use of the ontology as well. On the other hand, simple presentation of basic metadata is probably necessary for most users.
Would be nice to illustrate with a query that required some logical deduction from the ontology.
#### Mbi Day Five Notes
08 Nov 2013
pageviews: 27
Panel discussion
• Hugh’s question on the usefulness of dynamic vs static models: do we have dynamical systems envy?
• Chris: are temporal dynamics historical artefact, and space the new frontier?
• Hugh: though decision theory is fundamentally temporal. really question of sequential decision vs single decision
• Hugh, on what would be his priority if he had time for new question: Solve the 2 player, 2 step SDP competition closed form.
• Paul: the narrow definitions of “math biology” with 1980s flavor.
• @mathbiopaul: Formulating the hard problems arising in application in an appropriate abstraction that mathematicians will attack.
• Leah raises issue of publishing software and reproducibility
• Julia mentions Environmental modeling and software journal
### pdg-control
Trying to understand pattern of increasing ENPV with increasing stochasticity. Despite having the same optimal policy inferred under increasing stochasticity (i.e. still in Reed’s self-sustaining criterion, below $$\sigma_g$$ of 0.2 or so) the average over simulated replicates is higher. We don’t seem to obtain the theoretical ENPV, but something less, in either case. See code noise_effects.md.
### ropensci
Schema.org defines a vocabulary for datasets (microdata/rdfa)
Rutger gives a one-liner solution for tolweb to nexml using bio-phylo perl library:
perl -MBio::Phylo::IO=parse -e 'print parse->to_xml' format tolweb as_project 1 url 'http://tolweb.org/onlinecontributors/app?service=external&page=xml/TreeStructureService&node_id=52643'
Hmm, there’s a journal of Ecological Informatics.
## References
• Rebecca S. Epanchin-Niell, Robert G. Haight, Ludek Berec, John M. Kean, Andrew M. Liebhold, Helen Regan, (2012) Optimal Surveillance And Eradication of Invasive Species in Heterogeneous Landscapes. Ecology Letters 15 803-812 10.1111/j.1461-0248.2012.01800.x
• Rebecca S. Epanchin-Niell, James E. Wilen, (2012) Optimal Spatial Control of Biological Invasions. Journal of Environmental Economics And Management 63 260-270 10.1016/j.jeem.2011.10.003
• J. Esper, U. Buntgen, D. C Frank, D. Nievergelt, A. Liebhold, (2007) 1200 Years of Regular Outbreaks in Alpine Insects. Proceedings of The Royal Society B: Biological Sciences 274 671-679 10.1098/rspb.2006.0191
• unknown Fagan, unknown Meir, unknown Prendergast, unknown Folarin, unknown Karieva, (2001) Characterizing Population Vulnerability For 758 Species. Ecology Letters 4 132-138 10.1046/j.1461-0248.2001.00206.x
• A. R. Hall, A. D. Miller, H. C. Leggett, S. H. Roxburgh, A. Buckling, K. Shea, (2012) Diversity-Disturbance Relationships: Frequency And Intensity Interact. Biology Letters 8 768-771 10.1098/rsbl.2012.0282
• Elizabeth Eli Holmes, John L. Sabo, Steven Vincent Viscido, William Fredric Fagan, (2007) A Statistical Approach to Quasi-Extinction Forecasting. Ecology Letters 10 1182-1198 10.1111/j.1461-0248.2007.01105.x
• Brian Leung, Nuria Roura-Pascual, Sven Bacher, Jaakko Heikkilä, Lluis Brotons, Mark A. Burgman, Katharina Dehnen-Schmutz, Franz Essl, Philip E. Hulme, David M. Richardson, Daniel Sol, Montserrat Vilà, Marcel Rejmanek, (2012) Teasing Apart Alien Species Risk Assessments: A Framework For Best Practices. Ecology Letters 15 1475-1493 10.1111/ele.12003
• A. D. Miller, S. H. Roxburgh, K. Shea, (2011) How Frequency And Intensity Shape Diversity-Disturbance Relationships. Proceedings of The National Academy of Sciences 108 5643-5648 10.1073/pnas.1018594108
• Adam David Miller, Stephen H. Roxburgh, Katriona Shea, (2011) Timing of Disturbance Alters Competitive Outcomes And Mechanisms of Coexistence in an Annual Plant Model. Theoretical Ecology 5 419-432 10.1007/s12080-011-0133-1
07 Nov 2013
pageviews: 17
## Morning session
• Chadès, I., Carwardine, J., Martin, T.G., Nicol, S., Sabbadin, R. & Buffet, O. (2012) MOMDPs: a solution for modelling adaptive management problems. The Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI-12), pp. 267-273. Toronto, Canada.
• 10.1098/rspb.2013.0325 Migratory connectivity magnifies the consequences of habitat loss from sea-level rise for shorebird populations
#### Jake LaRiviera
presents the challenges of the uncertainty table. Additional challenges in making an apples-to-apples comparison of the benefit of decreasing noise of different systems (e.g. in pricing information?)
#### Me
Some good questions following talk, primarily on BNP part.
• Where does the risk-averse vs risk-prone behavior come from? Adjusting curvature of the uncertainty appropriately.
• Any lessons after stock collapsed, e.g. Rebuild a stock rather than maintain it? (Perhaps, but may face hysteresis in a way the initial collapse does not).
• Brute-force passive learning?
## Afternoon discussion
1. Is an active learning approach more or less valuable in a changing environment
2. Embracing surprise: how do we actually mathematically do this.
3. Limitations due to constraints on frequency of updating. (e.g. we don’t get to change harvest, we get to set a TAC once every ten years).
4. Uncertainty affecting net present value vs affecting model behavior
06 Nov 2013
pageviews: 17
## MBI Workshop
#### Becky Epanchin-Niell
• Motivating example of Rabies spatial control in Switzerland
• 2012 Eco Let: should New Zealand survey for bark beetle? Cost of survellience, control, and damage. Epanchin-Niell et al. (2012)
• 2012 JEEM spatial spread of star thistle: control spread and eradication as an integer programming problem in deterministic context. (with Wilen) Epanchin-Niell & Wilen (2012)
• Breaking landscape into individual landowners makes it less valuable to control early.
#### Brian Leung
“Data, uncertainty and risk in biological invasions”
• Alien risk assessments Leung et al. (2012) , shows a dominance of rank scoring over truely quantitative approaches. Limitations to each.
• It’s not the model complexity, but the implementation interface that poses the real barrier. Better integration is also needed.
Scott Barrett
### PDG Control
• back-and-forth with Paul: is second column redundant? Seems not: NPV when paying a penalty that doesn’t exist is profit under penalty ($$\Pi_0(x,h)$$) minus zero (adjustment cost), while scaling is set such that profit under penalty minus adjustment cost is $$\Pi_0(x,h) - \Pi_1(h, c_2)$$. Also, better to normalize everything against the adjustment-free ENPV ($$\Pi_0$$) than to normalize by the truth/simulation model (which differs in different cases).
See commit log for updated versions.
## References
bibliography()
• Rebecca S. Epanchin-Niell, Robert G. Haight, Ludek Berec, John M. Kean, Andrew M. Liebhold, Helen Regan, (2012) Optimal Surveillance And Eradication of Invasive Species in Heterogeneous Landscapes. Ecology Letters 15 803-812 10.1111/j.1461-0248.2012.01800.x
• Rebecca S. Epanchin-Niell, James E. Wilen, (2012) Optimal Spatial Control of Biological Invasions. Journal of Environmental Economics And Management 63 260-270 10.1016/j.jeem.2011.10.003
• A. R. Hall, A. D. Miller, H. C. Leggett, S. H. Roxburgh, A. Buckling, K. Shea, (2012) Diversity-Disturbance Relationships: Frequency And Intensity Interact. Biology Letters 8 768-771 10.1098/rsbl.2012.0282
• Brian Leung, Nuria Roura-Pascual, Sven Bacher, Jaakko Heikkilä, Lluis Brotons, Mark A. Burgman, Katharina Dehnen-Schmutz, Franz Essl, Philip E. Hulme, David M. Richardson, Daniel Sol, Montserrat Vilà, Marcel Rejmanek, (2012) Teasing Apart Alien Species Risk Assessments: A Framework For Best Practices. Ecology Letters 15 1475-1493 10.1111/ele.12003
• A. D. Miller, S. H. Roxburgh, K. Shea, (2011) How Frequency And Intensity Shape Diversity-Disturbance Relationships. Proceedings of The National Academy of Sciences 108 5643-5648 10.1073/pnas.1018594108
• Adam David Miller, Stephen H. Roxburgh, Katriona Shea, (2011) Timing of Disturbance Alters Competitive Outcomes And Mechanisms of Coexistence in an Annual Plant Model. Theoretical Ecology 5 419-432 10.1007/s12080-011-0133-1
05 Nov 2013
pageviews: 21
## MBI Workshop
#### Paul Armsworth
Very nice example of control in creating a market for ecosystem services for landowners. Key feature is that multiple land-owners respond by adjusting their prices, and so payments can be divided into a fraction going into subsidy and a fraction going into conservation. When one land-owner controls land that is particularly good in cost per unit of conservation accomplished, they also stand to gain the largest value.
Also looked at auction mechanism and impact of cooperation among landowners to create hold outs.
#### Hugh Possingham
Hot spot assignment. Marxan and success while ignoring dynamics. {to what extent is data substitute for dynamics}
Dynamics (but no heterogeneity)
#### Bill Fagan
Linking individual movements to population dynamics. Home range vs migration vs nomadic motion.
#### Afternoon breakout
Looking at role of institutions and tractability of implementation problems. Interesting observation from Lou in the long-tail effect of certain individuals in explaining heterogeneity in implementation success.
### PDG Control
Working on table following issue #41
Based on errors_table.Rmd, plot from plot_table.Rmd
shows the effect of greater noise actually reducing the impact of being wrong (either by ignoring adjustment costs that exist or assuming adjustment costs that don’t exist). Bigger induced reduction in NPV (higher cost) naturally decreases value.
Functional form doesn’t matter when assuming costs that don’t exist, since these are calibrated to be equivalent by selecting the coefficients in order to have equal reduction in NPV. Obviously functional form does matter when ignoring penalties that do exist, and it seems that ignoring L2 penalties is most damaging, ignoring L1 penalties the least damaging?
penalty_fn ignore_cost ignore_fraction assume_cost assume_fraction sigma_g reduction
1 L1 14536.09 1.00 16857.61 1.00 0.05 0.10
2 L2 11020.10 0.76 15538.44 0.92 0.05 0.10
3 fixed 13584.97 0.92 17582.78 1.05 0.05 0.10
4 L1 9273.66 1.09 17561.81 1.04 0.05 0.20
5 L2 11020.10 0.76 15538.44 0.92 0.05 0.20
6 fixed 8332.45 0.84 17401.82 1.03 0.05 0.20
7 L1 -641.78 0.21 15451.25 0.92 0.05 0.30
8 L2 -272586.11 -24.90 11567.12 0.69 0.05 0.30
9 fixed -1213.01 -0.46 15519.95 0.92 0.05 0.30
10 L1 18281.97 1.01 20973.56 0.99 0.20 0.10
11 L2 12462.51 0.72 19392.85 0.91 0.20 0.10
12 fixed 16691.94 1.03 20143.80 0.95 0.20 0.10
13 L1 12788.95 0.96 20663.21 0.97 0.20 0.20
14 L2 12462.51 0.72 19392.85 0.91 0.20 0.20
15 fixed 12399.01 0.88 21289.21 1.00 0.20 0.20
16 L1 9099.48 0.73 19999.07 0.94 0.20 0.30
17 L2 -6337.76 -0.39 17765.33 0.83 0.20 0.30
18 fixed 7399.01 0.62 21969.84 1.03 0.20 0.30
19 L1 30227.43 1.07 31377.07 0.94 0.50 0.10
20 L2 33472.18 1.00 33472.18 1.00 0.50 0.10
21 fixed 28926.73 1.09 30500.85 0.91 0.50 0.10
22 L1 24157.93 1.10 31767.92 0.95 0.50 0.20
23 L2 19091.31 0.93 29225.46 0.87 0.50 0.20
24 fixed 23573.19 1.02 30069.80 0.90 0.50 0.20
25 L1 21086.12 1.12 32035.93 0.96 0.50 0.30
26 L2 19091.31 0.93 29225.46 0.87 0.50 0.30
27 fixed 17209.56 0.85 32946.24 0.98 0.50 0.30
## rOpenSci / ecoinformatics
• Exploring strategies for addressing certificate authentication workflow.
• Plans to merge dataone and rdataone, shedding the current rJava dependency and dealing with existing vs new namespacing #1
04 Nov 2013
pageviews: 20
## Lou Gross
• Big picture: big data, computational challenges.
• “Convergence” as the new interdisciplinary (National Academies)
• Rise of synthesis centers
• “Enabling architecture for next gen life science research” – National Academies report Lou Gross (2013)
• Comp Science for Natural Resource Management – Fuller, Wang, Gross (2007)
• language barriers: What’s a model? Mouse, drosophila? logistic, ricker? GIS map layers? …
Constraints frequently dominate, not the control or the state equation.
Let stakeholders make their own rankings. Scenario analysis vs optimal control. Uncertainties! “Relative assessment protocol” Fuller, Gross, Duke-Sylvester, Palmer. “Testing the robustness of management decisions under uncertainty” (ATLSS modeling)
### Breakouts
notes from my breakout session:
#### Questions
• Sensitivity analysis of Scenario Rankings
• What does a Resilience approach add
• Generalities
#### Tools for decision under uncertainty
• optimization
• threshold planning
• scenario planning
• resilience thinking
#### optimization
• Info gap “theory”
• Satisfiability / mini-max
• model approximation methods
• dynamic programming
• To what extent are these approaches different sides of the same coin?
• Are there truly non-optimization based approaches?
• Almost-optimal approaches
• Including constraints
#### what we do well
• Optimize easy problems
• open loop
#### State-of-the-art
• Starting to: simulate optimal solutions to simple problems under more realistic circumstances
• Starting to find multiple “optima”
• feedback control (SDP)
• large state space
#### Open challenges / what we do poorly
• Dual control under parameter uncertainty (without restrictive assumptions on parameters)
• high-dimensional problems
• multiple stake-holders / game-theory solutions (outside fisheries)
• mapping between control and implementation (partial controllability)
• large action space
• (multiple) delayed effect actions
#### Open challenges: Multiple stake-holder games
• beyond 2-player differential games (with feedback)
• simultaneous player actions
#### Open challenges: adaptive management timescales
• frequency of revisiting decisions
• biological timescales
• political timescales
#### Known nuisances
• curse of dimensionality
• data collection methods
• numerical methods
• local vs global
#### Missed things
• spatial data, using rich data under the curse of dimensionality
## ropensci
Writing out proof-of-principle interface to the dataone REST API, see rdataone and Introduction to the package
Key things:
• We can accomplish handling of certificates from httr: just add a config argument such as config(sslcert = "<path to certificate>"); see ?httr::config, e.g. archive a file with:
httr::PUT(paste0("https://knb.ecoinformatics.org/knb/d1/mn/v1/archive/", pid),  # 'pid' = identifier of the object to archive (variable name assumed)
          config = config(sslcert = "/tmp/x509up_u1000"))
Posting new data requires writing a system metadata XML file. Currently have a crude minimal version of this, write_sysmeta.R, should see how dataone package is handling this.
#### Rnexml Semantic Considerations
17 Oct 2013
pageviews: 18
Some very good feedback from Hilmar as I tackle some of the semantic capabilities of NeXML in RNeXML. As the complete discussion is already archived in the GitHub issue tracker (see in particular #26 and #24), I only paraphrase here.
One of the central advantages we can offer in programmatic generation of NeXML in the R environment is the ability to validate names and enhance the metadata included in the resulting nexml file using programmatic queries to taxonomic name resolution services such as ITIS, EOL, NCBI, as provided in the taxize package.
A subtlety to this approach is discussed in issue #24. Whenever we provide new data, we should also provide future users the appropriate metadata describing where it came from, and whether we found an exact match or maybe only a close match (perhaps an alternative spelling of the species name was used). Specifying the provenance exactly can become quite verbose; for each taxonomic unit we have:
which corresponds to adding RDFa to the NeXML looking something like:
<otus id="tax1">
<otu label="Struthioniformes" id="t1">
<meta xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns="http://www.w3.org/1999/xhtml"
xmlns:obo="http://purl.obolibrary.org/obo/"
xmlns:tc="http://rs.tdwg.org/ontology/voc/TaxonConcept#"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:tnrs="http://phylotastic.org/terms/tnrs.rdf#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
class="rdf2rdfa">
typeof="obo:CDAO_0000138">
<meta property="rdfs:label" content="Panthera tigris HQ263408"/>
<meta rel="tnrs:resolvesAs">
<meta class="description" typeof="tnrs:NameResolution">
<meta property="tnrs:matchCount" content="2"/>
<meta rel="tnrs:matches">
<meta class="description" typeof="tnrs:Match">
<meta property="tnrs:acceptedName" content="Panthera tigris"/>
<meta property="tnrs:matchedName" content="Panthera tigris"/>
<meta property="tnrs:score" content="1.0"/>
<meta rel="tc:toTaxon" resource="http://www.ncbi.nlm.nih.gov/taxonomy/9694"/>
<meta rel="tnrs:usedSource">
typeof="tnrs:ResolutionSource">
<meta property="dc:description" content="NCBI Taxonomy"/>
<meta property="tnrs:hasRank" content="3"/>
<meta property="tnrs:sourceStatus" content="200: OK"/>
<meta property="dc:title" content="NCBI"/>
</meta>
</meta>
</meta>
</meta>
<meta rel="tnrs:matches">
<meta class="description" typeof="tnrs:Match">
<meta property="tnrs:acceptedName" content="Megalachne"/>
<meta property="tnrs:matchedName" content="Pantathera"/>
<meta property="tnrs:score" content="0.47790686999749"/>
<meta rel="tc:toTaxon" resource="http://www.tropicos.org/Name/40015658"/>
<meta rel="tnrs:usedSource">
typeof="tnrs:ResolutionSource">
<meta property="dc:description"
content="The iPlant Collaborative TNRS provides parsing and fuzzy matching for plant taxa."/>
<meta property="tnrs:hasRank" content="2"/>
<meta property="tnrs:sourceStatus" content="200: OK"/>
<meta property="dc:title" content="iPlant Collaborative TNRS v3.0"/>
</meta>
</meta>
</meta>
</meta>
<meta rel="dcterms:source">
<meta class="description"
typeof="tnrs:ResolutionRequest">
<meta property="tnrs:submitDate" content="Mon Jun 11 20:25:16 2012"/>
<meta rel="tnrs:usedSource" resource="http://tnrs.iplantcollaborative.org/"/>
<meta rel="tnrs:usedSource" resource="http://www.ncbi.nlm.nih.gov/taxonomy"/>
</meta>
</meta>
<meta property="tnrs:submittedName" content="Panthera tigris"/>
</meta>
</meta>
</meta>
</meta>
</otu>
Why provenance? This can be particularly important in tracking down errors or inconsistencies. For instance, imagine the taxonomic barcode id number we provide for a taxon is later divided into multiple species. Ideally this would be reflected in the updated entries of the barcode service, establishing new id numbers for the split members and identifying that the old id was split – after all, a barcode system is supposed to facilitate addressing these kinds of issues.
Still, this appears much more verbose than say, what TreeBase does when automatically adding identifiers as annotations to the NeXML otu nodes.
Meanwhile, in more concrete terms, we seem to have some consensus on using NCBI taxonomic ids, since its API queries are pretty fast:
<otus id="tax1">
<otu label="Struthioniformes" id="t1">
<meta xsi:type="ResourceMeta" href="http://ncbi.nlm.nih.gov/taxonomy/8798" rel="tc:toTaxon"/>
</otu>
Note this example uses the tc: http://rs.tdwg.org/ontology/voc/TaxonConcept# toTaxon concept to provide an ontological definition of the link as a taxon identifier.
NCBI does not do partial matching, so we simply warn when a user’s taxonomic names do not match an NCBI id, giving them a chance to correct them if in error (either manually or automatically using the partial name matching functions in taxize)
#### Is it time to retire Pagel's lambda?
11 Oct 2013
pageviews: 482
Pagel’s $$\lambda$$ (lambda) was introduced in Pagel 1999 as a potential measure of “phylogenetic signal”: the extent to which correlations in traits reflect their shared evolutionary history (as approximated by Brownian motion).
Numerous critiques and ready alternatives have not appeared to decrease its popularity. There are many issues with the statistic, some of which I attempt to summarise below.
The $$\lambda$$ statistic is defined by the Brownian motion model together with a transformation of the branch lengths: multiply all internal branches by $$\lambda$$. The motivation for the definition is obvious: $$\lambda = 1$$ the tree is unchanged and the model equivalent to Brownian motion, while for $$\lambda = 0$$ the tree becomes a star phylogeny and the model is equivalent to completely independent random walks. $$0 < \lambda < 1$$ provides an intermediate range where the correlations are weaker than expected.
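For concreteness (an illustration added here, not part of the original post): in practice the transformation is usually applied to the Brownian-motion covariance matrix of the tips, multiplying the off-diagonal entries by $$\lambda$$ while leaving the diagonal alone. A minimal, language-agnostic sketch with a hypothetical 3-taxon covariance matrix:

```python
# Pagel-lambda transform on a Brownian-motion covariance matrix (toy illustration).
import numpy as np

def lambda_transform(C, lam):
    """Scale off-diagonal entries of C by lam; keep the diagonal unchanged."""
    C = np.asarray(C, dtype=float)
    out = lam * C
    np.fill_diagonal(out, np.diag(C))
    return out

# Hypothetical covariance for a 3-taxon ultrametric tree of depth 12.
C = np.array([[12.0, 10.0,  1.0],
              [10.0, 12.0,  1.0],
              [ 1.0,  1.0, 12.0]])
print(lambda_transform(C, 0.0))   # lambda = 0: star phylogeny, independent tips
print(lambda_transform(C, 1.0))   # lambda = 1: unchanged, plain Brownian motion
```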
Problem 1: It is biological nonsense to treat tips different from other edges.
All other problems arise from this. While it is okay that a statistic does not have a corresponding evolutionary model, being part of an explicit model might have helped avoid this silliness. Technically $$\lambda$$ is a model, but one that treats evolution along “tips” as special, as if evolution should follow completely different rules for a species alive today relative to its former evolutionary history. Sounds almost creationist.
Problem 2: The statistic doesn’t measure what it says it measures.
To demonstrate this, we can consider two cases in which phylogeny has the identical effect of explaining trait correlations, and yet have very different lambdas. Consider that Researcher 1 examines the phylogeny in Figure 1 and estimates very little phylogenetic signal, $$\lambda = 0.1$$.
library(ape)
cat("(((A_sp:10,B_sp:10):1,C_sp:11):1,D_sp:12);", file = "ex.tre", sep = "\n")
plot(ex)
Now Researcher 2 discovers closely related sister species of some of the taxa originally studied, as in Figure 2.
cat("((((A_sp:1, A2_sp:1):9,(B_sp:1, B2_sp:1):9):1,(C_sp:1, C2_sp:1):10):1,(D_sp:1, D2_sp:1):11);",
file = "ex2.tre", sep = "\n")
plot(ex2)
The traits of sister taxa are very similar (indeed, let us assume the sister species are hard to distinguish morphologically - perhaps why they were overlooked by Researcher 1). The OU or BM model estimates made by Researcher 2 will closely agree with those of Researcher 1, since the sister taxa have quite similar traits. Yet the $$\lambda$$ estimates differ greatly – all of a sudden the phylogenetic signal must be quite high!
And yet the underlying evolutionary process by which we have simulated the data has been unchanged! The difference arises because what formerly appeared as long tips have become short tips. How do we interpret a metric that depends so heavily on whether or not all sister species are present in the data? As noted, this problem does not impact other phylogenetic comparative methods to nearly the same extent.
Problem 3: The statistic has no notion of timescale or depth in the phylogeny.
In $$\lambda$$ (and other definitions such as Blomberg’s K), phylogenetic signal is an all-or-nothing proposition. If really recently diverged species happen to resemble each other, while species that have diverged for longer than, say, a couple million years show no correlation – is this phylogenetic signal or not? This ‘extinction’ of phylogenetic signal as we go far enough back in time seems like a biologically reasonable concept that is perfectly well expressed in the $$\alpha$$ parameter of the OU model, but is lost in the consideration of $$\lambda$$. If folks really want to estimate a continuous quantity to measure phylogenetic signal, I suggest $$\alpha$$ is a far more meaningful number (note that it has units! (1/time or 1/branch length)).
Consider the returning force alpha in the OU model (i.e. stabilizing selection). When alpha is near zero, the model is essentially Brownian (i.e. ‘strong phylogenetic signal,’ where more recently diverged species are more similar on average than distantly related ones). When alpha is very large, traits reflect the selective constraint of the environment rather than their history, and so recently diverged species are no more or less likely to be similar than distant ones (provided all species in question are under the same OU model / same selection strength for the trait in question). The size of alpha gives the timescale over which ‘phylogenetic signal’ is lost (in units of the branch length). Two very recently diverged sister-taxa may thus show some phylogenetic correlation because their divergence time is of order 1/alpha, while those with longer divergence times behave as phylogenetically independent, such as in our Figure 2 above. I find this an imperfect but reasonable meaning of phylogenetic signal.
If we restrict $$\lambda$$ to be strictly 1 or 0 these problems are alleviated, though then it is unnecessary to define the statistic as such, since we may instead consider a star tree (sometimes called the “white noise” model of evolution).
#### other such statistics
Pagel’s $$\delta$$ is a transformation on node depth, which is again problematic as there is no meaningfully consistent way to describe what is a node (think about deep speciation events with no present day ancestor). I believe $$\kappa$$ would also be problematic to interpret as it is a nonlinear transform of branch length – it raises branch length to a power – and thus would have a rather different effect depending on the units in which branch length were measured. (For instance, consider the case where the tree is scaled to length unity, so all branch lengths are less than one and thus become shorter with large exponents, vs one in which lengths are all larger than one.) Fortunately these statistics are far less popular than $$\lambda$$.
# MTAP Reviewer for Grade 9 Solution Series(31-35)
Here is part 7 of the solution series for the 2015 MTAP (MMC) reviewer for the Grade 9 elimination level. Other solutions can be found on this website, as well as a PDF copy of all the problems.
Problem 31:
An equilateral triangle and a square have the same perimeter. What is the ratio of the length of a side of the triangle to the length of a side of the square?
Solution:
The perimeter of equilateral triangle can be found using the formula,
$P=3t$
where t is the length of the side
The perimeter of square can be found using the formula,
$P=4s$
where s is the length of the side
Since the perimeter is equal, we equate their perimeters,
$P=P$
$3t=4s$
Since we are asked to find the ratio of the side of the triangle to the side of the square, we find t/s.
$\dfrac{t}{s}=\dfrac{4}{3}$
Thus, the ratio is 4:3
Problem 32:
John cuts an equilateral triangular paper whose sides measure 2 cm into pieces. He then rearranges the pieces to form a square without overlapping. How long is the side of the square formed?
Solution:
After cutting and rearranging, the area of the figure stays the same. To find the area of an equilateral triangle, we use the following formula,
$A=\frac{a^2\sqrt{3}}{4}$
Where a is the length of the side of an equilateral triangle.
Finding the area we have,
$A=\frac{2^2\sqrt{3}}{4}$
$A=\frac{4\sqrt{3}}{4}$
$A=\sqrt{3}$
Now, since the area of this triangle is equal to the area of the square, we can use the formula of the area of the square to find the length of the side.
$A=s^2$
Now, we equate the area of the triangle,
$\sqrt{3}=s^2$
Since we are solving for s, we take the square root of both sides,
$\sqrt{\sqrt{3}}=\sqrt{s^2}$
$\sqrt[4]{3}=s$
Or,
$\boxed{s=\sqrt[4]{3}}$
Problem 33:
The sides of a triangle are of lengths 5, 12 and 13 cm. What is the length of its shortest altitude?
Solution:
Draw the triangle and label like shown below.
Since the triangle formed is a right triangle, the legs 12 and 5 are themselves altitudes. Draw the altitude to the hypotenuse and label it x; the legs 5 and 12 become the hypotenuses of the two new triangles formed, so x is the shortest altitude.
Now, take note that $\triangle ABC\sim\triangle BDC\sim\triangle ADB$
By similar triangles we can solve for the value of x.
$\dfrac{BD}{BC}=\dfrac{AB}{AC}$
$\dfrac{x}{5}=\dfrac{12}{13}$
$x=5(\dfrac{12}{13})$
$\boxed{x=\dfrac{60}{13}}$
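(As a quick check, added here: computing the area of the triangle in two ways gives $\frac{1}{2}(5)(12)=\frac{1}{2}(13)x$, so $x=\frac{60}{13}$, in agreement with the similar-triangle argument.)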
Problem 34:
Each side of triangle ABC measures 8 cm. If D is the foot of the altitude drawn from A to the side BC and E is the midpoint of AD, how long is segment BE?
Solution:
Draw the figure as shown below,
Since this is an equilateral triangle, triangle ADB is a 30-60-90 triangle. Using the properties of this triangle, the side AD is $\sqrt{3}$ times the length of BD. Thus,
$AD=\sqrt{3}BD$
$AD=\sqrt{3}(4)$
$AD=4\sqrt{3}$
Since E is the midpoint of AD, $AE=ED=2\sqrt{3}$
Now, triangle BDE also forms a right triangle and we can solve for BE using Pythagorean theorem.
$BE^2=BD^2+ED^2$
$BE^2=4^2+(2\sqrt{3})^2$
$BE^2=16+12$
$BE^2=28$
$\sqrt{BE^2}=\sqrt{28}$
$\boxed{BE=2\sqrt{7}}$
Problem 35:
A point E is chosen inside a square of side 8 cm such that it is equidistant from two adjacent vertices of the square and the side opposite these vertices. Find the common distance.
Solution:
Draw the figure like shown below,
For a point to be equidistant from two adjacent vertices, it must lie on the axis of symmetry of the square between those vertices. Label the figure accordingly; if your figure is not correct, your solution will not be correct either.
To find the value of x, we use the right triangle formed on the upper-left side or the upper-right side; either of the two works.
Using Pythagorean theorem we have,
$c^2=a^2+b^2$
$x^2=4^2+(8-x)^2$
$x^2=16+64-16x+x^2$
$16x=80$
$x=\dfrac{80}{16}$
$\boxed{x=5}$
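(As a quick check, added here: with $x=5$ the point lies $8-5=3$ units from the side joining the two vertices and $4$ units horizontally from each of them, so its distance to each vertex is $\sqrt{3^2+4^2}=5$, matching its distance of $5$ to the opposite side.)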
We would like to thank MMC 2013 finalist Daniel James Molina for helping us solve this problem.
# Syllabus¶
## Introductory Numerical Analysis: Numerical Linear Algebra, Math 504 / CS 575 - Spring 2016¶
Time and Place:
13:00-13:50, MWF, Science & Math Learning Center 356.
Instructor:
Daniel Appelo, 277-3310, appelo kanelbulle math.unm.edu
Required text book:
1. G. Golub and C. Van Loan, Matrix Computations, 2013, Johns Hopkins University Press, distributed by SIAM.
Recommended text books:
1. J. Demmel, Applied Numerical Linear Algebra, 1997, SIAM.
2. L. Trefethen and D. Bau, Numerical Linear Algebra, 1997, SIAM.
3. N. Higham, Accuracy and Stability of Numerical Algorithms, 2002, SIAM.
4. C. Meyer, Matrix Analysis and Applied Linear Algebra, 2000, SIAM.
5. A. Laub, Matrix Analysis for Scientists and Engineers, 2005, SIAM.
Office hours:
SMLC 310 Tuesday 13.00-15.00, Wednesday 15.30-17.00.
Prerequisites:
Prerequisites: MATH 464/514. I will also assume you have the basic skills of an applied mathematician or engineer in the field of computational science and engineering (the third pillar of science).
Description and goals:
From the course handbook: Direct and iterative methods of the solution of linear systems of equations and least squares problems. Error analysis and numerical stability. The eigenvalue problem. Descent methods for function minimization, time permitting.
The goals of this class are:
• To acquire practical and theoretical knowledge of computational algorithms for numerical linear algebra.
• To acquire a broad knowledge of the algorithms that are available for linear systems, linear least squares, eigenvalue problems.
• To master proof techniques commonly used in numerical linear algebra.
• To understand how to efficiently implement the algorithms and methods discussed in class on serial machines.
• To understand what influences the efficiency of the implementations in serial and parallel (we will not actually implement the algorithms in parallel).
• To understand what implications finite precision number systems have on the accuracy and stability of the algorithms we consider.
Homework / Computer Projects:
The homework will consist of weekly theoretical and computational assignments. The programs that are used for the computational assignments should be kept under version control using lobogit.unm.edu and must be shared with the instructor and TA. The computer programs should be written in Matlab, Fortran or C.
The homework reports should preferably be typed up. Grading will be made on the mathematical correctness, the correctness and style of the implementation and the overall style of the report.
Exams:
There will be two midterm exams and a final exam. The exams will be oral. Each midterm will be 25 minutes per person and consist of board work solving one or two problems. You will have 25 minutes to prepare. The final will be in the same format but longer, approximately one hour. The exams will be pass or fail.
Your grade for this course is based on homework and computing projects, in-class work/attendance, and exams, in the following proportion: Homework/computing projects 75%, exams 25%.
To get a letter grade A you will have to pass all the exams on the first try.
After the weighted percentage grade has been calculated as detailed above, letter grades will be assigned according to the following scheme: A, 90 or above, B, 80 or above, C, 70 or above, D, 60 or above, F below 60. However, the instructor reserves the right to “curve” grades to offset unforeseen circumstances. The curving of grades will never decrease a student’s letter grade below that given by the above formula.
Dishonesty policy:
Each student is expected to maintain the highest standards of honesty and integrity in academic and professional matters. The University reserves the right to take disciplinary action, including dismissal, against any student who is found responsible for academic dishonesty. Any student who has been judged to have engaged in academic dishonesty in course work may receive a reduced or failing grade for the work in question and/or for the course. Academic dishonesty includes, but is not limited to, dishonesty on quizzes, tests or assignments; claiming credit for work not done or done by others; and hindering the academic work of other students.
American disabilities act:
In accordance with University Policy 2310 and the American Disabilities Act (ADA), academic accommodations may be made for any student who notifies the instructor of the need for an accommodation. It is imperative that you take the initiative to bring such needs to the instructor’s attention, as the instructor is not legally permitted to inquire. Students who may require assistance in emergency evacuations should contact the instructor as to the most appropriate procedures to follow. Contact Accessibility Services at 505-661-4692 for additional information
Disclaimer:
I reserve the right to make reasonable and necessary changes to the policies outlined in this syllabus. Whenever possible, the class will be notified well in advance of such changes. An up-to-date copy of the syllabus can always be found on the course website. It is your responsibility to know and understand the policies discussed therein. If in doubt, ask questions. |
### Re: Apostrophes turn to “///////” in Subject Line of Private Messages
Lance Willett
Participant
@lancewillett
I also noticed a similar behavior here in this very forum where I’m typing (forums, edit topic).
If I post a topic, then edit it, more slashes are added each time I edit. For example, if I edit the same post 3 or 4 times, I’ll end up with \\\\\\\\\\\\\\\it's instead of just it's. |
# Math Help - sum of two roots
1. ## sum of two roots
find
2. Originally Posted by fxs12
find
Hint: Try squaring it (and then take the square root after simplifying the result).
3. there is a second solution
4. Originally Posted by fxs12
there is a second solution
But only one of 6 or 4*sqrt(2) is correct. I admit I don't know how to tell which is right without a calculator though.
-Dan
5. Originally Posted by topsquark
But only one of 6 or 4*sqrt(2) is correct. I admit I don't know how to tell which is right without a calculator though.
-Dan
Six is correct. The 4*root(2) solution comes from when the first square root term is negative, as no sign is specified. |
# Homework Help: Creating new energy by E-m conversion - why can't this work?
1. Dec 1, 2012
### Michael Redei
1. The problem statement, all variables and given/known data
This isn't a homework problem, but a question my physics teacher asked us in school, more than 25 years ago. I've thought about this from time to time, but I never found anyone to ask whether my explanation is at least plausible.
Imagine you had a machine that could convert matter into energy and vice-versa. Now you take a mass, say, 1kg of iron, and convert it into energy that you receive as photons. Direct the stream of photons to the ISS, where you use another machine to convert it back to 1kg of iron.
Obviously, the 1kg in orbit has a greater potential energy that it had back on earth. Where did this energy come from?
2. Relevant equations
None. No exact (numerical) answer is expected. Assume no atmosphere between the earth and the ISS, and assume your machines convert mass to energy and back without losing either during the process.
3. The attempt at a solution
It's obvious that the mass on the ISS must be less than it was on earth, so it must have decreased somewhere "en route" while it was in the form of light. The number of photons can't have decreased, so it's their frequency that must have. The beam of light must have been red-shifted, but how?
My only explanation is that this happens every time an electromagnetic wave moves away from a source of gravity. Hence, to exaggerate a bit, if you point a ray of blue light at the moon, an astronaut there would see it as red. And, conversely, if this astronaut shines a red light on us, we'd see it as blue. (That's VERY exaggerated; I don't know whether the change in frequency would be noticeable at all to a human eye.)
Is this what my teacher was thinking of? Or am I missing something?
2. Dec 1, 2012
### Staff: Mentor
Light leaving a gravitational well loses energy. This is a consequence of General Relativity.
1kg of rest mass is the same amount of mass no matter where it is, regardless of its potential energy. To assemble 1kg of mass in a given location requires the same amount of energy, $E = M c^2$. However, whatever energy is required to do it has to first get to that location in some way, either by transporting it in some material form or, as you suggest, "beaming" it there as photons.
Since photons lose energy climbing out of a gravitational well you'll have to supply additional energy over and above that which comes directly from converting the 1kg mass to photons on the Earth's surface, so that the total energy made available at the new location is the required amount to "build" 1kg of matter. The additional energy you need to supply will in fact match the potential energy difference between the starting location on the Earth and the orbit at the ISS, plus any kinetic energy difference corresponding to the orbital velocity. The extra energy will be "small change" compared to the amount of energy of 1kg converted to photons!
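To put rough numbers on this (a back-of-the-envelope estimate added here, treating g as constant over the climb, which is only approximately true out to the ISS, and ignoring the orbital kinetic-energy term mentioned above): the fractional gravitational redshift is

$\frac{\Delta f}{f} \approx \frac{\Delta\Phi}{c^2} \approx \frac{g\,\Delta h}{c^2} \approx \frac{(9.8~\mathrm{m/s^2})(4\times 10^{5}~\mathrm{m})}{(3\times 10^{8}~\mathrm{m/s})^{2}} \approx 4\times 10^{-11},$

so the whole beam arrives with an energy deficit of about $Mc^2 \cdot g\Delta h/c^2 = M g\,\Delta h \approx 4~\mathrm{MJ}$ for $M = 1~\mathrm{kg}$ — exactly the Newtonian potential-energy difference — while the colour shift itself is far too small for an eye to notice.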
3. Dec 1, 2012
### OmCheeto
The only way I can solve this is to actually build a device that does what you are saying.
First I would start with a half kilogram each of iron and anti-iron atoms. I would bring these together, atom by atom, into a photonic collimating device pointed towards the ISS. The photons, as you mentioned, would lose energy on the trip due to red shifting. At the ISS, the photons would be sent to a pair production module, where the iron and anti-iron atoms would be separated and stored in special containers. Now the atoms on earth will be attracted to each other via the coulomb force and this energy has to be added to the equation. When the photons generate the atom pairs at the ISS, the velocities will indicate a slightly lower energy level, balancing the whole thing out.
----------------------------
awaiting infraction for creating an overly simplified crackpot fantasy machine in my head in under 60 seconds
4. Dec 1, 2012
### Ibix
Gravitational redshift is right. I recall this being presented to me as a theoretical argument why photons had to be red-shifted when climbing a gravitational gradient. If not, you can construct an energy-generating perpetual motion machine by dropping the brick out of the station and harnessing its kinetic energy before beaming it back up.
5. Dec 1, 2012
### Michael Redei
Thanks a lot for the confirmation. As I thought (and Ibix spelled out), I could perform the experiment my teacher suggested, but I wouldn't end up with 1kg on the ISS, only slightly less. Or (as gneill suggested), I'd have to add some "small change" to make up for what the photons lost on their way up the earth's gravity well.
So no chance of a perpetuum-mobile device then? I'm saddened, but not at all surprised.
6. Dec 1, 2012
### phyzguy
The gravitational redshift has been experimentally verified here on Earth - look up the Pound-Rebka experiment.
7. Dec 1, 2012
### Michael Redei
Thank you, phyzguy. From what I could find about the Pound-Rebka experiment, it seems that for my own thought experiment to work at all, the E-m converter on the ISS would have to be moving upwards by some small speed v relative to the stationary converter on earth, and this speed v would depend on the difference Δh in height between the two machines.
So if I don't get a perpetuum-modile device from all this effort, at least I'd have a fancy altimeter.
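For what it's worth, that "altimeter reading" is tiny but calculable. A rough sketch, using the same assumed constants as in the sketch above plus the 22.5 m height of the Pound-Rebka tower:

```python
# Speed at which a receiver would have to move away from the source so that the
# ordinary Doppler shift cancels the gravitational shift between the Earth's
# surface and a height h (fractional shift ~ delta_phi / c^2, Doppler ~ v / c).
G, M, R, c = 6.674e-11, 5.972e24, 6.371e6, 2.998e8   # SI units (assumed standard values)

def compensating_speed(h):
    delta_phi = G * M * (1.0 / R - 1.0 / (R + h))    # potential difference per unit mass
    return delta_phi / c

print(f"Pound-Rebka tower (22.5 m): v ~ {compensating_speed(22.5):.2e} m/s")   # ~7.4e-7 m/s
print(f"ISS altitude (400 km):      v ~ {compensating_speed(4.0e5):.2e} m/s")  # ~1.2e-2 m/s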
8. Dec 1, 2012
### SteamKing
Staff Emeritus
It's "perpetual motion device".
9. Dec 1, 2012
### OmCheeto
I believe "perpetuum mobile" is the Latin translation. But I don't speak Latin, so I may be wrong.
-----------------------
Perhaps Michael is a doctor. They speak Latin, don't they?
10. Dec 1, 2012
### Michael Redei
I can read and write Latin, but I don't actually speak it. And it's "perpetuum mobile" in Latin. ("modile" was a typo in my earlier post.) |
# 10.4 Matched or paired samples
A college football coach was interested in whether the college's strength development class increased his players' maximum lift (in pounds) on the bench press exercise. He asked four of his players to participate in a study. The amount of weight they could each lift was recorded before they took the strength development class. After completing the class, the amount of weight they could each lift was again measured. The data are as follows:
| Weight (in pounds) | Player 1 | Player 2 | Player 3 | Player 4 |
|---|---|---|---|---|
| Amount of weight lifted prior to the class | 205 | 241 | 338 | 368 |
| Amount of weight lifted after the class | 295 | 252 | 330 | 360 |
The coach wants to know if the strength development class makes his players stronger, on average.
Record the differences data. Calculate the differences by subtracting the amount of weight lifted prior to the class from the weight lifted after completing the class. The data for the differences are: {90, 11, -8, -8}. Assume the differences have a normal distribution.
$\bar{x}_d = 21.3$, $s_d = 46.7$
Using the difference data, this becomes a test of a single __________ (fill in the blank).
Define the random variable: $\bar{X}_d$ = the mean difference in the maximum lift per player.
The distribution for the hypothesis test is a Student's t with 3 degrees of freedom.
$H_0: \mu_d \le 0$, $H_a: \mu_d > 0$
Calculate the test statistic and look up the critical value: the critical value of the Student's t at the 5% level of significance with 3 degrees of freedom is 2.353, and the calculated test statistic is $t = \dfrac{\bar{x}_d}{s_d/\sqrt{n}} = \dfrac{21.3}{46.7/\sqrt{4}} \approx 0.91$.
Decision: At the 5% level of significance, we cannot reject the null hypothesis, because the calculated value of the test statistic (0.91) does not fall in the rejection region (beyond 2.353).
What is the conclusion?
At a 5% level of significance, from the sample data, there is not sufficient evidence to conclude that the strength development class helped to make the players stronger, on average.
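These calculations can be reproduced directly from the difference data. A short sketch in Python, assuming NumPy and SciPy are available (any statistics package would do):

```python
import numpy as np
from scipy import stats

before = np.array([205, 241, 338, 368])
after  = np.array([295, 252, 330, 360])
diff = after - before                      # {90, 11, -8, -8}

n = len(diff)
x_bar = diff.mean()                        # 21.25, rounded to 21.3 in the text
s_d = diff.std(ddof=1)                     # 46.7
t_stat = x_bar / (s_d / np.sqrt(n))        # about 0.91
p_value = stats.t.sf(t_stat, df=n - 1)     # one-tailed, H_a: mu_d > 0

print(f"mean difference = {x_bar:.2f}, s_d = {s_d:.2f}")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.3f}")   # t ~ 0.91, p ~ 0.21 > 0.05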
## Try it
A new prep class was designed to improve SAT test scores. Four students were selected at random. Their scores on two practice exams were recorded, one before the class and one after. The data are recorded in the table below. Are the scores, on average, higher after the class? Test at a 5% level.
| SAT Scores | Student 1 | Student 2 | Student 3 | Student 4 |
|---|---|---|---|---|
| Score before class | 1840 | 1960 | 1920 | 2150 |
| Score after class | 1920 | 2160 | 2200 | 2100 |
The p-value is 0.0874, so we decline to reject the null hypothesis. The data do not provide sufficient evidence that the class improves SAT scores.
## Try it
Five ball players think they can throw the same distance with their dominant hand (throwing) and off-hand (catching hand). The data were collected and recorded in the table below. Conduct a hypothesis test to determine whether the mean difference in distances between the dominant and off-hand is significant. Test at the 5% level.
| | Player 1 | Player 2 | Player 3 | Player 4 | Player 5 |
|---|---|---|---|---|---|
| Dominant hand | 120 | 111 | 135 | 140 | 125 |
| Off-hand | 105 | 109 | 98 | 111 | 99 |
The p-value is 0.0230, so we can reject the null hypothesis. The data show that the players do not throw the same distance with their off-hands as they do with their dominant hands.
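Both Try It answers can be reproduced with a few lines of code. A sketch assuming SciPy is available; note that the throwing-distance question is two-tailed, since it only asks whether the distances differ:

```python
import numpy as np
from scipy import stats

# SAT prep class: H_a is "scores are higher after the class", so one-tailed.
sat_before = np.array([1840, 1960, 1920, 2150])
sat_after  = np.array([1920, 2160, 2200, 2100])
t, p_two_sided = stats.ttest_rel(sat_after, sat_before)
print(f"SAT prep class: one-tailed p = {p_two_sided / 2:.4f}")   # ~0.0874 (t > 0, so halving is valid)

# Throwing distances: H_a is only that the means differ, so two-tailed.
dominant = np.array([120, 111, 135, 140, 125])
off_hand = np.array([105, 109,  98, 111,  99])
t, p_two_sided = stats.ttest_rel(dominant, off_hand)
print(f"Throwing distance: two-tailed p = {p_two_sided:.4f}")    # ~0.0230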
## Chapter review
A hypothesis test for matched or paired samples (t-test) has these characteristics:
• Simple random sampling is used.
• Two measurements (samples) are drawn from the same pair of individuals or objects.
• Differences are calculated from the matched or paired samples, and the differences form the sample that is used for the hypothesis test.
• Either the differences come from a population that is approximately normal, or the number of differences is sufficiently large.
• The test is carried out as a test of a single population mean on the differences, using a Student's t distribution with n − 1 degrees of freedom, where n is the number of differences.
# Showing $G$ is the product of groups of prime order
Let $G$ be a (not necessarily finite) group with the property that for each subgroup $H$ of $G$, there exists a "retraction" of $G$ to $H$ (that is, a group homomorphism from $G$ to $H$ which is the identity on $H$). Then, we claim:
• $G$ is abelian.
• Each element of $G$ has finite order.
• Each element of $G$ has square-free order.
Let $g$ be a nontrivial element of $G$ and consider a retraction $T : G \to \langle g\rangle$ which is the identity on $\langle g\rangle$. As $G/\ker(T)$ is isomorphic to $\operatorname{Im}(T) = \langle g\rangle$, it is cyclic and hence abelian.
Other than this, I don't know how to prove the other claims of the problem. Moreover, a similar problem was asked in a Berkeley Ph.D. exam in 2006, which asks us to prove that:
If $G$ is finite and there is a retraction for each subgroup $H$ of $G$, then $G$ is the product of groups of prime order.
-
There is no "Berkeley Ph.D. exam". You are talking either about the Berkeley Preliminary examinations (part of Ph.D. program, they appear in the book "Berkeley Problems in Mathematics") or a problem from a Qualifying Exam in Berkeley. – Arturo Magidin Sep 29 '10 at 16:35
@Arturo: That's what I meant! – anonymous Sep 29 '10 at 20:04
@Chandru: I mentioned two (mutually exclusive) options. Apparently, you meant one of them. Sadly, you neither specified which one you meant, nor fixed the assertion in the question. – Arturo Magidin Sep 29 '10 at 22:33
Let $H$ be a subgroup of $K$ which is a subgroup of $G$. If there's a retraction of $G$ onto $H$, it restricts to a retraction of $K$ onto $H$. So if a group $G$ has this property (let's say it's "retractible") then each subgroup of $G$ is retractible. Which cyclic groups are retractible?
-
I have worked out some more details. Why do we need the fact "Which cyclic groups are retractible"? :x) – anonymous Sep 29 '10 at 14:51
Now i am stuck up on the last part :)! – anonymous Sep 29 '10 at 14:52
@Chandru: you may perhaps not "need" to know which cyclic groups are retractable, but if you did know which cyclic groups have the desired property, then you would have easily obtained the second and third part of the problems as a consequence. For example, the infinite cyclic group is not retractable, because all nontrivial images are finite, but all nontrivial subgroups are infinite (which immediately gives you the second property). The third part of the problem will follow if you figure out which finite cyclic groups have the desired property. – Arturo Magidin Sep 29 '10 at 23:11
Let $g$ be a nontrivial element of $G$ and consider a retraction $T : G \to \langle g\rangle$ which is the identity on $\langle g\rangle$. As $G/\ker(T)$ is isomorphic to $\operatorname{Im}(T) = \langle g\rangle$, it is cyclic and hence abelian.
Thus $[G,G]$ is contained in $\ker(T)$. Since $g \notin \ker(T)$, $g \notin [G,G]$. As $g$ is an arbitrary nontrivial element of $G$, this means that $[G,G] = \{e\}$; that is, $G$ is abelian.
Now look at any element $g \in G$ and consider a retraction $T:G \to \langle g^2 \rangle$. Since $T(g)$ lies in $\langle g^2 \rangle$, we have $T(g) = g^{2r}$ for some $r$. Also, $T(g^2)=g^2$ then gives $g^{4r}=g^2$; that is, $g^{4r-2} = e$. As $4r-2$ is not zero, we conclude that $g$ has finite order.
- |
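Following Arturo's hint, here is a small brute-force check, a sketch in Python ("retractible" here means that every subgroup of $\mathbb{Z}/n$ admits a retraction). It suggests that the finite cyclic group $\mathbb{Z}/n$ has the property exactly when $n$ is square-free, which is where the square-free-order claim comes from:

```python
from math import gcd

def has_all_retractions(n):
    """True if every subgroup of Z/n admits a retraction (a homomorphism of Z/n
    into the subgroup that is the identity on it).  Every homomorphism
    Z/n -> Z/n is 'multiply by k' for some k."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    for d in divisors:                       # the subgroup <d> = multiples of d mod n
        ok = any(k % d == 0 and (k * d - d) % n == 0 for k in range(n))
        if not ok:
            return False
    return True

def squarefree(n):
    return all(n % (p * p) != 0 for p in range(2, int(n**0.5) + 1))

for n in range(1, 31):
    assert has_all_retractions(n) == squarefree(n)
print("For n up to 30, Z/n admits retractions onto every subgroup iff n is square-free.")
```

The check is only a finite experiment, of course; combined with the restriction argument in the accepted answer (any subgroup of a retractible group is retractible), it points at why each element must have square-free order.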
1999A&AS..139...97K - Astron. Astrophys., Suppl. Ser., 139, 97-103 (1999/October-1)
HI properties of nearby galaxies from a volume-limited sample.
KARACHENTSEV I.D., MAKAROV D.I. and HUCHTMEIER W.K.
Abstract (from CDS):
We consider global HI and optical properties of about three hundred nearby galaxies with V0 < 500 km/s. The majority of them have individual photometric distance estimates. The galaxy sample parameters show some known and some new correlations implying a meaningful dynamic explanation: 1) Over the whole range of diameters, 1-40 kpc, the galaxy standard diameter and rotational velocity follow a nearly linear Tully-Fisher relation, log A25 ∝ (0.99±0.06) log Vm. 2) The HI mass-to-luminosity ratio and the HI mass-to-"total" mass (inside the standard optical diameter) ratio increase systematically from giant galaxies towards dwarfs, reaching maximum values of 5 M/L and 3, respectively. 3) For all the Local Volume galaxies the total mass-to-luminosity ratio lies within a range of [0.2-16] M/L with a median of 3.0 M/L. The M25/L ratio decreases slightly from giant towards dwarf galaxies. 4) The MHI/L and M25/L ratios for the sample galaxies correlate with their mean optical surface brightness, which may be caused by star formation activity in the galaxies. 5) The MHI/L and M25/L ratios are practically independent of the local mass density of surrounding galaxies over a range of densities of about six orders of magnitude. 6) For the LV galaxies, HI mass and angular momentum follow a nearly linear relation: log MHI ∝ (0.99±0.04) log(Vm·A25), as expected for rotating gaseous disks near the threshold of gravitational instability, favourable for active star formation.
Journal keyword(s): galaxies: global HI parameters - galaxies
VizieR on-line data: <Available at CDS (J/A+AS/139/97): appen.dat>
Nomenclature: Appendix: [KMH99] FFF.N N=6. |
# Please help me to find the sum $\sum\limits_{n\geq1} \frac{\sin(nx)}{n^2}$
I must show that $\sum\limits_{n\geq1} \frac{\sin(nx)}{n^2}$ converges $\forall x \in \mathbb{R}$. Then, writing $f(x)=\sum\limits_{n=1}^\infty f_n(x)$ with $f_n(x)=\frac{\sin(nx)}{n^2}$, I must prove that $f(x)$ is continuous for $x\in [0, \pi]$ and that $\int\limits_0^\pi f(x)\,dx=2\sum\limits_{n=1}^\infty \frac{1}{(2n-1)^3}$.
I used the Weierstrass M-test: $\sum\limits_{n\geq1} \frac{\sin(nx)}{n^2}$ converges uniformly because $\left|\frac{\sin(nx)}{n^2}\right|\le \frac{1}{n^2}$ for all $x\in \mathbb{R}$ and $\sum \frac {1}{n^2}$ converges.
I found somewhere that $f(x)=\lim\limits_{n\to\infty}f_n(x)$. Where does this come from? I don't understand it. What does $f(x)$ look like? Without knowing $f(x)$, I don't know how to prove the last equality in the exercise :(
Later edit: If $\sum\limits_{n\geq1}f_n\to f$ uniformly, then $f_n\to f$ (punctual convergence to $f$; I hope this is the correct translation), which means $f_n(x)\to f(x)$ when $n\to \infty$?
I hope someone could help me again...
-
Convergence $\sum_{n\ge 1} f_n \to f$ can not imply $f_n\to f$, otherwise the series would be divergent. – Sasha Aug 11 '11 at 14:54
This is a Clausen function: en.wikipedia.org/wiki/Clausen_function – deoxygerbe Aug 11 '11 at 18:25
"converges pointwise to f" is the standard translation, but "punctual convergence" is perfectly comprehensible as the same thing. – zyx Aug 22 '11 at 6:15
As you observed, by the Weierstrass M-test, the series converges uniformly. Let $$f(x)=\sum_1^\infty \frac{\sin(nx)}{n^2}.$$ The functions $\sin(nx)/n^2$ are continuous, so their "sum" $f(x)$ is continuous. (In general, if we do not have uniform convergence, but only pointwise convergence, the sum need not be continuous.)
The above result about continuity is causing you some confusion. It is based on a theorem that says, or should say, that if $(f_n(x))$ is a sequence of continuous functions that converges uniformly to $f(x)$ in an interval, then $f(x)$ is continuous.
To apply the theorem to a series $\sum_{n \ge 1} g_n(x)$, we just let $f_n(x)=\sum_{k=1}^n g_k(x)$. There is nothing new in this: convergence of infinite series is defined in terms of what happens to partial sums as $n$ goes to infinity.
The additional fact that is needed is that a series uniformly convergent on a finite interval can be integrated term by term on that interval. So we need to calculate $\int_0^\pi \frac{\sin(nx)}{n^2} dx$ and "add up."
More explicitly, $$\int_0^\pi \left(\sum_{n\ge 1} \frac{\sin(nx)}{n^2}\right)\,dx=\sum_{n \ge 1} \left(\int_0^\pi \frac{\sin(nx)}{n^2}\,dx\right).$$ (The above interchange of the order of integration and summation is permitted because of the uniform convergence. It can fail when we do not have uniform convergence.)
The integration is straightforward, for $\sin(nx)$ has $-\dfrac{\cos(nx)}{n}$ as an antiderivative.
Thus $$\int_0^\pi \frac{\sin nx}{n^2}dx=\frac{1}{n^3}(-\cos(n\pi)+1).$$ If $n$ is even, $\cos(n\pi)=1$, so the integral is $0$, and contributes nothing to the ultimate sum.
If $n$ is odd, then $\cos(n\pi)=-1$, so the integral is $\dfrac{2}{n^3}$. Since $n$ is odd, let $n=2m-1$. Then $$\int_0^\pi \frac{\sin nx}{n^2}dx=\frac{2}{(2m-1)^3}.$$ We conclude that $$\int_0^\pi f(x) \;dx=\sum_1^\infty \frac{2}{(2m-1)^3}.$$ (In your post, the final variable of summation is $n$, but that makes no difference.)
Interesting fact: The sum that we end up with is closely related to $\sum_{n \ge 1} \frac{1}{n^3}$. This last sum, also known as $\zeta(3)$, was proved to be irrational by Apery about $30$ years ago, using an elementary (but not easy) argument. It was one of those rare mathematical results that even gets reported in the mainstream press.
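As a quick numerical sanity check of the final identity (a sketch only; both the series defining $f$ and the odd-cube series are truncated, so the agreement is to a few decimal places):

```python
import numpy as np

# Left-hand side: integrate the truncated series f(x) = sum sin(n x)/n^2 over [0, pi].
N = 300                                   # number of series terms kept (an arbitrary cutoff)
x = np.linspace(0.0, np.pi, 20001)
n = np.arange(1, N + 1)
f = (np.sin(np.outer(x, n)) / n**2).sum(axis=1)
dx = x[1] - x[0]
lhs = np.sum((f[:-1] + f[1:]) / 2) * dx   # trapezoidal rule

# Right-hand side: 2 * sum over m >= 1 of 1/(2m-1)^3, truncated.
m = np.arange(1, 100001)
rhs = 2.0 * np.sum(1.0 / (2 * m - 1)**3)

print(f"integral ~ {lhs:.4f}, series ~ {rhs:.4f}")   # both ~ 2.1036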
-
Thank you very much André Nicolas. The first part of your explanations helps me a lot. – NumLock Aug 11 '11 at 15:18
In the last part of your explanation you wrote cos(nπ) instead of sin(nπ) :P Please change that. Thank you again for your help! – NumLock Aug 11 '11 at 15:59
@Numlock: Thanks for spotting the inconsistency. It really is $\cos(n\pi)$. But the function I am integrating is $\sin(nx)$, and in a couple of places it was written as $\cos(nx)$. Fixed! Someday I will write an answer free of things like that. – André Nicolas Aug 11 '11 at 16:08
Perhaps for the sake of completeness we could include the the interchange of limits is justified by uniform convergence since we are integrating over a finite interval. Over infinite integrals, uniform convergence alone does not let us switch the limits. I have not been able to forget this detail since someone pointed out that my solution to a previous Putnam problem was incorrect because of this! – Ragib Zaman Aug 22 '11 at 3:37
@Ragib Zaman: Thank you for the suggested change, I will try to change wording so that no one is led astray on a Putnam! – André Nicolas Aug 22 '11 at 5:17
Differentiate twice, get the series for $-1/(2 \tan(x/2))$. So now integrate this twice... Of course that is not elementary, but still...
- |